7.115. libical
7.115. libical 7.115.1. RHBA-2013:0471 - libical bug fix update Updated libical packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libical packages provide a reference implementation of the iCalendar data type and serialization format used in dozens of calendaring and scheduling products. Bug Fix BZ#664332 The libical packages can be configured to abort when parsing improperly formatted iCalendar data, primarily useful for testing and debugging. In Red Hat Enterprise Linux this behavior is disabled, but some parts of the libical source code were improperly checking for this option. Consequently, the library aborted even if configured not to do so. The underlying source code has been modified and libical no longer aborts in the described scenario. All users of libical are advised to upgrade to these updated packages, which fix this bug.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libical
Chapter 115. KafkaUserTlsExternalClientAuthentication schema reference
Chapter 115. KafkaUserTlsExternalClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserTlsExternalClientAuthentication type from KafkaUserTlsClientAuthentication and KafkaUserScramSha512ClientAuthentication. It must have the value tls-external for the type KafkaUserTlsExternalClientAuthentication. Property: type. Property type: string. Description: Must be tls-external.
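As a minimal sketch of how this schema is typically used (not part of the schema reference itself; the resource name, cluster label, and the kafka.strimzi.io/v1beta2 API version are assumptions based on current Streams for Apache Kafka releases), a KafkaUser custom resource that selects this authentication type might look like the following:

```
# Hypothetical KafkaUser manifest selecting the tls-external authentication type.
# The resource name and strimzi.io/cluster label value are illustrative.
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls-external
EOF
```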
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkausertlsexternalclientauthentication-reference
4.5.3. Network Configuration
4.5.3. Network Configuration Clicking on the Network tab displays the Network Configuration page, which provides an interface for configuring the network transport type. You can use this tab to select one of the following options: UDP Multicast and Let Cluster Choose the Multicast Address This is the default setting. With this option selected, the Red Hat High Availability Add-On software creates a multicast address based on the cluster ID. It generates the lower 16 bits of the address and appends them to the upper portion of the address according to whether the IP protocol is IPv4 or IPv6: For IPv4 - The address formed is 239.192. plus the lower 16 bits generated by Red Hat High Availability Add-On software. For IPv6 - The address formed is FF15:: plus the lower 16 bits generated by Red Hat High Availability Add-On software. Note The cluster ID is a unique identifier that cman generates for each cluster. To view the cluster ID, run the cman_tool status command on a cluster node. UDP Multicast and Specify the Multicast Address Manually If you need to use a specific multicast address, select this option and enter a multicast address into the Multicast Address text box. If you do specify a multicast address, you should use the 239.192.x.x series (or FF15:: for IPv6) that cman uses. Otherwise, using a multicast address outside that range may cause unpredictable results. For example, using 224.0.0.x (which is "All hosts on the network") may not be routed correctly, or may not be routed at all by some hardware. If you specify or modify a multicast address, you must restart the cluster for this to take effect. For information on starting and stopping a cluster with Conga, see Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters". Note If you specify a multicast address, make sure that you check the configuration of routers that cluster packets pass through. Some routers may take a long time to learn addresses, seriously impacting cluster performance. UDP Unicast (UDPU) As of the Red Hat Enterprise Linux 6.2 release, the nodes in a cluster can communicate with each other using the UDP Unicast transport mechanism. It is recommended, however, that you use IP multicasting for the cluster network. UDP Unicast is an alternative that can be used when IP multicasting is not available. Using UDP Unicast is not recommended for GFS2 deployments. Click Apply. When changing the transport type, a cluster restart is necessary for the changes to take effect.
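As a rough illustration of the address derivation described above (the cluster ID value below is made up for the example), you can read the cluster ID from cman_tool status and compute the corresponding default IPv4 multicast address from its lower 16 bits:

```
# Show the cluster ID that cman generated for this cluster.
cman_tool status | grep "Cluster Id"
# Example output (illustrative): Cluster Id: 28604

# The lower 16 bits of 28604 are 0x6FBC, so the generated address is
# 239.192. followed by those two octets:
printf '239.192.%d.%d\n' $(( 28604 >> 8 & 255 )) $(( 28604 & 255 ))
# 239.192.111.188
```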
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-network-conga-ca
7.154. pcp
7.154. pcp 7.154.1. RHBA-2015:1300 - pcp bug fix and enhancement update Updated pcp packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Performance Co-Pilot (PCP) is a suite of tools, services, and libraries for acquisition, archiving, and analysis of system-level performance measurements. Its light-weight, distributed architecture makes it particularly well-suited to centralized analysis of complex systems. Note The pcp packages have been upgraded to upstream version 3.10.3, which provides numerous bug fixes and enhancements over the previous version. (BZ#1158681) Bug Fixes BZ#1158681 New kernel metrics: memory, vCPU, device mapper, nfs4.1 operations, more per-cgroup metrics New Performance Metrics Domain Agents (PMDA): NVIDIA, Linux, 389 Directory Server, hardware event counters, CIFS, activeMQ New vCPU and MemAvailable pmchart views New pmiostat, pcp-dmcache, pcp2graphite, ganglia2pcp tools Nanosecond resolution event timestamps The pmParseUnitsStr() function added to the Performance Metrics Application Programming Interface (PMAPI) ACAO header JSON responses added to the Performance Metrics Web Daemon (pmwebd) The "ruleset" extensions to the pmie language Support for Python v3 and Python API extensions Support for xz compression for daily archives Support for long form of command-line options Support for active service probing in libpcp Support for new sysstat versions and sar2pcp fixes Direct support for PCP archive in the pmatop utility BZ#1196540 Previously, on IBM S/390 platforms, unanticipated formatting in the /proc/cpuinfo file negatively affected the PCP Linux kernel PMDA. As a consequence, the agent terminated unexpectedly with a segmentation fault when accessing certain processor-related performance metrics. This update fixes parsing of /proc/cpuinfo for IBM S/390, and all PCP processor metrics are now fully functional and robust on this platform. BZ#1131022 Previously, the PCP pmlogger daemon start script started the daemon only if the pmlogger service was enabled by the "chkconfig on" command. Consequently, the daemon silently failed to start when the service was disabled. With this update, additional diagnostics have been added to the start script. Now, when attempting to start the pmlogger daemon with the pmlogger service disabled, the user is properly informed and given instructions on how to eliminate the problem. Users of pcp are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
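To illustrate the scenario described for BZ#1131022 (a sketch using standard Red Hat Enterprise Linux 6 service management, not commands taken from the erratum itself), the pmlogger daemon only starts when its service is enabled:

```
# Enable the pmlogger service so its start script will actually launch the daemon,
# then start it and confirm that it is running (RHEL 6 SysV-style commands).
chkconfig pmlogger on
service pmlogger start
service pmlogger status
```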
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-pcp
Chapter 7. DMN model execution
Chapter 7. DMN model execution You can create or import DMN files in your Red Hat Process Automation Manager project using Business Central or package the DMN files as part of your project knowledge JAR (KJAR) file without Business Central. After you implement your DMN files in your Red Hat Process Automation Manager project, you can execute the DMN decision service by deploying the KIE container that contains it to KIE Server for remote access and interacting with the container using the KIE Server REST API. For information about including external DMN assets with your project packaging and deployment method, see Packaging and deploying a Red Hat Process Automation Manager project. 7.1. Executing a DMN service using the KIE Server REST API Directly interacting with the REST endpoints of KIE Server provides the most separation between the calling code and the decision logic definition. The calling code is completely free of direct dependencies, and you can implement it in an entirely different development platform such as Node.js or .NET. The examples in this section demonstrate Nix-style curl commands but provide relevant information to adapt to any REST client. When you use a REST endpoint of KIE Server, the best practice is to define a domain object POJO Java class, annotated with standard KIE Server marshalling annotations. For example, the following code uses a domain object Person class that is annotated properly: Example POJO Java class @javax.xml.bind.annotation.XmlAccessorType(javax.xml.bind.annotation.XmlAccessType.FIELD) public class Person implements java.io.Serializable { static final long serialVersionUID = 1L; private java.lang.String id; private java.lang.String name; @javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter(org.kie.internal.jaxb.LocalDateXmlAdapter.class) private java.time.LocalDate dojoining; public Person() { } public java.lang.String getId() { return this.id; } public void setId(java.lang.String id) { this.id = id; } public java.lang.String getName() { return this.name; } public void setName(java.lang.String name) { this.name = name; } public java.time.LocalDate getDojoining() { return this.dojoining; } public void setDojoining(java.time.LocalDate dojoining) { this.dojoining = dojoining; } public Person(java.lang.String id, java.lang.String name, java.time.LocalDate dojoining) { this.id = id; this.name = name; this.dojoining = dojoining; } } For more information about the KIE Server REST API, see Interacting with Red Hat Process Automation Manager using KIE APIs. Prerequisites KIE Server is installed and configured, including a known user name and credentials for a user with the kie-server role. For installation options, see Planning a Red Hat Process Automation Manager installation. You have built the DMN project as a KJAR artifact and deployed it to KIE Server. For more information about project packaging and deployment and executable models, see Packaging and deploying a Red Hat Process Automation Manager project. You have the ID of the KIE container containing the DMN model. If more than one model is present, you must also know the model namespace and model name of the relevant model. Procedure Determine the base URL for accessing the KIE Server REST API endpoints.
This requires knowing the following values (with the default local deployment values as an example): Host ( localhost ) Port ( 8080 ) Root context ( kie-server ) Base REST path ( services/rest/ ) Example base URL in local deployment for the traffic violations project: http://localhost:8080/kie-server/services/rest/server/containers/traffic-violation_1.0.0-SNAPSHOT Determine user authentication requirements. When users are defined directly in the KIE Server configuration, HTTP Basic authentication is used and requires the user name and password. Successful requests require that the user have the kie-server role. The following example demonstrates how to add credentials to a curl request: If KIE Server is configured with Red Hat Single Sign-On, the request must include a bearer token: curl -H "Authorization: bearer USDTOKEN" <request> Specify the format of the request and response. The REST API endpoints work with both JSON and XML formats and are set using request headers: JSON XML Optional: Query the container for a list of deployed decision models: [GET] server/containers/{containerId}/dmn Example curl request: Sample XML output: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <response type="SUCCESS" msg="Ok models successfully retrieved from container 'traffic-violation_1.0.0-SNAPSHOT'"> <dmn-model-info-list> <model> <model-namespace>https://kiegroup.org/dmn/_60b01f4d-e407-43f7-848e-258723b5fac8</model-namespace> <model-name>Traffic Violation</model-name> <model-id>_2CD7D1AA-BD84-4B43-AD21-B0342ADE655A</model-id> <decisions> <dmn-decision-info> <decision-id>_23428EE8-DC8B-4067-8E67-9D7C53EC975F</decision-id> <decision-name>Fine</decision-name> </dmn-decision-info> <dmn-decision-info> <decision-id>_B5EEE2B1-915C-44DC-BE43-C244DC066FD8</decision-id> <decision-name>Should the driver be suspended?</decision-name> </dmn-decision-info> </decisions> <inputs> <dmn-inputdata-info> <inputdata-id>_CEB959CD-3638-4A87-93BA-03CD0FB63AE3</inputdata-id> <inputdata-name>Violation</inputdata-name> <inputdata-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>tViolation</local-part> <prefix></prefix> </inputdata-typeref> </dmn-inputdata-info> <dmn-inputdata-info> <inputdata-id>_B0E810E6-7596-430A-B5CF-67CE16863B6C</inputdata-id> <inputdata-name>Driver</inputdata-name> <inputdata-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>tDriver</local-part> <prefix></prefix> </inputdata-typeref> </dmn-inputdata-info> </inputs> <itemdefinitions> <dmn-itemdefinition-info> <itemdefinition-id>_9C758F4A-7D72-4D0F-B63F-2F5B8405980E</itemdefinition-id> <itemdefinition-name>tViolation</itemdefinition-name> <itemdefinition-itemcomponent> <dmn-itemdefinition-info> <itemdefinition-id>_0B6FF1E2-ACE9-4FB3-876B-5BB30B88009B</itemdefinition-id> <itemdefinition-name>Code</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60b01f4d-e407-43f7-848e-258723b5fac8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_27A5DA18-3CA7-4C06-81B7-CF7F2F050E29</itemdefinition-id> <itemdefinition-name>date</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> 
<local-part>date</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_8961969A-8A80-4F12-B568-346920C0F038</itemdefinition-id> <itemdefinition-name>type</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_7450F12A-3E95-4D5E-8DCE-2CB1FAC2BDD4</itemdefinition-id> <itemdefinition-name>speed limit</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60b01f4d-e407-43f7-848e-258723b5fac8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_0A9A6F26-6C14-414D-A9BF-765E5850429A</itemdefinition-id> <itemdefinition-name>Actual Speed</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinition-itemcomponent> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_13C7EFD8-B85C-43BF-94D3-14FABE39A4A0</itemdefinition-id> <itemdefinition-name>tDriver</itemdefinition-name> <itemdefinition-itemcomponent> <dmn-itemdefinition-info> <itemdefinition-id>_EC11744C-4160-4549-9610-2C757F40DFE8</itemdefinition-id> <itemdefinition-name>Name</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_E95BE3DB-4A51-4658-A166-02493EAAC9D2</itemdefinition-id> <itemdefinition-name>Age</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_7B3023E2-BC44-4BF3-BF7E-773C240FB9AD</itemdefinition-id> <itemdefinition-name>State</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_3D4B49DD-700C-4925-99A7-3B2B873F7800</itemdefinition-id> <itemdefinition-name>city</itemdefinition-name> <itemdefinition-typeref> 
<namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_B37C49E8-B0D9-4B20-9DC6-D655BB1CA7B1</itemdefinition-id> <itemdefinition-name>Points</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinition-itemcomponent> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_A4077C7E-B57A-4DEE-9C65-7769636316F3</itemdefinition-id> <itemdefinition-name>tFine</itemdefinition-name> <itemdefinition-itemcomponent> <dmn-itemdefinition-info> <itemdefinition-id>_79B152A8-DE83-4001-B88B-52DFF0D73B2D</itemdefinition-id> <itemdefinition-name>Amount</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_D7CB5F9C-9D55-48C2-83EE-D47045EC90D0</itemdefinition-id> <itemdefinition-name>Points</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinition-itemcomponent> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinitions> <decisionservices/> </model> </dmn-model-info-list> </response> Sample JSON output: { "type" : "SUCCESS", "msg" : "OK models successfully retrieved from container 'Traffic-Violation_1.0.0-SNAPSHOT'", "result" : { "dmn-model-info-list" : { "models" : [ { "model-namespace" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "model-name" : "Traffic Violation", "model-id" : "_2CD7D1AA-BD84-4B43-AD21-B0342ADE655A", "decisions" : [ { "decision-id" : "_23428EE8-DC8B-4067-8E67-9D7C53EC975F", "decision-name" : "Fine" }, { "decision-id" : "_B5EEE2B1-915C-44DC-BE43-C244DC066FD8", "decision-name" : "Should the driver be suspended?" 
} ], "inputs" : [ { "inputdata-id" : "_CEB959CD-3638-4A87-93BA-03CD0FB63AE3", "inputdata-name" : "Violation", "inputdata-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "tViolation", "prefix" : "" } }, { "inputdata-id" : "_B0E810E6-7596-430A-B5CF-67CE16863B6C", "inputdata-name" : "Driver", "inputdata-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "tDriver", "prefix" : "" } } ], "itemDefinitions" : [ { "itemdefinition-id" : "_13C7EFD8-B85C-43BF-94D3-14FABE39A4A0", "itemdefinition-name" : "tDriver", "itemdefinition-typeRef" : null, "itemdefinition-itemComponent" : [ { "itemdefinition-id" : "_EC11744C-4160-4549-9610-2C757F40DFE8", "itemdefinition-name" : "Name", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "string", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_E95BE3DB-4A51-4658-A166-02493EAAC9D2", "itemdefinition-name" : "Age", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "number", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_7B3023E2-BC44-4BF3-BF7E-773C240FB9AD", "itemdefinition-name" : "State", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "string", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_3D4B49DD-700C-4925-99A7-3B2B873F7800", "itemdefinition-name" : "City", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "string", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_B37C49E8-B0D9-4B20-9DC6-D655BB1CA7B1", "itemdefinition-name" : "Points", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "number", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false } ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_A4077C7E-B57A-4DEE-9C65-7769636316F3", "itemdefinition-name" : "tFine", "itemdefinition-typeRef" : null, "itemdefinition-itemComponent" : [ { "itemdefinition-id" : "_79B152A8-DE83-4001-B88B-52DFF0D73B2D", "itemdefinition-name" : "Amount", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "number", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_D7CB5F9C-9D55-48C2-83EE-D47045EC90D0", "itemdefinition-name" : "Points", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "number", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false } ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_9C758F4A-7D72-4D0F-B63F-2F5B8405980E", "itemdefinition-name" : "tViolation", "itemdefinition-typeRef" : null, "itemdefinition-itemComponent" : [ { "itemdefinition-id" : "_0B6FF1E2-ACE9-4FB3-876B-5BB30B88009B", "itemdefinition-name" : "Code", 
"itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "string", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_27A5DA18-3CA7-4C06-81B7-CF7F2F050E29", "itemdefinition-name" : "Date", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "date", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_8961969A-8A80-4F12-B568-346920C0F038", "itemdefinition-name" : "Type", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "string", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_7450F12A-3E95-4D5E-8DCE-2CB1FAC2BDD4", "itemdefinition-name" : "Speed Limit", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "number", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false }, { "itemdefinition-id" : "_0A9A6F26-6C14-414D-A9BF-765E5850429A", "itemdefinition-name" : "Actual Speed", "itemdefinition-typeRef" : { "namespace-uri" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "local-part" : "number", "prefix" : "" }, "itemdefinition-itemComponent" : [ ], "itemdefinition-isCollection" : false } ], "itemdefinition-isCollection" : false } ], "decisionServices" : [ ] } ] } } } Execute the model: [POST] server/containers/{containerId}/dmn Note The attribute model-namespace is automatically generated and is different for every user. Ensure that the model-namespace and model-name attributes that you use match those of the deployed model. 
Example curl request: Example JSON request: { "model-namespace" : "https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8", "model-name" : "Traffic Violation", "dmn-context" : { "Driver" : { "Points" : 15 }, "Violation" : { "Type" : "speed", "Actual Speed" : 135, "Speed Limit" : 100 } } } Example XML request (JAXB format): <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <dmn-evaluation-context> <dmn-context xsi:type="jaxbListWrapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Violation"> <value xsi:type="jaxbListWrapper"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Type"> <value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">speed</value> </element> <element xsi:type="jaxbStringObjectPair" key="Speed Limit"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">100</value> </element> <element xsi:type="jaxbStringObjectPair" key="Actual Speed"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">135</value> </element> </value> </element> <element xsi:type="jaxbStringObjectPair" key="Driver"> <value xsi:type="jaxbListWrapper"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Points"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">15</value> </element> </value> </element> </dmn-context> </dmn-evaluation-context> Note Regardless of the request format, the request requires the following elements: Model namespace Model name Context object containing input values Example JSON response: { "type": "SUCCESS", "msg": "OK from container 'Traffic-Violation_1.0.0-SNAPSHOT'", "result": { "dmn-evaluation-result": { "messages": [], "model-namespace": "https://kiegroup.org/dmn/_7D8116DE-ADF5-4560-A116-FE1A2EAFFF48", "model-name": "Traffic Violation", "decision-name": [], "dmn-context": { "Violation": { "Type": "speed", "Speed Limit": 100, "Actual Speed": 135 }, "Should Driver be Suspended?": "Yes", "Driver": { "Points": 15 }, "Fine": { "Points": 7, "Amount": 1000 } }, "decision-results": { "_E1AF5AC2-E259-455C-96E4-596E30D3BC86": { "messages": [], "decision-id": "_E1AF5AC2-E259-455C-96E4-596E30D3BC86", "decision-name": "Should the Driver be Suspended?", "result": "Yes", "status": "SUCCEEDED" }, "_D7F02CE0-AF50-4505-AB80-C7D6DE257920": { "messages": [], "decision-id": "_D7F02CE0-AF50-4505-AB80-C7D6DE257920", "decision-name": "Fine", "result": { "Points": 7, "Amount": 1000 }, "status": "SUCCEEDED" } } } } } Example XML (JAXB format) response: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <response type="SUCCESS" msg="OK from container 'Traffic_1.0.0-SNAPSHOT'"> <dmn-evaluation-result> <model-namespace>https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF</model-namespace> <model-name>Traffic Violation</model-name> <dmn-context xsi:type="jaxbListWrapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Violation"> <value xsi:type="jaxbListWrapper"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Type"> <value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">speed</value> </element> <element xsi:type="jaxbStringObjectPair" key="Speed Limit"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">100</value> </element> <element xsi:type="jaxbStringObjectPair" key="Actual Speed"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">135</value> </element> 
</value> </element> <element xsi:type="jaxbStringObjectPair" key="Driver"> <value xsi:type="jaxbListWrapper"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Points"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">15</value> </element> </value> </element> <element xsi:type="jaxbStringObjectPair" key="Fine"> <value xsi:type="jaxbListWrapper"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Points"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">7</value> </element> <element xsi:type="jaxbStringObjectPair" key="Amount"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">1000</value> </element> </value> </element> <element xsi:type="jaxbStringObjectPair" key="Should the driver be suspended?"> <value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">Yes</value> </element> </dmn-context> <messages/> <decisionResults> <entry> <key>_4055D956-1C47-479C-B3F4-BAEB61F1C929</key> <value> <decision-id>_4055D956-1C47-479C-B3F4-BAEB61F1C929</decision-id> <decision-name>Fine</decision-name> <result xsi:type="jaxbListWrapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <type>MAP</type> <element xsi:type="jaxbStringObjectPair" key="Points"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">7</value> </element> <element xsi:type="jaxbStringObjectPair" key="Amount"> <value xsi:type="xs:decimal" xmlns:xs="http://www.w3.org/2001/XMLSchema">1000</value> </element> </result> <messages/> <status>SUCCEEDED</status> </value> </entry> <entry> <key>_8A408366-D8E9-4626-ABF3-5F69AA01F880</key> <value> <decision-id>_8A408366-D8E9-4626-ABF3-5F69AA01F880</decision-id> <decision-name>Should the driver be suspended?</decision-name> <result xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">Yes</result> <messages/> <status>SUCCEEDED</status> </value> </entry> </decisionResults> </dmn-evaluation-result> </response>
[ "@javax.xml.bind.annotation.XmlAccessorType(javax.xml.bind.annotation.XmlAccessType.FIELD) public class Person implements java.io.Serializable { static final long serialVersionUID = 1L; private java.lang.String id; private java.lang.String name; @javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter(org.kie.internal.jaxb.LocalDateXmlAdapter.class) private java.time.LocalDate dojoining; public Person() { } public java.lang.String getId() { return this.id; } public void setId(java.lang.String id) { this.id = id; } public java.lang.String getName() { return this.name; } public void setName(java.lang.String name) { this.name = name; } public java.time.LocalDate getDojoining() { return this.dojoining; } public void setDojoining(java.time.LocalDate dojoining) { this.dojoining = dojoining; } public Person(java.lang.String id, java.lang.String name, java.time.LocalDate dojoining) { this.id = id; this.name = name; this.dojoining = dojoining; } }", "mvn clean install", "curl -u username:password <request>", "curl -H \"Authorization: bearer USDTOKEN\" <request>", "curl -H \"accept: application/json\" -H \"content-type: application/json\"", "curl -H \"accept: application/xml\" -H \"content-type: application/xml\"", "curl -u wbadmin:wbadmin -H \"accept: application/xml\" -X GET \"http://localhost:8080/kie-server/services/rest/server/containers/traffic-violation_1.0.0-SNAPSHOT/dmn\"", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <response type=\"SUCCESS\" msg=\"Ok models successfully retrieved from container 'traffic-violation_1.0.0-SNAPSHOT'\"> <dmn-model-info-list> <model> <model-namespace>https://kiegroup.org/dmn/_60b01f4d-e407-43f7-848e-258723b5fac8</model-namespace> <model-name>Traffic Violation</model-name> <model-id>_2CD7D1AA-BD84-4B43-AD21-B0342ADE655A</model-id> <decisions> <dmn-decision-info> <decision-id>_23428EE8-DC8B-4067-8E67-9D7C53EC975F</decision-id> <decision-name>Fine</decision-name> </dmn-decision-info> <dmn-decision-info> <decision-id>_B5EEE2B1-915C-44DC-BE43-C244DC066FD8</decision-id> <decision-name>Should the driver be suspended?</decision-name> </dmn-decision-info> </decisions> <inputs> <dmn-inputdata-info> <inputdata-id>_CEB959CD-3638-4A87-93BA-03CD0FB63AE3</inputdata-id> <inputdata-name>Violation</inputdata-name> <inputdata-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>tViolation</local-part> <prefix></prefix> </inputdata-typeref> </dmn-inputdata-info> <dmn-inputdata-info> <inputdata-id>_B0E810E6-7596-430A-B5CF-67CE16863B6C</inputdata-id> <inputdata-name>Driver</inputdata-name> <inputdata-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>tDriver</local-part> <prefix></prefix> </inputdata-typeref> </dmn-inputdata-info> </inputs> <itemdefinitions> <dmn-itemdefinition-info> <itemdefinition-id>_9C758F4A-7D72-4D0F-B63F-2F5B8405980E</itemdefinition-id> <itemdefinition-name>tViolation</itemdefinition-name> <itemdefinition-itemcomponent> <dmn-itemdefinition-info> <itemdefinition-id>_0B6FF1E2-ACE9-4FB3-876B-5BB30B88009B</itemdefinition-id> <itemdefinition-name>Code</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60b01f4d-e407-43f7-848e-258723b5fac8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> 
<itemdefinition-id>_27A5DA18-3CA7-4C06-81B7-CF7F2F050E29</itemdefinition-id> <itemdefinition-name>date</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>date</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_8961969A-8A80-4F12-B568-346920C0F038</itemdefinition-id> <itemdefinition-name>type</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_7450F12A-3E95-4D5E-8DCE-2CB1FAC2BDD4</itemdefinition-id> <itemdefinition-name>speed limit</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60b01f4d-e407-43f7-848e-258723b5fac8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_0A9A6F26-6C14-414D-A9BF-765E5850429A</itemdefinition-id> <itemdefinition-name>Actual Speed</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinition-itemcomponent> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_13C7EFD8-B85C-43BF-94D3-14FABE39A4A0</itemdefinition-id> <itemdefinition-name>tDriver</itemdefinition-name> <itemdefinition-itemcomponent> <dmn-itemdefinition-info> <itemdefinition-id>_EC11744C-4160-4549-9610-2C757F40DFE8</itemdefinition-id> <itemdefinition-name>Name</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_E95BE3DB-4A51-4658-A166-02493EAAC9D2</itemdefinition-id> <itemdefinition-name>Age</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_7B3023E2-BC44-4BF3-BF7E-773C240FB9AD</itemdefinition-id> <itemdefinition-name>State</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> 
<itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_3D4B49DD-700C-4925-99A7-3B2B873F7800</itemdefinition-id> <itemdefinition-name>city</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>string</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_B37C49E8-B0D9-4B20-9DC6-D655BB1CA7B1</itemdefinition-id> <itemdefinition-name>Points</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinition-itemcomponent> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_A4077C7E-B57A-4DEE-9C65-7769636316F3</itemdefinition-id> <itemdefinition-name>tFine</itemdefinition-name> <itemdefinition-itemcomponent> <dmn-itemdefinition-info> <itemdefinition-id>_79B152A8-DE83-4001-B88B-52DFF0D73B2D</itemdefinition-id> <itemdefinition-name>Amount</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> <dmn-itemdefinition-info> <itemdefinition-id>_D7CB5F9C-9D55-48C2-83EE-D47045EC90D0</itemdefinition-id> <itemdefinition-name>Points</itemdefinition-name> <itemdefinition-typeref> <namespace-uri>https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8</namespace-uri> <local-part>number</local-part> <prefix></prefix> </itemdefinition-typeref> <itemdefinition-itemcomponent/> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinition-itemcomponent> <itemdefinition-iscollection>false</itemdefinition-iscollection> </dmn-itemdefinition-info> </itemdefinitions> <decisionservices/> </model> </dmn-model-info-list> </response>", "{ \"type\" : \"SUCCESS\", \"msg\" : \"OK models successfully retrieved from container 'Traffic-Violation_1.0.0-SNAPSHOT'\", \"result\" : { \"dmn-model-info-list\" : { \"models\" : [ { \"model-namespace\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"model-name\" : \"Traffic Violation\", \"model-id\" : \"_2CD7D1AA-BD84-4B43-AD21-B0342ADE655A\", \"decisions\" : [ { \"decision-id\" : \"_23428EE8-DC8B-4067-8E67-9D7C53EC975F\", \"decision-name\" : \"Fine\" }, { \"decision-id\" : \"_B5EEE2B1-915C-44DC-BE43-C244DC066FD8\", \"decision-name\" : \"Should the driver be suspended?\" } ], \"inputs\" : [ { \"inputdata-id\" : \"_CEB959CD-3638-4A87-93BA-03CD0FB63AE3\", \"inputdata-name\" : \"Violation\", \"inputdata-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"tViolation\", \"prefix\" : \"\" } }, { \"inputdata-id\" : \"_B0E810E6-7596-430A-B5CF-67CE16863B6C\", \"inputdata-name\" : \"Driver\", \"inputdata-typeRef\" : { \"namespace-uri\" : 
\"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"tDriver\", \"prefix\" : \"\" } } ], \"itemDefinitions\" : [ { \"itemdefinition-id\" : \"_13C7EFD8-B85C-43BF-94D3-14FABE39A4A0\", \"itemdefinition-name\" : \"tDriver\", \"itemdefinition-typeRef\" : null, \"itemdefinition-itemComponent\" : [ { \"itemdefinition-id\" : \"_EC11744C-4160-4549-9610-2C757F40DFE8\", \"itemdefinition-name\" : \"Name\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"string\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_E95BE3DB-4A51-4658-A166-02493EAAC9D2\", \"itemdefinition-name\" : \"Age\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"number\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_7B3023E2-BC44-4BF3-BF7E-773C240FB9AD\", \"itemdefinition-name\" : \"State\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"string\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_3D4B49DD-700C-4925-99A7-3B2B873F7800\", \"itemdefinition-name\" : \"City\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"string\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_B37C49E8-B0D9-4B20-9DC6-D655BB1CA7B1\", \"itemdefinition-name\" : \"Points\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"number\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false } ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_A4077C7E-B57A-4DEE-9C65-7769636316F3\", \"itemdefinition-name\" : \"tFine\", \"itemdefinition-typeRef\" : null, \"itemdefinition-itemComponent\" : [ { \"itemdefinition-id\" : \"_79B152A8-DE83-4001-B88B-52DFF0D73B2D\", \"itemdefinition-name\" : \"Amount\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"number\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_D7CB5F9C-9D55-48C2-83EE-D47045EC90D0\", \"itemdefinition-name\" : \"Points\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"number\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false } ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_9C758F4A-7D72-4D0F-B63F-2F5B8405980E\", \"itemdefinition-name\" : \"tViolation\", \"itemdefinition-typeRef\" : null, \"itemdefinition-itemComponent\" : [ { \"itemdefinition-id\" : \"_0B6FF1E2-ACE9-4FB3-876B-5BB30B88009B\", \"itemdefinition-name\" : \"Code\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"string\", \"prefix\" : \"\" 
}, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_27A5DA18-3CA7-4C06-81B7-CF7F2F050E29\", \"itemdefinition-name\" : \"Date\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"date\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_8961969A-8A80-4F12-B568-346920C0F038\", \"itemdefinition-name\" : \"Type\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"string\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_7450F12A-3E95-4D5E-8DCE-2CB1FAC2BDD4\", \"itemdefinition-name\" : \"Speed Limit\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"number\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false }, { \"itemdefinition-id\" : \"_0A9A6F26-6C14-414D-A9BF-765E5850429A\", \"itemdefinition-name\" : \"Actual Speed\", \"itemdefinition-typeRef\" : { \"namespace-uri\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"local-part\" : \"number\", \"prefix\" : \"\" }, \"itemdefinition-itemComponent\" : [ ], \"itemdefinition-isCollection\" : false } ], \"itemdefinition-isCollection\" : false } ], \"decisionServices\" : [ ] } ] } } }", "curl -u wbadmin:wbadmin -H \"accept: application/json\" -H \"content-type: application/json\" -X POST \"http://localhost:8080/kie-server/services/rest/server/containers/traffic-violation_1.0.0-SNAPSHOT/dmn\" -d \"{ \\\"model-namespace\\\" : \\\"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\\\", \\\"model-name\\\" : \\\"Traffic Violation\\\", \\\"dmn-context\\\" : {\\\"Driver\\\" : {\\\"Points\\\" : 15}, \\\"Violation\\\" : {\\\"Type\\\" : \\\"speed\\\", \\\"Actual Speed\\\" : 135, \\\"Speed Limit\\\" : 100}}}\"", "{ \"model-namespace\" : \"https://kiegroup.org/dmn/_60B01F4D-E407-43F7-848E-258723B5FAC8\", \"model-name\" : \"Traffic Violation\", \"dmn-context\" : { \"Driver\" : { \"Points\" : 15 }, \"Violation\" : { \"Type\" : \"speed\", \"Actual Speed\" : 135, \"Speed Limit\" : 100 } } }", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <dmn-evaluation-context> <dmn-context xsi:type=\"jaxbListWrapper\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Violation\"> <value xsi:type=\"jaxbListWrapper\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Type\"> <value xsi:type=\"xs:string\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">speed</value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Speed Limit\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">100</value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Actual Speed\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">135</value> </element> </value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Driver\"> <value xsi:type=\"jaxbListWrapper\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Points\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">15</value> </element> </value> </element> 
</dmn-context> </dmn-evaluation-context>", "{ \"type\": \"SUCCESS\", \"msg\": \"OK from container 'Traffic-Violation_1.0.0-SNAPSHOT'\", \"result\": { \"dmn-evaluation-result\": { \"messages\": [], \"model-namespace\": \"https://kiegroup.org/dmn/_7D8116DE-ADF5-4560-A116-FE1A2EAFFF48\", \"model-name\": \"Traffic Violation\", \"decision-name\": [], \"dmn-context\": { \"Violation\": { \"Type\": \"speed\", \"Speed Limit\": 100, \"Actual Speed\": 135 }, \"Should Driver be Suspended?\": \"Yes\", \"Driver\": { \"Points\": 15 }, \"Fine\": { \"Points\": 7, \"Amount\": 1000 } }, \"decision-results\": { \"_E1AF5AC2-E259-455C-96E4-596E30D3BC86\": { \"messages\": [], \"decision-id\": \"_E1AF5AC2-E259-455C-96E4-596E30D3BC86\", \"decision-name\": \"Should the Driver be Suspended?\", \"result\": \"Yes\", \"status\": \"SUCCEEDED\" }, \"_D7F02CE0-AF50-4505-AB80-C7D6DE257920\": { \"messages\": [], \"decision-id\": \"_D7F02CE0-AF50-4505-AB80-C7D6DE257920\", \"decision-name\": \"Fine\", \"result\": { \"Points\": 7, \"Amount\": 1000 }, \"status\": \"SUCCEEDED\" } } } } }", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <response type=\"SUCCESS\" msg=\"OK from container 'Traffic_1.0.0-SNAPSHOT'\"> <dmn-evaluation-result> <model-namespace>https://kiegroup.org/dmn/_A4BCA8B8-CF08-433F-93B2-A2598F19ECFF</model-namespace> <model-name>Traffic Violation</model-name> <dmn-context xsi:type=\"jaxbListWrapper\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Violation\"> <value xsi:type=\"jaxbListWrapper\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Type\"> <value xsi:type=\"xs:string\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">speed</value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Speed Limit\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">100</value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Actual Speed\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">135</value> </element> </value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Driver\"> <value xsi:type=\"jaxbListWrapper\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Points\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">15</value> </element> </value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Fine\"> <value xsi:type=\"jaxbListWrapper\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Points\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">7</value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Amount\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">1000</value> </element> </value> </element> <element xsi:type=\"jaxbStringObjectPair\" key=\"Should the driver be suspended?\"> <value xsi:type=\"xs:string\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">Yes</value> </element> </dmn-context> <messages/> <decisionResults> <entry> <key>_4055D956-1C47-479C-B3F4-BAEB61F1C929</key> <value> <decision-id>_4055D956-1C47-479C-B3F4-BAEB61F1C929</decision-id> <decision-name>Fine</decision-name> <result xsi:type=\"jaxbListWrapper\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <type>MAP</type> <element xsi:type=\"jaxbStringObjectPair\" key=\"Points\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">7</value> </element> <element 
xsi:type=\"jaxbStringObjectPair\" key=\"Amount\"> <value xsi:type=\"xs:decimal\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">1000</value> </element> </result> <messages/> <status>SUCCEEDED</status> </value> </entry> <entry> <key>_8A408366-D8E9-4626-ABF3-5F69AA01F880</key> <value> <decision-id>_8A408366-D8E9-4626-ABF3-5F69AA01F880</decision-id> <decision-name>Should the driver be suspended?</decision-name> <result xsi:type=\"xs:string\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">Yes</result> <messages/> <status>SUCCEEDED</status> </value> </entry> </decisionResults> </dmn-evaluation-result> </response>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/dmn-execution-con_getting-started-decision-services
Chapter 1. Schedule and quota APIs
Chapter 1. Schedule and quota APIs 1.1. AppliedClusterResourceQuota [quota.openshift.io/v1] Description AppliedClusterResourceQuota mirrors ClusterResourceQuota at a project scope, for projection into a project. It allows a project-admin to know which ClusterResourceQuotas are applied to their project and their associated usage. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to a synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. FlowSchema [flowcontrol.apiserver.k8s.io/v1] Description FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher". Type object 1.4. LimitRange [v1] Description LimitRange sets resource usage limits for each kind of resource in a Namespace. Type object 1.5. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines a mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object 1.6. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1] Description PriorityLevelConfiguration represents the configuration of a priority level. Type object 1.7. ResourceQuota [v1] Description ResourceQuota sets aggregate quota restrictions enforced per namespace. Type object
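For instance, a minimal sketch of a ResourceQuota manifest of the kind this API describes (the namespace and limit values are illustrative assumptions, not taken from this reference):

```
# Create an example namespace-scoped quota; adjust the namespace and limits as needed.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: example-project
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF
```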
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/schedule_and_quota_apis/schedule-and-quota-apis
4.11. Information Gathering Tools
4.11. Information Gathering Tools The utilities listed below are command-line tools that provide well-formatted information, such as access vector cache statistics or the number of classes, types, or Booleans. avcstat This command provides a short output of the access vector cache statistics since boot. You can watch the statistics in real time by specifying a time interval in seconds. This provides updated statistics since the initial output. The statistics file used is /sys/fs/selinux/avc/cache_stats , and you can specify a different cache file with the -f /path/to/file option. seinfo This utility is useful in describing the break-down of a policy, such as the number of classes, types, Booleans, allow rules, and others. seinfo is a command-line utility that uses a policy.conf file, a binary policy file, a modular list of policy packages, or a policy list file as input. You must have the setools-console package installed to use the seinfo utility. The output of seinfo will vary between binary and source files. For example, the policy source file uses the { } brackets to group multiple rule elements onto a single line. A similar effect happens with attributes, where a single attribute expands into one or many types. Because these are expanded and no longer relevant in the binary policy file, they have a return value of zero in the search results. However, the number of rules greatly increases as each formerly one line rule using brackets is now a number of individual lines. Some items are not present in the binary policy. For example, neverallow rules are only checked during policy compile, not during runtime, and initial Security Identifiers (SIDs) are not part of the binary policy since they are required prior to the policy being loaded by the kernel during boot. The seinfo utility can also list the number of types with the domain attribute, giving an estimate of the number of different confined processes: Not all domain types are confined. To look at the number of unconfined domains, use the unconfined_domain attribute: Permissive domains can be counted with the --permissive option: Remove the additional | wc -l command in the above commands to see the full lists. sesearch You can use the sesearch utility to search for a particular rule in the policy. It is possible to search either policy source files or the binary file. For example: The sesearch utility can provide the number of allow rules: And the number of dontaudit rules:
[ "~]# avcstat lookups hits misses allocs reclaims frees 47517410 47504630 12780 12780 12176 12275", "~]# seinfo Statistics for policy file: /sys/fs/selinux/policy Policy Version & Type: v.28 (binary, mls) Classes: 77 Permissions: 229 Sensitivities: 1 Categories: 1024 Types: 3001 Attributes: 244 Users: 9 Roles: 13 Booleans: 158 Cond. Expr.: 193 Allow: 262796 Neverallow: 0 Auditallow: 44 Dontaudit: 156710 Type_trans: 10760 Type_change: 38 Type_member: 44 Role allow: 20 Role_trans: 237 Range_trans: 2546 Constraints: 62 Validatetrans: 0 Initial SIDs: 27 Fs_use: 22 Genfscon: 82 Portcon: 373 Netifcon: 0 Nodecon: 0 Permissives: 22 Polcap: 2", "~]# seinfo -adomain -x | wc -l 550", "~]# seinfo -aunconfined_domain_type -x | wc -l 52", "~]# seinfo --permissive -x | wc -l 31", "~]USD sesearch --role_allow -t httpd_sys_content_t Found 20 role allow rules: allow system_r sysadm_r; allow sysadm_r system_r; allow sysadm_r staff_r; allow sysadm_r user_r; allow system_r git_shell_r; allow system_r guest_r; allow logadm_r system_r; allow system_r logadm_r; allow system_r nx_server_r; allow system_r staff_r; allow staff_r logadm_r; allow staff_r sysadm_r; allow staff_r unconfined_r; allow staff_r webadm_r; allow unconfined_r system_r; allow system_r unconfined_r; allow system_r user_r; allow webadm_r system_r; allow system_r webadm_r; allow system_r xguest_r;", "~]# sesearch --allow | wc -l 262798", "~]# sesearch --dontaudit | wc -l 156712" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-maintaining_selinux_labels-information_gathering_tools
Chapter 24. Configuring a custom PKI
Chapter 24. Configuring a custom PKI Some platform components, such as the web console, use Routes for communication and must trust other components' certificates to interact with them. If you are using a custom public key infrastructure (PKI), you must configure it so its privately signed CA certificates are recognized across the cluster. You can leverage the Proxy API to add cluster-wide trusted CA certificates. You must do this either during installation or at runtime. During installation , configure the cluster-wide proxy . You must define your privately signed CA certificates in the install-config.yaml file's additionalTrustBundle setting. The installation program generates a ConfigMap that is named user-ca-bundle that contains the additional CA certificates you defined. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these CA certificates with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle; this ConfigMap is referenced in the Proxy object's trustedCA field. At runtime , modify the default Proxy object to include your privately signed CA certificates (part of cluster's proxy enablement workflow). This involves creating a ConfigMap that contains the privately signed CA certificates that should be trusted by the cluster, and then modifying the proxy resource with the trustedCA referencing the privately signed certificates' ConfigMap. Note The installer configuration's additionalTrustBundle field and the proxy resource's trustedCA field are used to manage the cluster-wide trust bundle; additionalTrustBundle is used at install time and the proxy's trustedCA is used at runtime. The trustedCA field is a reference to a ConfigMap containing the custom certificate and key pair used by the cluster component. 24.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 24.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. 
Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 24.3. Certificate injection using Operators Once your custom CA certificate is added to the cluster via ConfigMap, the Cluster Network Operator merges the user-provided and system CA certificates into a single bundle and injects the merged bundle into the Operator requesting the trust bundle injection. Important After adding a config.openshift.io/inject-trusted-cabundle="true" label to the config map, existing data in it is deleted. The Cluster Network Operator takes ownership of a config map and only accepts ca-bundle as data. You must use a separate config map to store service-ca.crt by using the service.beta.openshift.io/inject-cabundle=true annotation or a similar configuration. Adding a config.openshift.io/inject-trusted-cabundle="true" label and service.beta.openshift.io/inject-cabundle=true annotation on the same config map can cause issues. Operators request this injection by creating an empty ConfigMap with the following label: config.openshift.io/inject-trusted-cabundle="true" An example of the empty ConfigMap: apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: "true" name: ca-inject 1 namespace: apache 1 Specifies the empty ConfigMap name. The Operator mounts this ConfigMap into the container's local trust store. 
Note Adding a trusted CA certificate is only needed if the certificate is not included in the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. Certificate injection is not limited to Operators. The Cluster Network Operator injects certificates across any namespace when an empty ConfigMap is created with the config.openshift.io/inject-trusted-cabundle=true label. The ConfigMap can reside in any namespace, but the ConfigMap must be mounted as a volume to each container within a pod that requires a custom CA. For example: apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: ... spec: ... containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2 1 ca-bundle.crt is required as the ConfigMap key. 2 tls-ca-bundle.pem is required as the ConfigMap path.
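As a rough sketch of the injection workflow described above, reusing the ca-inject ConfigMap name and apache namespace from the example (treat these commands as illustrative rather than prescriptive):
$ oc create configmap ca-inject -n apache
$ oc label configmap ca-inject -n apache config.openshift.io/inject-trusted-cabundle=true
$ oc get configmap ca-inject -n apache -o yaml
After the label is applied, the ca-bundle.crt key of the ConfigMap should contain the merged trust bundle that the pod example above mounts as tls-ca-bundle.pem.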
[ "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "config.openshift.io/inject-trusted-cabundle=\"true\"", "apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache", "apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/configuring-a-custom-pki
Chapter 25. Managing user groups using Ansible playbooks
Chapter 25. Managing user groups using Ansible playbooks This section introduces user group management using Ansible playbooks. A user group is a set of users with common privileges, password policies, and other characteristics. A user group in Identity Management (IdM) can include: IdM users other IdM user groups external users, which are users that exist outside of IdM The section includes the following topics: The different group types in IdM Direct and indirect group members Ensuring the presence of IdM groups and group members using Ansible playbooks Using Ansible to enable AD users to administer IdM Ensuring the presence of member managers in IDM user groups using Ansible playbooks Ensuring the absence of member managers in IDM user groups using Ansible playbooks 25.1. The different group types in IdM IdM supports the following types of groups: POSIX groups (the default) POSIX groups support Linux POSIX attributes for their members. Note that groups that interact with Active Directory cannot use POSIX attributes. POSIX attributes identify users as separate entities. Examples of POSIX attributes relevant to users include uidNumber , a user number (UID), and gidNumber , a group number (GID). Non-POSIX groups Non-POSIX groups do not support POSIX attributes. For example, these groups do not have a GID defined. All members of this type of group must belong to the IdM domain. External groups Use external groups to add group members that exist in an identity store outside of the IdM domain, such as: A local system An Active Directory domain A directory service External groups do not support POSIX attributes. For example, these groups do not have a GID defined. Table 25.1. User groups created by default Group name Default group members ipausers All IdM users admins Users with administrative privileges, including the default admin user editors This is a legacy group that no longer has any special privileges trust admins Users with privileges to manage the Active Directory trusts When you add a user to a user group, the user gains the privileges and policies associated with the group. For example, to grant administrative privileges to a user, add the user to the admins group. Warning Do not delete the admins group. As admins is a pre-defined group required by IdM, this operation causes problems with certain commands. In addition, IdM creates user private groups by default whenever a new user is created in IdM. For more information about private groups, see Adding users without a private group . 25.2. Direct and indirect group members User group attributes in IdM apply to both direct and indirect members: when group B is a member of group A, all users in group B are considered indirect members of group A. For example, in the following diagram: User 1 and User 2 are direct members of group A. User 3, User 4, and User 5 are indirect members of group A. Figure 25.1. Direct and Indirect Group Membership If you set a password policy for user group A, the policy also applies to all users in user group B. 25.3. Ensuring the presence of IdM groups and group members using Ansible playbooks The following procedure describes ensuring the presence of IdM groups and group members - both users and user groups - using an Ansible playbook. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users you want to reference in your Ansible playbook exist in IdM. For details on ensuring the presence of users using Ansible, see Managing user accounts using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group information: --- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: ops group: - sysops - appops Run the playbook: Verification You can verify if the ops group contains sysops and appops as direct members and idm_user as an indirect member by using the ipa group-show command: Log into ipaserver as administrator: Display information about ops : The appops and sysops groups - the latter including the idm_user user - exist in IdM. Additional resources See the /usr/share/doc/ansible-freeipa/README-group.md Markdown file. 25.4. Using Ansible to add multiple IdM groups in a single task You can use the ansible-freeipa ipagroup module to add, modify, and delete multiple Identity Management (IdM) user groups with a single Ansible task. For that, use the groups option of the ipagroup module. Using the groups option, you can also specify multiple group variables that only apply to a particular group. Define this group by the name variable, which is the only mandatory variable for the groups option. Complete this procedure to ensure the presence of the sysops and the appops groups in IdM in a single task. Define the sysops group as a nonposix group and the appops group as an external group. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You are using RHEL 9.3 and later. You have stored your ipaadmin_password in the secret.yml Ansible vault. Procedure Create your Ansible playbook file add-nonposix-and-external-groups.yml with the following content: Run the playbook: Additional resources The group module in ansible-freeipa upstream docs 25.5. Using Ansible to enable AD users to administer IdM Follow this procedure to use an Ansible playbook to ensure that a user ID override is present in an Identity Management (IdM) group. The user ID override is the override of an Active Directory (AD) user that you created in the Default Trust View after you established a trust with AD. As a result of running the playbook, an AD user, for example an AD administrator, is able to fully administer IdM without having two different accounts and passwords. 
Prerequisites You know the IdM admin password. You have installed a trust with AD . The user ID override of the AD user already exists in IdM. If it does not, create it with the ipa idoverrideuser-add 'default trust view' [email protected] command. The group to which you are adding the user ID override already exists in IdM . You are using the 4.8.7 version of IdM or later. To view the version of IdM you have installed on your server, enter ipa --version . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create an add-useridoverride-to-group.yml playbook with the following content: In the example: Secret123 is the IdM admin password. admins is the name of the IdM POSIX group to which you are adding the [email protected] ID override. Members of this group have full administrator privileges. [email protected] is the user ID override of an AD administrator. The user is stored in the AD domain with which a trust has been established. Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources ID overrides for AD users /usr/share/doc/ansible-freeipa/README-group.md /usr/share/doc/ansible-freeipa/playbooks/user Using ID views in Active Directory environments Enabling AD users to administer IdM 25.6. Ensuring the presence of member managers in IdM user groups using Ansible playbooks The following procedure describes ensuring the presence of IdM member managers - both users and user groups - using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or group you are adding as member managers and the name of the group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group member management information: Run the playbook: Verification You can verify if the group_a group contains test as a member manager and group_admins is a member manager of group_a by using the ipa group-show command: Log into ipaserver as administrator: Display information about managergroup1 : Additional resources See ipa host-add-member-manager --help . See the ipa man page on your system. 25.7. 
Ensuring the absence of member managers in IdM user groups using Ansible playbooks The following procedure describes ensuring the absence of IdM member managers - both users and user groups - using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the existing member manager user or group you are removing and the name of the group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group member management information: --- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent Run the playbook: Verification You can verify if the group_a group does not contain test as a member manager and group_admins as a member manager of group_a by using the ipa group-show command: Log into ipaserver as administrator: Display information about group_a: Additional resources See ipa host-remove-member-manager --help . See the ipa man page on your system.
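The playbooks above all ensure presence; for completeness, the reverse operation uses the same ipagroup module with state: absent. The sketch below is a minimal task that slots into the playbook structure shown earlier, and the group name ops is reused from the first example:
- name: Ensure group ops is absent
  ipagroup:
    ipaadmin_password: "{{ ipaadmin_password }}"
    name: ops
    state: absent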
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops group: - sysops - appops", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-group-members.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show ops Group name: ops GID: 1234 Member groups: sysops, appops Indirect Member users: idm_user", "--- - name: Playbook to add nonposix and external groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add nonposix group sysops and external group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" groups: - name: sysops nonposix: true - name: appops external: true", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/add-nonposix-and-external-groups.yml", "cd ~/ MyPlaybooks /", "--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]", "ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure user test is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test - name: Ensure group_admins is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_group: group_admins", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-user-groups.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-are-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-groups-using-ansible-playbooks_managing-users-groups-hosts
Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode
Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component in internal mode, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Note Deploying standalone Multicloud Object Gateway component is not supported in external mode deployments. 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 4.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. 
Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. 
Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
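Beyond the web console checks above, the pod states in the table can also be confirmed from the command line; this is an optional convenience check, not a required step of the procedure:
$ oc get pods -n openshift-storage
$ oc get pods -n openshift-storage | grep noobaa
All listed Multicloud Object Gateway pods should report a Running status.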
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploy-standalone-multicloud-object-gateway
7.33. cronie
7.33. cronie 7.33.1. RHBA-2015:0754 - cronie bug fix update Updated cronie packages that fix one bug are now available for Red Hat Enterprise Linux 6. The cronie packages contain the standard UNIX daemon crond that runs specified programs at scheduled times and related tools. They are a fork of the original vixie-cron cron implementation and have security and configuration enhancements like the ability to use pam and SELinux. Bug Fix BZ# 1204175 Due to a regression in parsing the /etc/anacrontab file caused by the cronie erratum released in the Fastrack channel, environment variables set in the /etc/anacrontab file were not recognized, and error messages were logged. These updated cronie packages fix the regression, and the variables are now set correctly for anacron jobs. Users of cronie are advised to upgrade to these updated packages, which fix this bug. 7.33.2. RHBA-2015:0704 - cronie bug fix and enhancement update Updated cronie packages that fix two bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The cronie packages contain the standard UNIX daemon crond that runs specified programs at scheduled times and the anacron tool that enables crond to run jobs also on machines that are not continuously switched on. Bug Fixes BZ# 1031383 Previously, the anacron process could terminate unexpectedly in cases when the anacrontab file contained incorrect configuration settings. To fix this bug, the configuration settings format check has been amended, and the anacron process no longer crashes. BZ# 1082232 Prior to this update, the crond pid file could be erroneously removed in case a crond sub-process terminated unexpectedly. With this update, handling of the crond sub-processes termination has been corrected, and the removal no longer occurs. Enhancements BZ# 1108384 The crond daemon now logs shutdowns. Its proper terminations are therefore distinguishable from abnormal ones. BZ# 1123984 The crond daemon now logs errors when jobs are skipped due to getpwnam() call failures. Users of cronie are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
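For context on the anacrontab parsing regression fixed in BZ#1204175, a typical /etc/anacrontab mixes environment variable assignments with job lines; the values below are the common RHEL 6 defaults and are shown only as an illustration:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
RANDOM_DELAY=45
START_HOURS_RANGE=3-22
1   5   cron.daily   nice run-parts /etc/cron.daily
It was environment assignments such as these that the affected packages failed to recognize before the update.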
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-cronie
Appendix A. Troubleshooting a Self-hosted Engine Deployment
Appendix A. Troubleshooting a Self-hosted Engine Deployment To confirm whether the self-hosted engine has already been deployed, run hosted-engine --check-deployed . An error will only be displayed if the self-hosted engine has not been deployed. A.1. Troubleshooting the Manager Virtual Machine Check the status of the Manager virtual machine by running hosted-engine --vm-status . Note Any changes made to the Manager virtual machine will take about 20 seconds before they are reflected in the status command output. Depending on the Engine status in the output, see the following suggestions to find or fix the issue. Engine status: "health": "good", "vm": "up" "detail": "up" If the Manager virtual machine is up and running as normal, you will see the following output: If the output is normal but you cannot connect to the Manager, check the network connection. Engine status: "reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "up" If the health is bad and the vm is up , the HA services will try to restart the Manager virtual machine to get the Manager back. If it does not succeed within a few minutes, enable the global maintenance mode from the command line so that the hosts are no longer managed by the HA services. Connect to the console. When prompted, enter the operating system's root password. For more console options, see https://access.redhat.com/solutions/2221461 . Ensure that the Manager virtual machine's operating system is running by logging in. Check the status of the ovirt-engine service: Check the following logs: /var/log/messages , /var/log/ovirt-engine/engine.log, and /var/log/ovirt-engine/server.log . After fixing the issue, reboot the Manager virtual machine manually from one of the self-hosted engine nodes: Note When the self-hosted engine nodes are in global maintenance mode, the Manager virtual machine must be rebooted manually. If you try to reboot the Manager virtual machine by sending a reboot command from the command line, the Manager virtual machine will remain powered off. This is by design. On the Manager virtual machine, verify that the ovirt-engine service is up and running: After ensuring the Manager virtual machine is up and running, close the console session and disable the maintenance mode to enable the HA services again: Engine status: "vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host" If you have more than one host in your environment, ensure that another host is not currently trying to restart the Manager virtual machine. Ensure that you are not in global maintenance mode. Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log . Try to reboot the Manager virtual machine manually from one of the self-hosted engine nodes: Engine status: "vm": "unknown", "health": "unknown", "detail": "unknown", "reason": "failed to getVmStats" This status means that ovirt-ha-agent failed to get the virtual machine's details from VDSM. Check the VDSM logs in /var/log/vdsm/vdsm.log . Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log . Engine status: The self-hosted engine's configuration has not been retrieved from shared storage If you receive the status The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable there is an issue with the ovirt-ha-agent service, or with the storage, or both. 
Check the status of ovirt-ha-agent on the host: If the ovirt-ha-agent is down, restart it: Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log. Check that you can ping the shared storage. Check whether the shared storage is mounted.
Additional Troubleshooting Commands Important Contact the Red Hat Support Team if you feel you need to run any of these commands to troubleshoot your self-hosted engine environment. hosted-engine --reinitialize-lockspace: This command is used when the sanlock lockspace is broken. Ensure that the global maintenance mode is enabled and that the Manager virtual machine is stopped before reinitializing the sanlock lockspaces. hosted-engine --clean-metadata: Remove the metadata for a host's agent from the global status database. This makes all other hosts forget about this host. Ensure that the target host is down and that the global maintenance mode is enabled. hosted-engine --check-liveliness: This command checks the liveliness page of the ovirt-engine service. You can also check by connecting to https://engine-fqdn/ovirt-engine/services/health/ in a web browser. hosted-engine --connect-storage: This command instructs VDSM to prepare all storage connections needed for the host and the Manager virtual machine. This is normally run in the back-end during the self-hosted engine deployment. Ensure that the global maintenance mode is enabled if you need to run this command to troubleshoot storage issues.
A.2. Cleaning Up a Failed Self-hosted Engine Deployment If a self-hosted engine deployment was interrupted, subsequent deployments will fail with an error message. The error will differ depending on the stage in which the deployment failed. If you receive an error message, you can run the cleanup script on the deployment host to clean up the failed deployment. However, it is best to reinstall your base operating system and start the deployment from the beginning. Note The cleanup script has the following limitations: A disruption in the network connection while the script is running might cause the script to fail to remove the management bridge or to recreate a working network configuration. The script is not designed to clean up any shared storage device used during a failed deployment. You need to clean the shared storage device before you can reuse it in a subsequent deployment. Procedure Run /usr/sbin/ovirt-hosted-engine-cleanup and select y to remove anything left over from the failed self-hosted engine deployment. Define whether to reinstall on the same shared storage device or select a different shared storage device. To deploy the installation on the same storage domain, clean up the storage domain by running the following command in the appropriate directory on the server for NFS, Gluster, PosixFS, or local storage domains: For iSCSI or Fibre Channel Protocol (FCP) storage, see https://access.redhat.com/solutions/2121581 for information on how to clean up the storage. Alternatively, select a different shared storage device. Redeploy the self-hosted engine.
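Taken together, the cleanup-and-redeploy path described in A.2 often boils down to a short sequence like the hedged sketch below; the storage path is a hypothetical placeholder, and the rm -rf step applies only to NFS, Gluster, PosixFS, or local storage domains:
# /usr/sbin/ovirt-hosted-engine-cleanup
# cd /path/to/storage_domain && rm -rf ./*    # hypothetical mount point; file-based domains only
# hosted-engine --deploy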
[ "--== Host 1 status ==-- Status up-to-date : True Hostname : hypervisor.example.com Host ID : 1 Engine status : {\"health\": \"good\", \"vm\": \"up\", \"detail\": \"up\"} Score : 3400 stopped : False Local maintenance : False crc32 : 99e57eba Host timestamp : 248542", "hosted-engine --set-maintenance --mode=global", "hosted-engine --console", "systemctl status -l ovirt-engine journalctl -u ovirt-engine", "hosted-engine --vm-shutdown hosted-engine --vm-start", "systemctl status ovirt-engine.service", "hosted-engine --set-maintenance --mode=none", "hosted-engine --vm-shutdown hosted-engine --vm-start", "systemctl status -l ovirt-ha-agent journalctl -u ovirt-ha-agent", "systemctl start ovirt-ha-agent", "/usr/sbin/ovirt-hosted-engine-cleanup This will de-configure the host to run ovirt-hosted-engine-setup from scratch. Caution, this operation should be used with care. Are you sure you want to proceed? [y/n]", "rm -rf storage_location /*" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/troubleshooting_she_she_cli_deploy
Chapter 1. Introduction
Chapter 1. Introduction SystemTap provides free software (GPL) infrastructure to simplify the gathering of information about the running Linux system. This assists diagnosis of a performance or functional problem. SystemTap eliminates the need for the developer to go through the tedious and disruptive instrument, recompile, install, and reboot sequence that may be otherwise required to collect data. SystemTap provides a simple command line interface and scripting language for writing instrumentation for a live, running kernel. This instrumentation uses probe points and functions provided in the tapset library. Simply put, tapsets are scripts that encapsulate knowledge about a kernel subsystem into pre-written probes and functions that can be used by other scripts. Tapsets are analogous to libraries for C programs. They hide the underlying details of a kernel area while exposing the key information needed to manage and monitor that aspect of the kernel. They are typically developed by kernel subject-matter experts. A tapset exposes the high-level data and state transitions of a subsystem. For the most part, good tapset developers assume that SystemTap users know little to nothing about the kernel subsystem's low-level details. As such, tapset developers write tapsets that help ordinary SystemTap users write meaningful and useful SystemTap scripts. 1.1. Documentation Goals This guide aims to document SystemTap's most useful and common tapset entries; it also contains guidelines on proper tapset development and documentation. The tapset definitions contained in this guide are extracted automatically from properly-formatted comments in the code of each tapset file. As such, any revisions to the definitions in this guide should be applied directly to their respective tapset file.
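As a small taste of the scripting language mentioned above, the canonical one-line probe below prints a message when the session starts and then exits; it is a minimal sketch rather than an example drawn from the tapset reference itself:
$ stap -e 'probe begin { printf("Hello from SystemTap\n"); exit() }'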
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/introduction
4.12. Prioritizing and Disabling SELinux Policy Modules
4.12. Prioritizing and Disabling SELinux Policy Modules The SELinux module storage in /etc/selinux/ allows using a priority on SELinux modules. Enter the following command as root to show two module directories with a different priority: While the default priority used by the semodule utility is 400, the priority used in the selinux-policy packages is 100, so you can find most of the SELinux modules installed with the priority 100. You can override an existing module with a modified module of the same name by using a higher priority. When multiple modules with the same name have different priorities, only the module with the highest priority is used when the policy is built. Example 4.1. Using SELinux Policy Modules Priority Prepare a new module with a modified file context. Install the module with the semodule -i command and set its priority to 400. The following example uses sandbox.pp. To return to the default module, enter the semodule -r command as root: Disabling a System Policy Module To disable a system policy module, enter the following command as root: Warning If you remove a system policy module using the semodule -r command, it is deleted from your system's storage and you cannot load it again. To avoid unnecessary reinstallations of the selinux-policy-targeted package for restoring all system policy modules, use the semodule -d command instead.
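To complement the commands above: a module disabled with semodule -d remains in the module store and can be re-enabled later. The sketch below assumes the same sandbox module used in the example:
~]# semodule -d sandbox
~]# semodule -e sandbox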
[ "~]# ls /etc/selinux/targeted/active/modules 100 400 disabled", "~]# semodule -X 400 -i sandbox.pp ~]# semodule --list-modules=full | grep sandbox 400 sandbox pp 100 sandbox pp", "~]# semodule -X 400 -r sandbox libsemanage.semanage_direct_remove_key: sandbox module at priority 100 is now active.", "semodule -d MODULE_NAME" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/security-enhanced_linux-prioritizing_selinux_modules
Chapter 146. AlterOffsets schema reference
Chapter 146. AlterOffsets schema reference Used in: KafkaConnectorSpec , KafkaMirrorMaker2ConnectorSpec Property Property type Description fromConfigMap LocalObjectReference Reference to the ConfigMap where the new offsets are stored.
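A hedged sketch of how this type typically nests inside a KafkaConnector resource; the resource names are placeholders, and the other connector fields normally required in spec are omitted for brevity:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  alterOffsets:
    fromConfigMap:
      name: my-connector-offsets   # ConfigMap holding the new offsets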
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-AlterOffsets-reference
Chapter 4. Tools and components
Chapter 4. Tools and components Learn more about the Red Hat Ansible Automation Platform tools and components you will use in creating automation execution environments. 4.1. About Ansible Builder Ansible Builder is a command line tool that automates the process of building automation execution environments by using the metadata defined in various Ansible Collections or by the user. Before Ansible Builder was developed, Red Hat Ansible Automation Platform users could run into dependency issues and errors when creating custom virtual environments or containers that included all of the required dependencies. Now, with Ansible Builder, you can easily create a customizable automation execution environment definition file that specifies the content you want included in your automation execution environments, such as collections, third-party Python requirements, and system-level packages. This allows you to fulfill all of the necessary requirements and dependencies to get jobs running. Note Red Hat currently does not support users who choose to provide their own container images when building automation execution environments. 4.2. Uses for Automation content navigator Automation content navigator is a command line, content-creator-focused tool with a text-based user interface. You can use Automation content navigator to: Launch and watch jobs and playbooks. Share stored, completed playbook and job run artifacts in JSON format. Browse and introspect automation execution environments. Browse your file-based inventory. Render Ansible module documentation and extract examples you can use in your playbooks. View detailed command output on the user interface. 4.3. About automation hub Automation hub provides a place for Red Hat subscribers to quickly find and use content that is supported by Red Hat and our technology partners to deliver additional reassurance for the most demanding environments. At a high level, automation hub provides an overview of all partners participating and providing certified, supported content. From a central view, users can dive deeper into each partner and check out the collections. Additionally, a searchable overview of all available collections is provided. 4.4. About the Ansible command line interface Using Ansible on the command line is a useful way to run tasks that you do not repeat very often. The recommended way to handle repeated tasks is to write a playbook. An ad hoc command for Ansible on the command line follows this structure: $ ansible [pattern] -m [module] -a "[module options]" 4.5. Additional resources For more information on how to use Ansible as a command line tool, refer to Working with command line tools in the Ansible User Guide. To upload content to automation hub, see Uploading content to automation hub in the Ansible Automation Platform product documentation.
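Following the ad hoc structure above, a couple of hedged examples; the host patterns and the fact filter are illustrative choices, not values from this guide:
$ ansible all -m ansible.builtin.ping
$ ansible webservers -m ansible.builtin.setup -a "filter=ansible_distribution*"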
[ "ansible [pattern] -m [module] -a \"[module options]\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_creator_guide/tools
Chapter 2. Upgrading to Red Hat Ansible Automation Platform 2.5
Chapter 2. Upgrading to Red Hat Ansible Automation Platform 2.5 To upgrade your Red Hat Ansible Automation Platform, start by reviewing Planning your installation to ensure a successful upgrade. You can then download the desired version of the Ansible Automation Platform installer, configure the inventory file in the installation bundle to reflect your environment, and then run the installer. 2.1. Prerequisites Upgrades to Ansible Automation Platform 2.5 include the platform gateway . Ensure you review the 2.5 Network ports and protocols for architectural changes and Tested deployment models for information on opinionated deployment models. You have reviewed the centralized Redis instance offered by Ansible Automation Platform for both standalone and clustered topologies. Prior to upgrading your Red Hat Ansible Automation Platform, ensure you have reviewed Planning your installation for a successful upgrade. You can then download the desired version of the Ansible Automation Platform installer, configure the inventory file in the installation bundle to reflect your environment, and then run the installer. 2.2. Ansible Automation Platform upgrade planning Before you begin the upgrade process, review the following considerations to plan and prepare your Ansible Automation Platform deployment: See System requirements in the Planning your installation guide to ensure you have the topologies that fit your use case. Note 2.4 to 2.5 upgrades now include Platform gateway . Ensure you review the 2.5 Network ports and protocols for architectural changes. Important When upgrading from Ansible Automation Platform 2.4 to 2.5, the API endpoints for the automation controller, automation hub, and Event-Driven Ansible controller are all available for use. These APIs are being deprecated and will be disabled in an upcoming release. This grace period is to allow for migration to the new APIs put in place with the platform gateway. Verify that you have a valid subscription before upgrading from a version of Ansible Automation Platform. Existing subscriptions are carried over during the upgrade process. Ensure you have a backup of an Ansible Automation Platform 2.4 environment before upgrading in case any issues occur. See Backup and restore and Backup and recovery for operator environments for the specific topology of the environment. Ensure you capture your inventory or instance group details before upgrading. Upgrades of Event-Driven Ansible version 2.4 to 2.5 are not supported. Database migrations between Event-Driven Ansible 2.4 and Event-Driven Ansible 2.5 are not compatible. For more information, see automation controller and automation hub 2.4 and Event-Driven Ansible 2.5 with unified UI upgrades . If you are currently running Event-Driven Ansible controller 2.5, it is recommended that you disable all Event-Driven Ansible activations before upgrading to ensure that only new activations run after the upgrade process is complete. Automation controller OAuth applications on the platform UI are not supported for 2.4 to 2.5 migration. See this Knowledgebase article for more information. To learn how to recreate your OAuth applications, see Applications in the Access management and authentication guide. During the upgrade process, user accounts from the individual services are migrated. If there are accounts from multiple services, they must be linked to access the unified platform. See Account linking for details. 
Ansible Automation Platform 2.5 offers a centralized Redis instance in both standalone and clustered topologies. For information on how to configure Redis, see Configuring Redis in the RPM installation guide. When upgrading from Ansible Automation Platform 2.4 to 2.5, connections to the platform gateway URL might fail on the platform gateway UI if you are using the automation controller behind a load balancer. The following error message is displayed: Error connecting to Controller API To resolve this issue, for each controller host, add the platform gateway URL as a trusted source in the CSRF_TRUSTED_ORIGIN setting in the settings.py file for each controller host. You must then restart each controller host so that the URL changes are implemented. For more information, see Upgrading in Troubleshooting Ansible Automation Platform . Additional resources Attaching a subscription Backup and restore Clustering Planning your installation 2.3. Choosing and obtaining a Red Hat Ansible Automation Platform installer Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment internet connectivity. Review the following scenarios and decide on which Red Hat Ansible Automation Platform installer meets your needs. Note A valid Red Hat customer account is required to access Red Hat Ansible Automation Platform installer downloads on the Red Hat Customer Portal. 2.3.1. Installing with internet access Choose the Red Hat Ansible Automation Platform installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your Ansible Automation Platform installer. Tarball install Procedure Navigate to the Red Hat Ansible Automation Platform download page. In the Product software tab, click Download Now for the Ansible Automation Platform <latest-version> Setup . Extract the files: USD tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz RPM install Procedure Install the Ansible Automation Platform Installer Package. v.2.5 for RHEL 8 for x86_64: USD sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms ansible-automation-platform-installer v.2.5 for RHEL 9 for x86-64: USD sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms ansible-automation-platform-installer dnf install enables the repo as the repo is disabled by default. When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory. 2.3.2. Installing without internet access Use the Red Hat Ansible Automation Platform Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive. Procedure Navigate to the Red Hat Ansible Automation Platform download page. In the Product software tab, click Download Now for the Ansible Automation Platform <latest-version> Setup Bundle . Extract the files: USD tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz 2.4. Setting up the inventory file Before upgrading your Red Hat Ansible Automation Platform installation, edit the inventory file so that it matches your desired configuration. 
You can keep the same parameters from your existing Ansible Automation Platform deployment or you can modify the parameters to match any changes to your environment. You can find sample inventory files in the Test topologies GitHub repository, or in our Tested deployment models guide. Procedure Navigate to the installation program directory. Bundled installer USD cd ansible-automation-platform-setup-bundle-2.5-4-x86_64 Online installer USD cd ansible-automation-platform-setup-2.5-4 Open the inventory file for editing. Modify the inventory file to provision new nodes, deprovision nodes or groups, and import or generate automation hub API tokens. You can use the same inventory file from an existing Ansible Automation Platform installation if there are no changes to the environment. Note Provide a reachable IP address or fully qualified domain name (FQDN) for all hosts to ensure that users can synchronize and install content from Ansible automation hub from a different node. Do not use localhost . If localhost is used, the upgrade will be stopped as part of preflight checks. Provisioning new nodes in a cluster Add new nodes alongside existing nodes in the inventory file as follows: [automationcontroller] clusternode1.example.com clusternode2.example.com clusternode3.example.com [all:vars] admin_password='password' pg_host='<host_name>' pg_database='<database_name>' pg_username='<your_username>' pg_password='<your_password>' 2.5. Backing up your Ansible Automation Platform instance Back up an existing Ansible Automation Platform instance by running the ./setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment. Use the compression flags use_archive_compression and use_db_compression to compress the backup artifacts before they are sent to the host running the backup operation. Procedure Navigate to your Ansible Automation Platform installation directory. Run the ./setup.sh script following the example below: USD ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_archive_compression=true' 'use_db_compression=true' @credentials.yml -b Where: backup_dir : Specifies a directory to save your backup to. use_archive_compression=true and use_db_compression=true : Compresses the backup artifacts before they are sent to the host running the backup operation. You can use the following variables to customize the compression: For global control of compression for filesystem related backup files: use_archive_compression=true For component-level control of compression for filesystem related backup files: <componentName>_use_archive_compression=true For example: automationgateway_use_archive_compression=true automationcontroller_use_archive_compression=true automationhub_use_archive_compression=true automationedacontroller_use_archive_compression=true For global control of compression for database related backup files: use_db_compression=true For component-level control of compression for database related backup files: <componentName>_use_db_compression=true For example: automationgateway_use_db_compression=true automationcontroller_use_db_compression=true automationhub_use_db_compression=true automationedacontroller_use_db_compression=true After a successful backup, a backup file is created at /ansible/mybackup/automation-platform-backup-<date/time>.tar.gz . 2.6. Running the Red Hat Ansible Automation Platform installer setup script You can run the setup script once you have finished updating the inventory file.
Procedure Run the setup.sh script: USD ./setup.sh The installation will begin. Next steps If you are upgrading from Ansible Automation Platform 2.4 to 2.5, proceed to Linking your account to link your existing service level accounts to a single unified platform account. 2.7. Automation controller and automation hub 2.4 and Event-Driven Ansible 2.5 with unified UI upgrades Ansible Automation Platform 2.5 supports upgrades from Ansible Automation Platform 2.4 environments for all components, with the exception of Event-Driven Ansible. You can also configure a mixed environment with Event-Driven Ansible from 2.5 connected to a legacy 2.4 cluster. Combining install methods (OCP, RPM, Containerized) within such a topology is not supported by Ansible Automation Platform. Note If you are running the 2.4 version of Event-Driven Ansible in production, before you upgrade, contact Red Hat support or your account representative for more information on how to move to Ansible Automation Platform 2.5. Supported topologies described in this document assume that: 2.4 services will only include automation controller and automation hub. 2.5 parts will always include Event-Driven Ansible and the unified UI (platform gateway). Combining install methods for these topologies is not supported. 2.7.1. Upgrade considerations You must maintain two separate inventory files: one for the 2.4 services and one for the 2.5 services. You must maintain two separate "installations" within this scenario: one for the 2.4 services and one for the 2.5 services. You must "upgrade" the two separate "installations" separately. To upgrade to a consistent component version topology, consider the following: You must manually combine the inventory file configuration from the 2.4 inventory into the 2.5 inventory and run upgrade on ONLY the 2.5 inventory file. You must be using an external database for both the 2.4 inventory as well as the 2.5 inventory. Customers using "managed database" instances for either 2.4 or 2.5 inventory must migrate to an external database first, before upgrading. Prerequisites An inventory from 2.4 for automation controller and automation hub and a 2.5 inventory for unified UI (platform gateway) and Event-Driven Ansible. You must run upgrades on 2.4 services (using the inventory file to specify only automation controller and automation hub VMs) to get them to the initial version of Ansible Automation Platform 2.5 first. When all the services are at the same version, run an upgrade (using a complete inventory file) on all the services to go to the latest version of Ansible Automation Platform 2.5. Important DO NOT upgrade Event-Driven Ansible and the unified UI (platform gateway) to the latest version of Ansible Automation Platform 2.5 without first upgrading the individual services (automation controller and automation hub) to the initial version of Ansible Automation Platform 2.5. 2.7.1.1. Migration path for 2.4 instances with managed databases Procedure Standalone node managed database Convert the database node to an external one, removing it from the inventory. The PostgreSQL node will continue working and will not lose the Ansible Automation Platform-provided setup, but you are responsible for managing its configuration afterward. Collocated managed database Back up, then restore with a standalone managed database node instead of a collocated one. Unmanaged standalone database 2.7.1.2.
Migration path for 2.4 services with 2.5 services If you installed Ansible Automation Platform 2.5 to use Event-Driven Ansible in a supported scenario, you can upgrade your Ansible Automation Platform 2.4 automation controller and automation hub to Ansible Automation Platform 2.5 by following these steps: Merge 2.4 inventory data into the 2.5 inventory. The example below shows the inventory file for automation controller and automation hub for 2.4 and the inventory file for Event-Driven Ansible and the unified UI (platform gateway) for 2.5, respectively, as the starting point, and what the merged inventory looks like. Inventory files from 2.4 [automationcontroller] controller-1 controller-2 [automationhub] hub-1 hub-2 [all:vars] # Here we have the admin passwd, db credentials, etc. Inventory files from 2.5 [edacontroller] eda-1 eda-2 [gateway] gw-1 gw-2 [all:vars] # Here we have admin passwd, db credentials etc. Merged Inventory [automationcontroller] controller-1 controller-2 [automationhub] hub-1 hub-2 [edacontroller] eda-1 eda-2 [gateway] gw-1 gw-2 [all:vars] # Here we have admin passwd, db credentials etc from both inventories above Run setup.sh The installer upgrades automation controller and automation hub from 2.4 to Ansible Automation Platform 2.5.latest, Event-Driven Ansible and the unified UI (platform gateway) from the fresh install of 2.5 to the latest version of 2.5, and connects automation controller and automation hub properly with the unified UI (platform gateway) node to initialize the unified experience. Verification Verify that everything has upgraded to 2.5 and is working properly in one of two ways: by performing an SSH to automation controller and Event-Driven Ansible, or, in the unified UI, by navigating to Help > About to verify that the RPM versions are at 2.5.
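As an additional verification sketch (not part of the original procedure), the installed package versions can also be checked over SSH once setup.sh completes. The host names are taken from the merged inventory example above; the package name pattern is an assumption and may differ on your systems.
# Loop over the upgraded nodes and list installed Ansible Automation Platform packages.
# The grep pattern is an assumption; adjust it to the package names present on your hosts.
for host in controller-1 controller-2 hub-1 hub-2 eda-1 eda-2 gw-1 gw-2; do
  echo "== ${host} =="
  ssh "${host}" "rpm -qa | grep -iE 'automation-(controller|gateway|eda)|automation.?hub' || true"
done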
[ "tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz", "sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms ansible-automation-platform-installer", "sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms ansible-automation-platform-installer", "tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz", "cd ansible-automation-platform-setup-bundle-2.5-4-x86_64", "cd ansible-automation-platform-setup-2.5-4", "[automationcontroller] clusternode1.example.com clusternode2.example.com clusternode3.example.com [all:vars] admin_password='password' pg_host='<host_name>' pg_database='<database_name>' pg_username='<your_username>' pg_password='<your_password>'", "./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_archive_compression=true' 'use_db_compression=true' @credentials.yml -b", "./setup.sh", "[automationcontroller] controller-1 controller-2 [automationhub] hub-1 hub-2 Here we have the admin passwd, db credentials, etc.", "[edacontroller] eda-1 eda-2 [gateway] gw-1 gw-2 Here we have admin passwd, db credentials etc.", "[automationcontroller] controller-1 controller-2 [automationhub] hub-1 hub-2 [edacontroller] eda-1 eda-2 [gateway] gw-1 gw-2 Here we have admin passwd, db credentials etc from both inventories above" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_upgrade_and_migration/aap-upgrading-platform
Chapter 1. Introduction
Chapter 1. Introduction 1.1. About the User Interface Guide This guide is for architects, engineers, consultants, and others who want to use the Migration Toolkit for Applications (MTA) user interface to accelerate large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. This solution provides insight throughout the adoption process, at both the portfolio and application levels: inventory, assess, analyze, and manage applications for faster migration to OpenShift via the user interface. 1.2. About the Migration Toolkit for Applications What is the Migration Toolkit for Applications? Migration Toolkit for Applications (MTA) accelerates large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. This solution provides insight throughout the adoption process, at both the portfolio and application levels: inventory, assess, analyze, and manage applications for faster migration to OpenShift via the user interface. In MTA 7.1 and later, when you add an application to the Application Inventory , MTA automatically creates and executes language and technology discovery tasks. Language discovery identifies the programming languages used in the application. Technology discovery identifies technologies, such as Enterprise Java Beans (EJB), Spring, etc. Then, each task assigns appropriate tags to the application, reducing the time and effort you spend manually tagging the application. MTA uses an extensive default questionnaire as the basis for assessing your applications, or you can create your own custom questionnaire, enabling you to estimate the difficulty, time, and other resources needed to prepare an application for containerization. You can use the results of an assessment as the basis for discussions between stakeholders to determine which applications are good candidates for containerization, which require significant work first, and which are not suitable for containerization. MTA analyzes applications by applying one or more rulesets to each application considered to determine which specific lines of that application must be modified before it can be modernized. MTA examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. How does the Migration Toolkit for Applications simplify migration? The Migration Toolkit for Applications looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTA generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved. 1.3. About the user interface The user interface for the Migration Toolkit for Applications allows a team of users to assess and analyze applications for risks and suitability for migration to hybrid cloud environments on Red Hat OpenShift. Use the user interface to assess and analyze your applications to get insights about potential pitfalls in the adoption process, at both the portfolio and application levels as you inventory, assess, analyze, and manage applications for faster migration to OpenShift.
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/user_interface_guide/mta-ui-guide-introduction
3.17. Starting a Virtual Machine with Cloud-Init
3.17. Starting a Virtual Machine with Cloud-Init This Ruby example starts a virtual machine using the Cloud-Init tool to set the root password and network configuration. # Find the virtual machine: vms_service = connection.system_service.vms_service vm = vms_service.list(search: 'name=myvm')[0] # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Create a cloud-init script to execute in the # deployed virtual machine. The script must be correctly # formatted and indented because it uses YAML. my_script = " write_files: - content: | Hello, world! path: /tmp/greeting.txt permissions: '0644' " # Start the virtual machine, enabling cloud-init and providing the # password for the `root` user and the network configuration: vm_service.start( use_cloud_init: true, vm: { initialization: { user_name: 'root', root_password: 'redhat123', host_name: 'myvm.example.com', nic_configurations: [ { name: 'eth0', on_boot: true, boot_protocol: OvirtSDK4::BootProtocol::STATIC, ip: { version: OvirtSDK4::IpVersion::V4, address: '192.168.0.100', netmask: '255.255.255.0', gateway: '192.168.0.1' } } ], dns_servers: '192.168.0.1 192.168.0.2 192.168.0.3', dns_search: 'example.com', custom_script: my_script } } ) For more information, see VmService:start .
[ "Find the virtual machine: vms_service = connection.system_service.vms_service vm = vms_service.list(search: 'name=myvm')[0] Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Create a cloud-init script to execute in the deployed virtual machine. The script must be correctly formatted and indented because it uses YAML. my_script = \" write_files: - content: | Hello, world! path: /tmp/greeting.txt permissions: '0644' \" Start the virtual machine, enabling cloud-init and providing the password for the `root` user and the network configuration: vm_service.start( use_cloud_init: true, vm: { initialization: { user_name: 'root', root_password: 'redhat123', host_name: 'myvm.example.com', nic_configurations: [ { name: 'eth0', on_boot: true, boot_protocol: OvirtSDK4::BootProtocol::STATIC, ip: { version: OvirtSDK4::IpVersion::V4, address: '192.168.0.100', netmask: '255.255.255.0', gateway: '192.168.0.1' } } ], dns_servers: '192.168.0.1 192.168.0.2 192.168.0.3', dns_search: 'example.com', custom_script: my_script } } )" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/starting_a_virtual_machine_with_cloud-init
1.3. What Is GNOME Classic?
1.3. What Is GNOME Classic? GNOME Classic is a GNOME Shell feature and mode for users who prefer a more traditional desktop experience. While GNOME Classic is based on GNOME 3 technologies, it provides a number of changes to the user interface: The Applications and Places menus. The Applications menu is displayed at the top left of the screen. It gives the user access to applications organized into categories. The user can also open the Activities Overview from that menu. The Places menu is displayed next to the Applications menu on the top bar . It gives the user quick access to important folders, for example Downloads or Pictures . The taskbar. The taskbar is displayed at the bottom of the screen, and features: a window list, a notification icon displayed next to the window list, a short identifier for the current workspace and total number of available workspaces displayed next to the notification icon. Four available workspaces. In GNOME Classic, the number of workspaces available to the user is by default set to 4. Minimize and maximize buttons. Window titlebars in GNOME Classic feature the minimize and maximize buttons that let the user quickly minimize the windows to the window list, or maximize them to take up all of the space on the desktop. A traditional Super + Tab window switcher. In GNOME Classic, windows in the Super + Tab window switcher are not grouped by application. The system menu. The system menu is in the top right corner. You can update some of your settings, find information about your Wi-Fi connection, switch user, log out, and turn off your computer from this menu. Figure 1.2. GNOME Classic with the Calculator application and the Accessories submenu of the Applications menu 1.3.1. The GNOME Classic Extensions GNOME Classic is distributed as a set of GNOME Shell extensions . The GNOME Classic extensions are installed as dependencies of the gnome-classic-session package, which provides components required to run a GNOME Classic session. Because the GNOME Classic extensions are enabled by default on Red Hat Enterprise Linux 7, GNOME Classic is the default Red Hat Enterprise Linux 7 desktop user interface. The GNOME Classic extensions are: AlternateTab ( alternate-tab@gnome-shell-extensions.gcampax.github.com ), Applications Menu ( apps-menu@gnome-shell-extensions.gcampax.github.com ), Launch new instance ( launch-new-instance@gnome-shell-extensions.gcampax.github.com ), Places Status Indicator ( places-menu@gnome-shell-extensions.gcampax.github.com ), Window List ( window-list@gnome-shell-extensions.gcampax.github.com ). 1.3.2. Switching from GNOME Classic to GNOME and Back The user can switch from GNOME Classic to GNOME by logging out and clicking on the cogwheel next to Sign In . The cogwheel opens a drop-down menu, which contains GNOME Classic. To switch from GNOME Classic to GNOME from within the user session, run the following command: To switch back to GNOME Classic from within the same user session, run the following command: 1.3.3. Disabling GNOME Classic as the Default Session For all newly created users on Red Hat Enterprise Linux 7, GNOME Classic is set as the default session. To override that setting for a specific user, you need to modify the user's account service in the /var/lib/AccountsService/users/ username file. See Section 14.3.2, "Configuring a User Default Session" for details on how to do that. Getting More Information Users can find more information on using GNOME 3, GNOME Shell, or GNOME Classic in GNOME Help, which is provided by the gnome-user-docs package. To access GNOME Help, press the Super key to enter the Activities Overview , type help , and then press Enter .
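Relating to the default-session override described in Section 1.3.3 above, a minimal sketch of the per-user account service entry follows. The user name jdoe is a placeholder, and the [User] section and XSession key are assumptions about the file format; see Section 14.3.2 for the authoritative details.
# Hypothetical example, run as root: make standard GNOME (instead of GNOME Classic) the default session for jdoe.
cat > /var/lib/AccountsService/users/jdoe << 'EOF'
[User]
XSession=gnome
EOF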
[ "gnome-shell --mode=user -r &", "gnome-shell --mode=classic -r &" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/what-is-gnome-classic
20.15. Displaying the IP Address and Port Number for the VNC Display
20.15. Displaying the IP Address and Port Number for the VNC Display The virsh vncdisplay command returns the IP address and port number of the VNC display for the specified guest virtual machine. If the information is unavailable for the guest, the command returns exit code 1. Note that for this command to work, VNC has to be specified as a graphics type in the devices element of the guest's XML file. For further information, see Section 23.17.11, "Graphical Framebuffers" . Example 20.37. How to display the IP address and port number for VNC The following example displays the port number for the VNC display of the guest1 virtual machine:
[ "virsh vncdisplay guest1 127.0.0.1:0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-editing_a_guest_virtual_machines_configuration_file-displaying_the_ip_address_and_port_number_for_the_vnc_display
11.2. Top Three Causes of Problems
11.2. Top Three Causes of Problems The following sections describe the top three causes of problems: labeling problems, configuring Booleans and ports for services, and evolving SELinux rules. 11.2.1. Labeling Problems On systems running SELinux, all processes and files are labeled with a label that contains security-relevant information. This information is called the SELinux context. If these labels are wrong, access may be denied. An incorrectly labeled application may cause an incorrect label to be assigned to its process. This may cause SELinux to deny access, and the process may create mislabeled files. A common cause of labeling problems is when a non-standard directory is used for a service. For example, instead of using /var/www/html/ for a website, an administrator wants to use /srv/myweb/ . On Red Hat Enterprise Linux, the /srv directory is labeled with the var_t type. Files and directories created in /srv inherit this type. Also, newly-created objects in top-level directories (such as /myserver ) may be labeled with the default_t type. SELinux prevents the Apache HTTP Server ( httpd ) from accessing both of these types. To allow access, SELinux must know that the files in /srv/myweb/ are to be accessible to httpd : This semanage command adds the context for the /srv/myweb/ directory (and all files and directories under it) to the SELinux file-context configuration [8] . The semanage utility does not change the context. As root, run the restorecon utility to apply the changes: See Section 4.7.2, "Persistent Changes: semanage fcontext" for further information about adding contexts to the file-context configuration. 11.2.1.1. What is the Correct Context? The matchpathcon utility checks the context of a file path and compares it to the default label for that path. The following example demonstrates using matchpathcon on a directory that contains incorrectly labeled files: In this example, the index.html and page1.html files are labeled with the user_home_t type. This type is used for files in user home directories. Using the mv command to move files from your home directory may result in files being labeled with the user_home_t type. This type should not exist outside of home directories. Use the restorecon utility to restore such files to their correct type: To restore the context for all files under a directory, use the -R option: See Section 4.10.3, "Checking the Default SELinux Context" for a more detailed example of matchpathcon . 11.2.2. How are Confined Services Running? Services can be run in a variety of ways. To cater for this, you need to specify how you run your services. This can be achieved through Booleans that allow parts of SELinux policy to be changed at runtime, without any knowledge of SELinux policy writing. This allows changes, such as allowing services access to NFS volumes, without reloading or recompiling SELinux policy. Also, running services on non-default port numbers requires policy configuration to be updated using the semanage command. For example, to allow the Apache HTTP Server to communicate with MariaDB, enable the httpd_can_network_connect_db Boolean: If access is denied for a particular service, use the getsebool and grep utilities to see if any Booleans are available to allow access. For example, use the getsebool -a | grep ftp command to search for FTP related Booleans: For a list of Booleans and whether they are on or off, run the getsebool -a command. 
For a list of Booleans, an explanation of what each one is, and whether they are on or off, run the semanage boolean -l command as root. See Section 4.6, "Booleans" for information about listing and configuring Booleans. Port Numbers Depending on policy configuration, services may only be allowed to run on certain port numbers. Attempting to change the port a service runs on without changing policy may result in the service failing to start. For example, run the semanage port -l | grep http command as root to list http related ports: The http_port_t port type defines the ports Apache HTTP Server can listen on, which in this case, are TCP ports 80, 443, 488, 8008, 8009, and 8443. If an administrator configures httpd.conf so that httpd listens on port 9876 ( Listen 9876 ), but policy is not updated to reflect this, the following command fails: An SELinux denial message similar to the following is logged to /var/log/audit/audit.log : To allow httpd to listen on a port that is not listed for the http_port_t port type, enter the semanage port command to add a port to policy configuration [9] : The -a option adds a new record; the -t option defines a type; and the -p option defines a protocol. The last argument is the port number to add. 11.2.3. Evolving Rules and Broken Applications Applications may be broken, causing SELinux to deny access. Also, SELinux rules are evolving - SELinux may not have seen an application running in a certain way, possibly causing it to deny access, even though the application is working as expected. For example, if a new version of PostgreSQL is released, it may perform actions the current policy has not seen before, causing access to be denied, even though access should be allowed. For these situations, after access is denied, use the audit2allow utility to create a custom policy module to allow access. See Section 11.3.8, "Allowing Access: audit2allow" for information about using audit2allow . [8] Files in /etc/selinux/targeted/contexts/files/ define contexts for files and directories. Files in this directory are read by the restorecon and setfiles utilities to restore files and directories to their default contexts. [9] The semanage port -a command adds an entry to the /etc/selinux/targeted/modules/active/ports.local file. Note that by default, this file can only be viewed by root.
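As a sketch of the audit2allow workflow referenced in Section 11.2.3 (the module name mylocalpolicy is an arbitrary placeholder), a custom policy module can be built from logged denials and loaded as root:
# Generate a local policy module from recent AVC denial messages and install it.
grep "denied" /var/log/audit/audit.log | audit2allow -M mylocalpolicy
semodule -i mylocalpolicy.pp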
[ "~]# semanage fcontext -a -t httpd_sys_content_t \"/srv/myweb(/.*)?\"", "~]# restorecon -R -v /srv/myweb", "~]USD matchpathcon -V /var/www/html/* /var/www/html/index.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 /var/www/html/page1.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0", "~]# restorecon -v /var/www/html/index.html restorecon reset /var/www/html/index.html context unconfined_u:object_r:user_home_t:s0->system_u:object_r:httpd_sys_content_t:s0", "~]# restorecon -R -v /var/www/html/ restorecon reset /var/www/html/page1.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /var/www/html/index.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0", "~]# setsebool -P httpd_can_network_connect_db on", "~]USD getsebool -a | grep ftp ftpd_anon_write --> off ftpd_full_access --> off ftpd_use_cifs --> off ftpd_use_nfs --> off ftpd_connect_db --> off httpd_enable_ftp_server --> off tftp_anon_write --> off", "~]# semanage port -l | grep http http_cache_port_t tcp 3128, 8080, 8118 http_cache_port_t udp 3130 http_port_t tcp 80, 443, 488, 8008, 8009, 8443 pegasus_http_port_t tcp 5988 pegasus_https_port_t tcp 5989", "~]# systemctl start httpd.service Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details.", "~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: failed (Result: exit-code) since Thu 2013-08-15 09:57:05 CEST; 59s ago Process: 16874 ExecStop=/usr/sbin/httpd USDOPTIONS -k graceful-stop (code=exited, status=0/SUCCESS) Process: 16870 ExecStart=/usr/sbin/httpd USDOPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)", "type=AVC msg=audit(1225948455.061:294): avc: denied { name_bind } for pid=4997 comm=\"httpd\" src=9876 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket", "~]# semanage port -a -t http_port_t -p tcp 9876" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-troubleshooting-top_three_causes_of_problems
Chapter 11. Migrating
Chapter 11. Migrating Warning The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation. If you are already using the Red Hat OpenShift distributed tracing platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project. The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications. Migration from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments. 11.1. Migrating with sidecars The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a distributed tracing platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This endpoint points to the Gateway of a TempoStack instance deployed by using the <example> Tempo Operator. Create a service account for running your application. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar Create a cluster role for the permissions needed by some processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status. 
Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object. Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object. Use the created service account for the deployment of your application to allow the processors to get the correct information and add it to your traces. 11.2. Migrating without sidecars You can migrate from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure OpenTelemetry Collector deployment. Create the project where the OpenTelemetry Collector will be deployed. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account for running the OpenTelemetry Collector instance. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Create a cluster role for setting the required permissions for the processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 Permissions for the pods and namespaces resources are required for the k8sattributesprocessor . 2 Permissions for infrastructures and infrastructures/status are required for resourcedetectionprocessor . Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the OpenTelemetry Collector instance. Note This collector will export traces to a TempoStack instance. You must create your TempoStack instance by using the Red Hat Tempo Operator and place here the correct endpoint. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-example-gateway:8090" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] Point your tracing endpoint to the OpenTelemetry Operator. 
If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint. Example of exporting traces by using the jaegerexporter with Golang exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1 1 The URL points to the OpenTelemetry Collector API endpoint.
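As a sketch of the annotation changes described in the sidecar migration steps (the deployment name myapp and the namespace otel-collector-example are placeholders), the switch from the Jaeger agent to the OpenTelemetry sidecar might look like this:
# Remove the Jaeger injection annotation from the Deployment metadata.
oc -n otel-collector-example annotate deployment/myapp sidecar.jaegertracing.io/inject-
# Add the OpenTelemetry injection annotation to the pod template.
oc -n otel-collector-example patch deployment/myapp --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.opentelemetry.io/inject":"true"}}}}}'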
[ "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/dist-tracing-otel-migrating
Chapter 57. Data objects
Chapter 57. Data objects Data objects are the building blocks for the rule assets that you create. Data objects are custom data types implemented as Java objects in specified packages of your project. For example, you might create a Person object with data fields Name , Address , and DateOfBirth to specify personal details for loan application rules. These custom data types determine what data your assets and your decision services are based on. 57.1. Creating data objects The following procedure is a generic overview of creating data objects. It is not specific to a particular business asset. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Data Object . Enter a unique Data Object name and select the Package where you want the data object to be available for other rule assets. Data objects with the same name cannot exist in the same package. In the specified DRL file, you can import a data object from any package. Importing data objects from other packages You can import an existing data object from another package directly into the asset designers like guided rules or guided decision table designers. Select the relevant rule asset within the project and in the asset designer, go to Data Objects New item to select the object to be imported. To make your data object persistable, select the Persistable checkbox. Persistable data objects are able to be stored in a database according to the JPA specification. The default JPA is Hibernate. Click Ok . In the data object designer, click add field to add a field to the object with the attributes Id , Label , and Type . Required attributes are marked with an asterisk (*). Id: Enter the unique ID of the field. Label: (Optional) Enter a label for the field. Type: Enter the data type of the field. List: (Optional) Select this check box to enable the field to hold multiple items for the specified type. Figure 57.1. Add data fields to a data object Click Create to add the new field, or click Create and continue to add the new field and continue adding other fields. Note To edit a field, select the field row and use the general properties on the right side of the screen.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/data-objects-con_guided-rule-templates
8.197. seabios
8.197. seabios 8.197.1. RHBA-2013:1655 - seabios bug fix and enhancement update Updated seabios packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The seabios packages contain an open-source legacy BIOS implementation which can be used as a coreboot payload. It implements the standard BIOS calling interfaces that a typical x86 proprietary BIOS implements. Bug Fixes BZ# 846519 Due to a bug in the Advanced Configuration and Power Interface (ACPI) description table, a stop error (known as Blue Screen of Death, or BSoD) could occur on certain Windows guests. The problem was observed on the guests using the virtio-win small computer system interface (SCSI) drivers during heavy S3 and S4 power state transitions, for example when running the CrystalDiskMark benchmark. A patch has been applied to fix this bug, and a stop error no longer occurs in the described scenario. BZ# 846912 The Multiple APIC Description Table (MADT) previously contained an incorrect definition. Consequently, after switching to the S3 or S4 power state, the SCSI driver could not be disabled and was always attached to the Global System Interrupt (GSI) number 9. The incorrect MADT definition has been fixed and the SCSI driver can now be disabled and enabled in this situation as expected. BZ# 888633 The SeaBIOS utility previously allowed booting even from deselected devices when no bootable device was found. This problem has been fixed by adding support for the HALT instruction, which prevents SeaBIOS from default boot attempts if no bootable device is available. Enhancements BZ#876250 With this update, the SMBIOS GUID number (the same number as the system's UUID) is now displayed on the BIOS Power-on self-test (POST) screen. BZ#963312 This update modifies the virsh management user interface so the "virsh dump" command now supports automatic core dumps. With appropriate kdump mechanism settings, the vmcore file is automatically captured and the subsequent actions are executed automatically. Users of seabios are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/seabios
Project APIs
Project APIs OpenShift Container Platform 4.13 Reference guide for project APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/project_apis/index
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide.
Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
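As a quick check for the signed-certificate prerequisites above (the host name and port are placeholders), the certificate presented by a Vault or KMIP endpoint can be inspected with openssl:
# Show the subject, issuer, and validity dates of the server certificate.
openssl s_client -connect vault.example.com:8200 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates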
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preparing_to_deploy_openshift_data_foundation
11.3. Date Specifications
11.3. Date Specifications Date specifications are used to create cron-like expressions relating to time. Each field can contain a single number or a single range. Instead of defaulting to zero, any field not supplied is ignored. For example, monthdays="1" matches the first day of every month and hours="09-17" matches the hours between 9 am and 5 pm (inclusive). However, you cannot specify weekdays="1,2" or weekdays="1-2,5-6" since they contain multiple ranges. Table 11.5. Properties of a Date Specification Field Description id A unique name for the date hours Allowed values: 0-23 monthdays Allowed values: 0-31 (depending on month and year) weekdays Allowed values: 1-7 (1=Monday, 7=Sunday) yeardays Allowed values: 1-366 (depending on the year) months Allowed values: 1-12 weeks Allowed values: 1-53 (depending on weekyear ) years Year according to the Gregorian calendar weekyears May differ from Gregorian years; for example, 2005-001 Ordinal is also 2005-01-01 Gregorian is also 2004-W53-6 Weekly moon Allowed values: 0-7 (0 is new, 4 is full moon).
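As an illustrative sketch only (the resource name Webserver is a placeholder, and the exact rule syntax should be checked against the constraint rule sections of this guide), a date specification is typically combined with a location constraint rule, for example to express business hours:
# Example rule using a date specification: 9 am to 4 pm, Monday through Friday.
pcs constraint location Webserver rule score=INFINITY date-spec hours=9-16 weekdays=1-5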
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/_date_specifications
Chapter 8. Accessing Support Using the Red Hat Support Tool
Chapter 8. Accessing Support Using the Red Hat Support Tool The Red Hat Support Tool , in the redhat-support-tool package, can function as both an interactive shell and as a single-execution program. It can be run over SSH or from any terminal. It enables, for example, searching the Red Hat Knowledgebase from the command line, copying solutions directly on the command line, opening and updating support cases, and sending files to Red Hat for analysis. 8.1. Installing the Red Hat Support Tool The Red Hat Support Tool is installed by default on Red Hat Enterprise Linux. If required, to ensure that it is, enter the following command as root : 8.2. Registering the Red Hat Support Tool Using the Command Line To register the Red Hat Support Tool to the customer portal using the command line, run the following commands: Where username is the user name of the Red Hat Customer Portal account. 8.3. Using the Red Hat Support Tool in Interactive Shell Mode To start the tool in interactive mode, enter the following command: The tool can be run as an unprivileged user, with a consequently reduced set of commands, or as root . The commands can be listed by entering the ? character. The program or menu selection can be exited by entering the q or e character. You will be prompted for your Red Hat Customer Portal user name and password when you first search the Knowledgebase or support cases. Alternately, set the user name and password for your Red Hat Customer Portal account using interactive mode, and optionally save it to the configuration file. 8.4. Configuring the Red Hat Support Tool When in interactive mode, the configuration options can be listed by entering the command config --help : Registering the Red Hat Support Tool Using Interactive Mode To register the Red Hat Support Tool to the customer portal using interactive mode, proceed as follows: Start the tool by entering the following command: Enter your Red Hat Customer Portal user name: To save your user name to the global configuration file, add the -g option. Enter your Red Hat Customer Portal password: 8.4.1. Saving Settings to the Configuration Files The Red Hat Support Tool , unless otherwise directed, stores values and options locally in the home directory of the current user, using the ~/.redhat-support-tool/redhat-support-tool.conf configuration file. If required, it is recommended to save passwords to this file because it is only readable by that particular user. When the tool starts, it will read values from the global configuration file /etc/redhat-support-tool.conf and from the local configuration file. Locally stored values and options take precedence over globally stored settings. Warning It is recommended not to save passwords in the global /etc/redhat-support-tool.conf configuration file because the password is just base64 encoded and can easily be decoded. In addition, the file is world readable. To save a value or option to the global configuration file, add the -g, --global option as follows: Note In order to be able to save settings globally, using the -g, --global option, the Red Hat Support Tool must be run as root because normal users do not have the permissions required to write to /etc/redhat-support-tool.conf . To remove a value or option from the local configuration file, add the -u, --unset option as follows: This will clear, unset, the parameter from the tool and fall back to the equivalent setting in the global configuration file, if available. 
Note When running as an unprivileged user, values stored in the global configuration file cannot be removed using the -u, --unset option, but they can be cleared, unset, from the current running instance of the tool by using the -g, --global option simultaneously with the -u, --unset option. If running as root , values and options can be removed from the global configuration file using -g, --global simultaneously with the -u, --unset option. 8.5. Opening and Updating Support Cases Using Interactive Mode Opening a New Support Case Using Interactive Mode To open a new support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the opencase command: Follow the on screen prompts to select a product and then a version. Enter a summary of the case. Enter a description of the case and press Ctrl + D on an empty line when complete. Select a severity of the case. Optionally choose to see if there is a solution to this problem before opening a support case. Confirm you would still like to open the support case. Optionally choose to attach an SOS report. Optionally choose to attach a file. Viewing and Updating an Existing Support Case Using Interactive Mode To view and update an existing support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the getcase command: Where case-number is the number of the case you want to view and update. Follow the on screen prompts to view the case, modify or add comments, and get or add attachments. Modifying an Existing Support Case Using Interactive Mode To modify the attributes of an existing support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the modifycase command: Where case-number is the number of the case you want to view and update. The modify selection list appears: Follow the on screen prompts to modify one or more of the options. For example, to modify the status, enter 3 : 8.6. Viewing Support Cases on the Command Line Viewing the contents of a case on the command line provides a quick and easy way to apply solutions from the command line. To view an existing support case on the command line, enter a command as follows: Where case-number is the number of the case you want to download. 8.7. Additional Resources The Red Hat Knowledgebase article Red Hat Support Tool has additional information, examples, and video tutorials.
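As a sketch of setting the proxy-related options listed in the config --help output earlier in this chapter (the proxy URL and user are placeholders), from interactive mode:
Command (? for help): config proxy_url http://proxy.example.com:3128
Command (? for help): config proxy_user squid-user
The option names come from the config --help listing; whether proxy_password prompts interactively in the same way as the portal password shown earlier is an assumption, so verify it on your system before relying on it.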
[ "~]# yum install redhat-support-tool", "~]# redhat-support-tool config user username", "~]# redhat-support-tool config password Please enter the password for username :", "~]USD redhat-support-tool Welcome to the Red Hat Support Tool. Command (? for help):", "~]# redhat-support-tool Welcome to the Red Hat Support Tool. Command (? for help): config --help Usage: config [options] config.option <new option value> Use the 'config' command to set or get configuration file values. Options: -h, --help show this help message and exit -g, --global Save configuration option in /etc/redhat-support-tool.conf. -u, --unset Unset configuration option. The configuration file options which can be set are: user : The Red Hat Customer Portal user. password : The Red Hat Customer Portal password. debug : CRITICAL, ERROR, WARNING, INFO, or DEBUG url : The support services URL. Default=https://api.access.redhat.com proxy_url : A proxy server URL. proxy_user: A proxy server user. proxy_password: A password for the proxy server user. ssl_ca : Path to certificate authorities to trust during communication. kern_debug_dir: Path to the directory where kernel debug symbols should be downloaded and cached. Default=/var/lib/redhat-support-tool/debugkernels Examples: - config user - config user my-rhn-username - config --unset user", "~]# redhat-support-tool", "Command (? for help): config user username", "Command (? for help): config password Please enter the password for username :", "Command (? for help): config setting -g value", "Command (? for help): config setting -u value", "~]# redhat-support-tool", "Command (? for help): opencase", "Support case 0123456789 has successfully been opened", "~]# redhat-support-tool", "Command (? for help): getcase case-number", "~]# redhat-support-tool", "Command (? for help): modifycase case-number", "Type the number of the attribute to modify or 'e' to return to the previous menu. 1 Modify Type 2 Modify Severity 3 Modify Status 4 Modify Alternative-ID 5 Modify Product 6 Modify Version End of options.", "Selection: 3 1 Waiting on Customer 2 Waiting on Red Hat 3 Closed Please select a status (or 'q' to exit):", "~]# redhat-support-tool getcase case-number" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Red_Hat_Support_Tool
Chapter 6. Deploying Red Hat Quay using the Operator
Chapter 6. Deploying Red Hat Quay using the Operator Red Hat Quay on OpenShift Container Platform can be deployed using command-line interface or from the OpenShift Container Platform console. The steps are fundamentally the same. 6.1. Deploying Red Hat Quay from the command line Use the following procedure to deploy Red Hat Quay from using the command-line interface (CLI). Prerequisites You have logged into OpenShift Container Platform using the CLI. Procedure Create a namespace, for example, quay-enterprise , by entering the following command: USD oc new-project quay-enterprise Optional. If you want to pre-configure any aspects of your Red Hat Quay deployment, create a Secret for the config bundle: USD oc create secret generic quay-enterprise-config-bundle --from-file=config-bundle.tar.gz=/path/to/config-bundle.tar.gz Create a QuayRegistry custom resource in a file called quayregistry.yaml For a minimal deployment, using all the defaults: quayregistry.yaml: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise Optional. If you want to have some components unmanaged, add this information in the spec field. A minimal deployment might look like the following example: Example quayregistry.yaml with unmanaged components apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: clair managed: false - kind: horizontalpodautoscaler managed: false - kind: mirror managed: false - kind: monitoring managed: false Optional. If you have created a config bundle, for example, init-config-bundle-secret , reference it in the quayregistry.yaml file: Example quayregistry.yaml with a config bundle apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret Optional. 
If you have a proxy configured, you can add the information using overrides for Red Hat Quay, Clair, and mirroring: Example quayregistry.yaml with proxy configured kind: QuayRegistry metadata: name: quay37 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: mirror managed: true overrides: env: - name: DEBUGLOG value: "true" - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: tls managed: false - kind: clair managed: true overrides: env: - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: quay managed: true overrides: env: - name: DEBUGLOG value: "true" - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 Create the QuayRegistry in the specified namespace by entering the following command: USD oc create -n quay-enterprise -f quayregistry.yaml Enter the following command to see when the status.registryEndpoint is populated: USD oc get quayregistry -n quay-enterprise example-registry -o jsonpath="{.status.registryEndpoint}" -w Additional resources For more information about how to track the progress of your Red Hat Quay deployment, see Monitoring and debugging the deployment process . 6.1.1. Using the API to create the first user Use the following procedure to create the first user in your Red Hat Quay organization. Prerequisites The config option FEATURE_USER_INITIALIZE must be set to true . No users can already exist in the database. Procedure This procedure requests an OAuth token by specifying "access_token": true . Open your Red Hat Quay configuration file and update the following configuration fields: FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin Stop the Red Hat Quay service by entering the following command: USD sudo podman stop quay Start the Red Hat Quay service by entering the following command: USD sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv} Run the following CURL command to generate a new user with a username, password, email, and access token: USD curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass12345", "email": "[email protected]", "access_token": true}' If successful, the command returns an object with the username, email, and encrypted password. 
For example: {"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"[email protected]","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"} # gitleaks:allow If a user already exists in the database, an error is returned: {"message":"Cannot initialize user in a non-empty database"} If your password is not at least eight characters or contains whitespace, an error is returned: {"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."} Log in to your Red Hat Quay deployment by entering the following command: USD sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false Example output Login Succeeded! 6.1.2. Viewing created components using the command line Use the following procedure to view deployed Red Hat Quay components. Prerequisites You have deployed Red Hat Quay on OpenShift Container Platform. Procedure Enter the following command to view the deployed components: USD oc get pods -n quay-enterprise Example output NAME READY STATUS RESTARTS AGE example-registry-clair-app-5ffc9f77d6-jwr9s 1/1 Running 0 3m42s example-registry-clair-app-5ffc9f77d6-wgp7d 1/1 Running 0 3m41s example-registry-clair-postgres-54956d6d9c-rgs8l 1/1 Running 0 3m5s example-registry-quay-app-79c6b86c7b-8qnr2 1/1 Running 4 3m42s example-registry-quay-app-79c6b86c7b-xk85f 1/1 Running 4 3m41s example-registry-quay-app-upgrade-5kl5r 0/1 Completed 4 3m50s example-registry-quay-database-b466fc4d7-tfrnx 1/1 Running 2 3m42s example-registry-quay-mirror-6d9bd78756-6lj6p 1/1 Running 0 2m58s example-registry-quay-mirror-6d9bd78756-bv6gq 1/1 Running 0 2m58s example-registry-quay-postgres-init-dzbmx 0/1 Completed 0 3m43s example-registry-quay-redis-8bd67b647-skgqx 1/1 Running 0 3m42s 6.1.3. Horizontal Pod Autoscaling A default deployment shows the following running pods: Two pods for the Red Hat Quay application itself ( example-registry-quay-app-*` ) One Redis pod for Red Hat Quay logging ( example-registry-quay-redis-* ) One database pod for PostgreSQL used by Red Hat Quay for metadata storage ( example-registry-quay-database-* ) Two Quay mirroring pods ( example-registry-quay-mirror-* ) Two pods for the Clair application ( example-registry-clair-app-* ) One PostgreSQL pod for Clair ( example-registry-clair-postgres-* ) Horizontal PPod Autoscaling is configured by default to be managed , and the number of pods for Quay, Clair and repository mirroring is set to two. This facilitates the avoidance of downtime when updating or reconfiguring Red Hat Quay through the Red Hat Quay Operator or during rescheduling events. You can enter the following command to view information about HPA objects: USD oc get hpa -n quay-enterprise Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE example-registry-clair-app Deployment/example-registry-clair-app 16%/90%, 0%/90% 2 10 2 13d example-registry-quay-app Deployment/example-registry-quay-app 31%/90%, 1%/90% 2 20 2 13d example-registry-quay-mirror Deployment/example-registry-quay-mirror 27%/90%, 0%/90% 2 20 2 13d Additional resources For more information on pre-configuring your Red Hat Quay deployment, see the section Pre-configuring Red Hat Quay for automation 6.1.4. Monitoring and debugging the deployment process Users can now troubleshoot problems during the deployment phase. 
The status in the QuayRegistry object can help you monitor the health of the components during the deployment an help you debug any problems that may arise. Procedure Enter the following command to check the status of your deployment: USD oc get quayregistry -n quay-enterprise -o yaml Example output Immediately after deployment, the QuayRegistry object will show the basic configuration: apiVersion: v1 items: - apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: creationTimestamp: "2021-09-14T10:51:22Z" generation: 3 name: example-registry namespace: quay-enterprise resourceVersion: "50147" selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e spec: components: - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true configBundleSecret: example-registry-config-bundle-kt55s kind: List metadata: resourceVersion: "" selfLink: "" Use the oc get pods command to view the current state of the deployed components: USD oc get pods -n quay-enterprise Example output NAME READY STATUS RESTARTS AGE example-registry-clair-app-86554c6b49-ds7bl 0/1 ContainerCreating 0 2s example-registry-clair-app-86554c6b49-hxp5s 0/1 Running 1 17s example-registry-clair-postgres-68d8857899-lbc5n 0/1 ContainerCreating 0 17s example-registry-quay-app-upgrade-h2v7h 0/1 ContainerCreating 0 9s example-registry-quay-database-66f495c9bc-wqsjf 0/1 ContainerCreating 0 17s example-registry-quay-mirror-854c88457b-d845g 0/1 Init:0/1 0 2s example-registry-quay-mirror-854c88457b-fghxv 0/1 Init:0/1 0 17s example-registry-quay-postgres-init-bktdt 0/1 Terminating 0 17s example-registry-quay-redis-f9b9d44bf-4htpz 0/1 ContainerCreating 0 17s While the deployment is in progress, the QuayRegistry object will show the current status. In this instance, database migrations are taking place, and other components are waiting until completion: status: conditions: - lastTransitionTime: "2021-09-14T10:52:04Z" lastUpdateTime: "2021-09-14T10:52:04Z" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: "False" type: RolloutBlocked - lastTransitionTime: "2021-09-14T10:52:05Z" lastUpdateTime: "2021-09-14T10:52:05Z" message: running database migrations reason: MigrationsInProgress status: "False" type: Available lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC unhealthyComponents: clair: - lastTransitionTime: "2021-09-14T10:51:32Z" lastUpdateTime: "2021-09-14T10:51:32Z" message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: "2021-09-14T10:51:32Z" lastUpdateTime: "2021-09-14T10:51:32Z" message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: "False" type: Available mirror: - lastTransitionTime: "2021-09-14T10:51:32Z" lastUpdateTime: "2021-09-14T10:51:32Z" message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.' 
reason: MinimumReplicasUnavailable status: "False" type: Available When the deployment process finishes successfully, the status in the QuayRegistry object shows no unhealthy components: status: conditions: - lastTransitionTime: "2021-09-14T10:52:36Z" lastUpdateTime: "2021-09-14T10:52:36Z" message: all registry component healthchecks passing reason: HealthChecksPassing status: "True" type: Available - lastTransitionTime: "2021-09-14T10:52:46Z" lastUpdateTime: "2021-09-14T10:52:46Z" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: "False" type: RolloutBlocked currentVersion: {producty} lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org unhealthyComponents: {} 6.2. Deploying Red Hat Quay from the OpenShift Container Platform console Create a namespace, for example, quay-enterprise . Select Operators Installed Operators , then select the Quay Operator to navigate to the Operator detail view. Click 'Create Instance' on the 'Quay Registry' tile under 'Provided APIs'. Optionally change the 'Name' of the QuayRegistry . This will affect the hostname of the registry. All other fields have been populated with defaults. Click 'Create' to submit the QuayRegistry to be deployed by the Quay Operator. You should be redirected to the QuayRegistry list view. Click on the QuayRegistry you just created to see the details view. Once the 'Registry Endpoint' has a value, click it to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in. 6.2.1. Using the Red Hat Quay UI to create the first user Use the following procedure to create the first user by the Red Hat Quay UI. Note This procedure assumes that the FEATURE_USER_CREATION config option has not been set to false. If it is false , the Create Account functionality on the UI will be disabled, and you will have to use the API to create the first user. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators , with the appropriate namespace / project. Click on the newly installed QuayRegistry object to view the details. For example: After the Registry Endpoint has a value, navigate to this URL in your browser. Select Create Account in the Red Hat Quay registry UI to create a user. For example: Enter the details for Username , Password , Email , and then click Create Account . For example: After creating the first user, you are automatically logged in to the Red Hat Quay registry. For example:
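Separately, when scripting a command-line deployment, it can be useful to block until the registry reports healthy before creating the first user. A minimal sketch, assuming the QuayRegistry is named example-registry in the quay-enterprise namespace as in the earlier examples:
$ oc wait quayregistry/example-registry -n quay-enterprise --for=condition=Available --timeout=20m
$ oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.status.registryEndpoint}{"\n"}'
The first command returns once the Available condition shown in the QuayRegistry status becomes true; the second prints the registry endpoint for use in follow-up steps.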
[ "oc new-project quay-enterprise", "oc create secret generic quay-enterprise-config-bundle --from-file=config-bundle.tar.gz=/path/to/config-bundle.tar.gz", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: clair managed: false - kind: horizontalpodautoscaler managed: false - kind: mirror managed: false - kind: monitoring managed: false", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret", "kind: QuayRegistry metadata: name: quay37 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: mirror managed: true overrides: env: - name: DEBUGLOG value: \"true\" - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: tls managed: false - kind: clair managed: true overrides: env: - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - kind: quay managed: true overrides: env: - name: DEBUGLOG value: \"true\" - name: NO_PROXY value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com - name: HTTP_PROXY value: quayproxy.qe.devcluster.openshift.com:3128 - name: HTTPS_PROXY value: quayproxy.qe.devcluster.openshift.com:3128", "oc create -n quay-enterprise -f quayregistry.yaml", "oc get quayregistry -n quay-enterprise example-registry -o jsonpath=\"{.status.registryEndpoint}\" -w", "FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin", "sudo podman stop quay", "sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ \"username\": \"quayadmin\", \"password\":\"quaypass12345\", \"email\": \"[email protected]\", \"access_token\": true}'", "{\"access_token\":\"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\", \"email\":\"[email protected]\",\"encrypted_password\":\"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW\",\"username\":\"quayadmin\"} # gitleaks:allow", "{\"message\":\"Cannot initialize user in a non-empty database\"}", "{\"message\":\"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace.\"}", "sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false", "Login Succeeded!", "oc get pods -n quay-enterprise", "NAME READY STATUS RESTARTS AGE example-registry-clair-app-5ffc9f77d6-jwr9s 1/1 Running 0 3m42s example-registry-clair-app-5ffc9f77d6-wgp7d 1/1 Running 0 3m41s example-registry-clair-postgres-54956d6d9c-rgs8l 1/1 Running 0 3m5s example-registry-quay-app-79c6b86c7b-8qnr2 1/1 Running 4 3m42s example-registry-quay-app-79c6b86c7b-xk85f 1/1 Running 4 3m41s example-registry-quay-app-upgrade-5kl5r 0/1 Completed 4 3m50s 
example-registry-quay-database-b466fc4d7-tfrnx 1/1 Running 2 3m42s example-registry-quay-mirror-6d9bd78756-6lj6p 1/1 Running 0 2m58s example-registry-quay-mirror-6d9bd78756-bv6gq 1/1 Running 0 2m58s example-registry-quay-postgres-init-dzbmx 0/1 Completed 0 3m43s example-registry-quay-redis-8bd67b647-skgqx 1/1 Running 0 3m42s", "oc get hpa -n quay-enterprise", "NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE example-registry-clair-app Deployment/example-registry-clair-app 16%/90%, 0%/90% 2 10 2 13d example-registry-quay-app Deployment/example-registry-quay-app 31%/90%, 1%/90% 2 20 2 13d example-registry-quay-mirror Deployment/example-registry-quay-mirror 27%/90%, 0%/90% 2 20 2 13d", "oc get quayregistry -n quay-enterprise -o yaml", "apiVersion: v1 items: - apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: creationTimestamp: \"2021-09-14T10:51:22Z\" generation: 3 name: example-registry namespace: quay-enterprise resourceVersion: \"50147\" selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e spec: components: - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true configBundleSecret: example-registry-config-bundle-kt55s kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc get pods -n quay-enterprise", "NAME READY STATUS RESTARTS AGE example-registry-clair-app-86554c6b49-ds7bl 0/1 ContainerCreating 0 2s example-registry-clair-app-86554c6b49-hxp5s 0/1 Running 1 17s example-registry-clair-postgres-68d8857899-lbc5n 0/1 ContainerCreating 0 17s example-registry-quay-app-upgrade-h2v7h 0/1 ContainerCreating 0 9s example-registry-quay-database-66f495c9bc-wqsjf 0/1 ContainerCreating 0 17s example-registry-quay-mirror-854c88457b-d845g 0/1 Init:0/1 0 2s example-registry-quay-mirror-854c88457b-fghxv 0/1 Init:0/1 0 17s example-registry-quay-postgres-init-bktdt 0/1 Terminating 0 17s example-registry-quay-redis-f9b9d44bf-4htpz 0/1 ContainerCreating 0 17s", "status: conditions: - lastTransitionTime: \"2021-09-14T10:52:04Z\" lastUpdateTime: \"2021-09-14T10:52:04Z\" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: \"False\" type: RolloutBlocked - lastTransitionTime: \"2021-09-14T10:52:05Z\" lastUpdateTime: \"2021-09-14T10:52:05Z\" message: running database migrations reason: MigrationsInProgress status: \"False\" type: Available lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC unhealthyComponents: clair: - lastTransitionTime: \"2021-09-14T10:51:32Z\" lastUpdateTime: \"2021-09-14T10:51:32Z\" message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: \"False\" type: Available - lastTransitionTime: \"2021-09-14T10:51:32Z\" lastUpdateTime: \"2021-09-14T10:51:32Z\" message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.' reason: MinimumReplicasUnavailable status: \"False\" type: Available mirror: - lastTransitionTime: \"2021-09-14T10:51:32Z\" lastUpdateTime: \"2021-09-14T10:51:32Z\" message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.' 
reason: MinimumReplicasUnavailable status: \"False\" type: Available", "status: conditions: - lastTransitionTime: \"2021-09-14T10:52:36Z\" lastUpdateTime: \"2021-09-14T10:52:36Z\" message: all registry component healthchecks passing reason: HealthChecksPassing status: \"True\" type: Available - lastTransitionTime: \"2021-09-14T10:52:46Z\" lastUpdateTime: \"2021-09-14T10:52:46Z\" message: all objects created/updated successfully reason: ComponentsCreationSuccess status: \"False\" type: RolloutBlocked currentVersion: {producty} lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org unhealthyComponents: {}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-deploy
Chapter 6. Geo-replication
Chapter 6. Geo-replication Note Currently, the geo-replication feature is not supported on IBM Power and IBM Z. Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication are supported on standalone and Operator deployments. Additional resources For more information about the geo-replication feature's architecture, see the architecture guide , which includes technical diagrams and a high-level overview. 6.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to other storage engines. The list of replication locations is configurable, and these can be different storage backends. An image pull will always use the closest available storage engine to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 6.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other regions' object storage. Object storage must be geographically accessible by all other regions. In case of an object storage system failure of one geo-replicating site, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. Users must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status. To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites. Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not failover to another database. A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. 
It does not work with local storage. Each region must be able to access every storage engine in each region, which requires a network path. Alternatively, the storage proxy option can be used. The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged . An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration . Geo-replication requires SSL/TLS certificates and keys. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions. 6.2.1. Setting up geo-replication on OpenShift Container Platform Use the following procedure to set up geo-replication on OpenShift Container Platform. Procedure Deploy a postgres instance for Red Hat Quay. Log in to the database by entering the following command: psql -U <username> -h <hostname> -p <port> -d <database_name> Create a database for Red Hat Quay named quay . For example: CREATE DATABASE quay; Enable the pg_trgm extension inside the database: \c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm; Deploy a Redis instance: Note Deploying a Redis instance might be unnecessary if your cloud provider has its own service. Deploying a Redis instance is required if you are leveraging Builders. Deploy a VM for Redis Verify that it is accessible from the clusters where Red Hat Quay is running Port 6379/TCP must be open Run Redis inside the instance sudo dnf install -y podman podman run -d --name redis -p 6379:6379 redis Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will run closer to the second, or secondary, cluster. Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster. Configure a load balancer to provide a single entry point to the clusters. 6.2.1.1. Configuring geo-replication for Red Hat Quay on OpenShift Container Platform Use the following procedure to configure geo-replication for Red Hat Quay on OpenShift Container Platform. Procedure Create a config.yaml file that is shared between clusters. 
This config.yaml file contains the details for the common PostgreSQL, Redis and storage backends: Geo-replication config.yaml file SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true 1 A proper SERVER_HOSTNAME must be used for the route and must match the hostname of the global load balancer. 2 To retrieve the configuration file for a Clair instance deployed using the OpenShift Container Platform Operator, see Retrieving the Clair config . Create the configBundleSecret by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environmental variable override to configure the appropriate storage for that cluster. For example: Note The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other. US cluster QuayRegistry example apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring TLS and routes . European cluster apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates with either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes . 6.2.2. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. 
This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication. 6.3. Upgrading a geo-replication deployment of Red Hat Quay on OpenShift Container Platform Use the following procedure to upgrade your geo-replicated Red Hat Quay on OpenShift Container Platform deployment. Important When upgrading a geo-replicated Red Hat Quay on OpenShift Container Platform deployment to the next y-stream release (for example, Red Hat Quay 3.7 to Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime when upgrading from one y-stream release to the next. It is highly recommended to back up your Red Hat Quay on OpenShift Container Platform deployment before upgrading. Procedure This procedure assumes that you are running the Red Hat Quay registry on three or more systems. For this procedure, we will assume three systems named System A, System B, and System C . System A will serve as the primary system on which the Red Hat Quay Operator is deployed. On System B and System C, scale down your Red Hat Quay registry. This is done by disabling auto scaling and overriding the replica count for Red Hat Quay, mirror workers, and Clair if it is managed. Use the following quayregistry.yaml file as a reference: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ... 1 Disable auto scaling of Quay , Clair and Mirroring workers 2 Set the replica count to 0 for components accessing the database and objectstorage Note You must keep the Red Hat Quay registry running on System A. Do not update the quayregistry.yaml file on System A. Wait for the registry-quay-app , registry-quay-mirror , and registry-clair-app pods to disappear. Enter the following command to check their status: oc get pods -n <quay-namespace> Example output quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m On System A, initiate a Red Hat Quay upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators . For more information about Red Hat Quay upgrade paths, see Upgrading the Red Hat Quay Operator . After the new Red Hat Quay registry is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Red Hat Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started. 
Confirm that the update has properly worked by navigating to the Red Hat Quay UI: In the OpenShift console, navigate to Operators Installed Operators , and click the Registry Endpoint link. Important Do not execute the following step until the Red Hat Quay UI is available. Do not upgrade the Red Hat Quay registry on System B and on System C until the UI is available on System A. After confirming that the update has properly worked on System A, initiate the Red Hat Quay upgrade on System B and on System C. The Operator upgrade results in an upgraded Red Hat Quay installation, and the pods are restarted. Note Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should quickly start. 6.3.1. Removing a geo-replicated site from your Red Hat Quay on OpenShift Container Platform deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You are logged into OpenShift Container Platform. You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage . Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: USD python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. When running this command, replication jobs are created which are picked up by the replication worker. If there are blobs that need replicated, the script returns UUIDs of blobs that will be replicated. If you run this command multiple times, and the output from the return script is empty, it does not mean that the replication process is done; it means that there are no more blobs to be queued for replication. Customers should use appropriate judgement before proceeding, as the allotted time replication takes depends on the number of blobs detected. Alternatively, you could use a third party cloud tool, such as Microsoft Azure, to check the synchronization status. This step must be completed before proceeding. In your Red Hat Quay config.yaml file for site usstorage , remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site. Enter the following command to identify your Quay application pods: USD oc get pod -n <quay_namespace> Example output quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm Enter the following command to open an interactive shell session in the usstorage pod: USD oc rsh quay390usstorage-quay-app-5779ddc886-2drh2 Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. sh-4.4USD python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage
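As described in the requirements section, a global load balancer should route traffic based on each site's /health/endtoend endpoint. A minimal sketch of such a check, where the per-site hostnames are examples only:
$ curl -ks -o /dev/null -w '%{http_code}\n' https://us.quay.example.com/health/endtoend
$ curl -ks -o /dev/null -w '%{http_code}\n' https://eu.quay.example.com/health/endtoend
A site that does not return HTTP 200 should be taken out of rotation until its object storage is restored.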
[ "psql -U <username> -h <hostname> -p <port> -d <database_name>", "CREATE DATABASE quay;", "\\c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm;", "sudo dnf install -y podman run -d --name redis -p 6379:6379 redis", "SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true", "oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...", "get pods -n <quay-namespace>", "quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m", "python -m util.backfillreplication", "oc get pod -n <quay_namespace>", "quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm", "oc rsh quay390usstorage-quay-app-5779ddc886-2drh2", "sh-4.4USD python -m util.removelocation eustorage", "WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_operator_features/georepl-intro
Chapter 31. Configuring Host-Based Access Control
Chapter 31. Configuring Host-Based Access Control This chapter describes host-based access control (HBAC) in Identity Management (IdM) and explains how you can use HBAC to manage access control in your IdM domain. 31.1. How Host-Based Access Control Works in IdM Host-based access control defines which users (or user groups) can access specified hosts (or host groups) by using specified services (or services in a service group). For example, you can: Limit access to a specified system in your domain to members of a specific user group. Allow only a specific service to be used to access the systems in your domain. The administrator configures host-based access control by using a set of allowing rules named HBAC rules . By default, IdM is configured with a default HBAC rule named allow_all , which allows universal access in the whole IdM domain. Applying HBAC Rules to Groups For centralized and simplified access control management, you can apply HBAC rules to whole user, host, or service groups instead of individual users, hosts, or services. Figure 31.1. Host Groups and Host-Based Access Control When applying HBAC rules to groups, consider using automember rules . See Section 13.6, "Defining Automatic Group Membership for Users and Hosts" .
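For orientation, HBAC rules are typically managed with the ipa command-line utility. The following is a minimal sketch in which the rule name, group names, and host name are examples only:
$ ipa hbacrule-add sysadmin_ssh --desc="System administrators may use sshd"
$ ipa hbacrule-add-user sysadmin_ssh --groups=sysadmins
$ ipa hbacrule-add-host sysadmin_ssh --hostgroups=production
$ ipa hbacrule-add-service sysadmin_ssh --hbacsvcs=sshd
$ ipa hbactest --user=jsmith --host=web01.example.com --service=sshd
Note that the default allow_all rule grants universal access, so custom rules only become restrictive after allow_all is disabled with ipa hbacrule-disable allow_all .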
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configuring-host-access
A.5. tuned-adm
A.5. tuned-adm tuned-adm is a command line tool that enables you to switch between Tuned profiles to improve performance in a number of specific use cases. It also provides the tuned-adm recommend sub-command that assesses your system and outputs a recommended tuning profile. As of Red Hat Enterprise Linux 7, Tuned includes the ability to run any shell command as part of enabling or disabling a tuning profile. This allows you to extend Tuned profiles with functionality that has not been integrated into Tuned yet. Red Hat Enterprise Linux 7 also provides the include parameter in profile definition files, allowing you to base your own Tuned profiles on existing profiles. The following tuning profiles are provided with Tuned and are supported in Red Hat Enterprise Linux 7. throughput-performance A server profile focused on improving throughput. This is the default profile, and is recommended for most systems. This profile favors performance over power savings by setting intel_pstate and min_perf_pct=100 . It enables transparent huge pages and uses cpupower to set the performance cpufreq governor. It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, and vm.dirty_ratio to 40 %. latency-performance A server profile focused on lowering latency. This profile is recommended for latency-sensitive workloads that benefit from c-state tuning and the increased TLB efficiency of transparent huge pages. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 . It enables transparent huge pages, uses cpupower to set the performance cpufreq governor, and requests a cpu_dma_latency value of 1 . network-latency A server profile focused on lowering network latency. This profile favors performance over power savings by setting intel_pstate and min_perf_pct=100 . It disables transparent huge pages, and automatic NUMA balancing. It also uses cpupower to set the performance cpufreq governor, and requests a cpu_dma_latency value of 1 . It also sets busy_read and busy_poll times to 50 ms, and tcp_fastopen to 3 . network-throughput A server profile focused on improving network throughput. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 and increasing kernel network buffer sizes. It enables transparent huge pages, and uses cpupower to set the performance cpufreq governor. It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, and vm.dirty_ratio to 40 %. virtual-guest A profile focused on optimizing performance in Red Hat Enterprise Linux 7 virtual machines as well as VMware guests. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 . It also decreases the swappiness of virtual memory. It enables transparent huge pages, and uses cpupower to set the performance cpufreq governor. It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, and vm.dirty_ratio to 40 %. virtual-host A profile focused on optimizing performance in Red Hat Enterprise Linux 7 virtualization hosts. This profile favors performance over power savings by setting intel_pstate and max_perf_pct=100 . It also decreases the swappiness of virtual memory. This profile enables transparent huge pages and writes dirty pages back to disk more frequently. It uses cpupower to set the performance cpufreq governor. 
It also sets kernel.sched_min_granularity_ns to 10 ms, kernel.sched_wakeup_granularity_ns to 15 ms, kernel.sched_migration_cost to 5 ms, and vm.dirty_ratio to 40 %. cpu-partitioning The cpu-partitioning profile partitions the system CPUs into isolated and housekeeping CPUs. To reduce jitter and interruptions on an isolated CPU, the profile clears the isolated CPU from user-space processes, movable kernel threads, interrupt handlers, and kernel timers. A housekeeping CPU can run all services, shell processes, and kernel threads. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file. The configuration options are: isolated_cores= cpu-list Lists CPUs to isolate. The list of isolated CPUs is comma-separated or the user can specify the range. You can specify a range using a dash, such as 3-5 . This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. no_balance_cores= cpu-list Lists CPUs which are not considered by the kernel during system wide process load-balancing. This option is optional. This is usually the same list as isolated_cores . For more information on cpu-partitioning , see the tuned-profiles-cpu-partitioning (7) man page. For detailed information about the power saving profiles provided with tuned-adm, see the Red Hat Enterprise Linux 7 Power Management Guide . For detailed information about using tuned-adm , see the man page:
[ "man tuned-adm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-tuned_adm
Chapter 18. Requesting certificates using RHEL System Roles
Chapter 18. Requesting certificates using RHEL System Roles With the certificate System Role, you can use Red Hat Ansible Core to issue and manage certificates. This chapter covers the following topics: The certificate System Role Requesting a new self-signed certificate using the certificate System Role Requesting a new certificate from IdM CA using the certificate System Role 18.1. The certificate System Role Using the certificate System Role, you can manage issuing and renewing TLS and SSL certificates using Ansible Core. The role uses certmonger as the certificate provider, and currently supports issuing and renewing self-signed certificates and using the IdM integrated certificate authority (CA). You can use the following variables in your Ansible playbook with the certificate System Role: certificate_wait to specify whether the task should wait for the certificate to be issued. certificate_requests to represent each certificate to be issued and its parameters. Additional resources See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file. Preparing a control node and managed nodes to use RHEL System Roles 18.2. Requesting a new self-signed certificate using the certificate System Role With the certificate System Role, you can use Ansible Core to issue self-signed certificates. This process uses the certmonger provider and requests the certificate through the getcert command. Note By default, certmonger automatically tries to renew the certificate before it expires. You can disable this by setting the auto_renew parameter in the Ansible playbook to no . Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. Procedure Optional: Create an inventory file, for example inventory.file : Open your inventory file and define the hosts on which you want to request the certificate, for example: Create a playbook file, for example request-certificate.yml : Set hosts to include the hosts on which you want to request the certificate, such as webserver . Set the certificate_requests variable to include the following: Set the name parameter to the desired name of the certificate, such as mycert . Set the dns parameter to the domain to be included in the certificate, such as *.example.com . Set the ca parameter to self-sign . Set the rhel-system-roles.certificate role under roles . This is the playbook file for this example: Save the file. Run the playbook: Additional resources See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file. See the ansible-playbook(1) man page. 18.3. Requesting a new certificate from IdM CA using the certificate System Role With the certificate System Role, you can use Ansible Core to issue certificates while using an IdM server with an integrated certificate authority (CA). Therefore, you can efficiently and consistently manage the certificate trust chain for multiple systems when using IdM as the CA. This process uses the certmonger provider and requests the certificate through the getcert command. Note By default, certmonger automatically tries to renew the certificate before it expires. You can disable this by setting the auto_renew parameter in the Ansible playbook to no . Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. 
Procedure Optional: Create an inventory file, for example inventory.file : Open your inventory file and define the hosts on which you want to request the certificate, for example: Create a playbook file, for example request-certificate.yml : Set hosts to include the hosts on which you want to request the certificate, such as webserver . Set the certificate_requests variable to include the following: Set the name parameter to the desired name of the certificate, such as mycert . Set the dns parameter to the domain to be included in the certificate, such as www.example.com . Set the principal parameter to specify the Kerberos principal, such as HTTP/[email protected] . Set the ca parameter to ipa . Set the rhel-system-roles.certificate role under roles . This is the playbook file for this example: Save the file. Run the playbook: Additional resources See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file. See the ansible-playbook(1) man page. 18.4. Specifying commands to run before or after certificate issuance using the certificate System Role With the certificate Role, you can use Ansible Core to execute a command before and after a certificate is issued or renewed. In the following example, the administrator ensures stopping the httpd service before a self-signed certificate for www.example.com is issued or renewed, and restarting it afterwards. Note By default, certmonger automatically tries to renew the certificate before it expires. You can disable this by setting the auto_renew parameter in the Ansible playbook to no . Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. Procedure Optional: Create an inventory file, for example inventory.file : Open your inventory file and define the hosts on which you want to request the certificate, for example: Create a playbook file, for example request-certificate.yml : Set hosts to include the hosts on which you want to request the certificate, such as webserver . Set the certificate_requests variable to include the following: Set the name parameter to the desired name of the certificate, such as mycert . Set the dns parameter to the domain to be included in the certificate, such as www.example.com . Set the ca parameter to the CA you want to use to issue the certificate, such as self-sign . Set the run_before parameter to the command you want to execute before this certificate is issued or renewed, such as systemctl stop httpd.service . Set the run_after parameter to the command you want to execute after this certificate is issued or renewed, such as systemctl start httpd.service . Set the rhel-system-roles.certificate role under roles . This is the playbook file for this example: Save the file. Run the playbook: Additional resources See the /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file. See the ansible-playbook(1) man page.
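After a playbook run, the resulting certmonger tracking request can be inspected on the managed node. A minimal check, assuming the certificate was named mycert and the role's default file locations were not overridden:
# getcert list
# ls -l /etc/pki/tls/certs/mycert.crt /etc/pki/tls/private/mycert.key
getcert list shows the tracking status (for example, MONITORING) and the CA that issued the certificate.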
[ "*touch inventory.file*", "[webserver] server.idm.example.com", "--- - hosts: webserver vars: certificate_requests: - name: mycert dns: \"*.example.com\" ca: self-sign roles: - rhel-system-roles.certificate", "*ansible-playbook -i inventory.file request-certificate.yml*", "*touch inventory.file*", "[webserver] server.idm.example.com", "--- - hosts: webserver vars: certificate_requests: - name: mycert dns: www.example.com principal: HTTP/[email protected] ca: ipa roles: - rhel-system-roles.certificate", "*ansible-playbook -i inventory.file request-certificate.yml*", "*touch inventory.file*", "[webserver] server.idm.example.com", "--- - hosts: webserver vars: certificate_requests: - name: mycert dns: www.example.com ca: self-sign run_before: systemctl stop httpd.service run_after: systemctl start httpd.service roles: - rhel-system-roles.certificate", "*ansible-playbook -i inventory.file request-certificate.yml*" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/requesting-certificates-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 2. Architectures
Chapter 2. Architectures Red Hat Enterprise Linux 7.3 is available as a single kit on the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ and POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM z Systems [4] [1] Note that the Red Hat Enterprise Linux 7.3 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.3 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.3 (big endian) is currently supported as a KVM guest on Red Hat Enterprise Virtualization for Power, and on PowerVM. [3] Red Hat Enterprise Linux 7.3 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Virtualization for Power, on PowerVM and PowerNV (bare metal). [4] Note that Red Hat Enterprise Linux 7.3 supports IBM zEnterprise 196 hardware or later; IBM z10 Systems mainframe systems are no longer supported and will not boot Red Hat Enterprise Linux 7.3.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/chap-red_hat_enterprise_linux-7.3_release_notes-architectures
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure. Note When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
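As a minimal end-to-end sketch, assuming the installation artifacts were kept in a directory named ocp-vsphere (the directory name is illustrative), the removal could look like this:
$ ./openshift-install destroy cluster --dir ocp-vsphere --log-level debug
$ rm -rf ocp-vsphere
Remember that, as noted above, the vSphere volumes created for the cluster are not deleted by this command and must be located and removed manually in vSphere.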
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/uninstalling-cluster-vsphere-installer-provisioned
Chapter 1. Overview
Chapter 1. Overview AMQ .NET is a lightweight AMQP 1.0 library for the .NET platform. It enables you to write .NET applications that send and receive AMQP messages. AMQ .NET is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.10 Release Notes . AMQ .NET is based on AMQP.Net Lite . For detailed API documentation, see the AMQ .NET API reference . 1.1. Key features SSL/TLS for secure communication Flexible SASL authentication Seamless conversion between AMQP and native data types Access to all the features and capabilities of AMQP 1.0 An integrated development environment with full IntelliSense API documentation 1.2. Supported standards and protocols AMQ .NET supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms ANONYMOUS, PLAIN, and EXTERNAL Modern TCP with IPv6 1.3. Supported configurations Refer to Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal for current information regarding AMQ .NET supported configurations. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Connection A channel for communication between two peers on a network Session A context for sending and receiving messages Sender link A channel for sending messages to a target Receiver link A channel for receiving messages from a source Source A named point of origin for messages Target A named destination for messages Message A mutable holder of application data AMQ .NET sends and receives messages . Messages are transferred between connected peers over links . Links are established over sessions . Sessions are established over connections . A sending peer creates a sender link to send messages. The sender link has a target that identifies a queue or topic at the remote peer. A receiving client creates a receiver link to receive messages. The receiver link has a source that identifies a queue or topic at the remote peer. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
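Because AMQ .NET is based on AMQP.Net Lite, a minimal sender can be sketched with that API directly. The broker address, credentials, and the examples queue below are assumptions for illustration only:
using Amqp;

class HelloSender
{
    static void Main()
    {
        // Connection -> Session -> SenderLink, following the terms described above
        var address = new Address("amqp://user:password@broker.example.com:5672");
        var connection = new Connection(address);
        var session = new Session(connection);
        var sender = new SenderLink(session, "sender-1", "examples"); // "examples" is the target queue

        sender.Send(new Message("Hello AMQ"));

        sender.Close();
        session.Close();
        connection.Close();
    }
}
A receiver follows the same pattern with a ReceiverLink whose source names the queue to consume from.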
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_.net_client/overview
1.2. haproxy
1.2. haproxy HAProxy offers load-balanced services to HTTP and TCP-based services, such as Internet-connected services and web-based applications. Depending on the load balancer scheduling algorithm chosen, haproxy is able to process several events on thousands of connections across a pool of multiple real servers acting as one virtual server. The scheduler determines the volume of connections and either assigns them equally in non-weighted schedules or gives higher connection volume to servers that can handle higher capacity in weighted algorithms. HAProxy allows users to define several proxy services, and performs load balancing services of the traffic for the proxies. Proxies are made up of a front-end system and one or more back-end systems. The front-end system defines the IP address (the VIP) and port on which the proxy listens, as well as the back-end systems to use for a particular proxy. The back-end system is a pool of real servers, and defines the load balancing algorithm. HAProxy performs load-balancing management on layer 7, or the Application layer. In most cases, administrators deploy HAProxy for HTTP-based load balancing, such as production web applications, for which high availability infrastructure is a necessary part of business continuity.
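As an illustration of the front-end/back-end split described above, a minimal haproxy.cfg proxy definition might look like the following; the VIP, server addresses, and weights are placeholder values:
frontend www_vip
    bind 192.0.2.10:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 192.0.2.21:80 check weight 1
    server web2 192.0.2.22:80 check weight 2
Here the frontend supplies the listening VIP and port, the backend lists the pool of real servers with the chosen balancing algorithm, and the weight values let a more capable server receive a larger share of connections.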
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-haproxy-VSA
Chapter 31. SystemTap Translator Tapset
Chapter 31. SystemTap Translator Tapset This family of user-space probe points is used to probe the operation of the SystemTap translator ( stap ) and run command ( staprun ). The tapset includes probes to watch the various phases of SystemTap and SystemTap's management of instrumentation cache. It contains the following probe points:
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/stap_staticmarkers-dot-stp
4.3. JGroups Encryption
4.3. JGroups Encryption JGroups includes the SYM_ENCRYPT and ASYM_ENCRYPT protocols to provide encryption for cluster traffic. Important The ENCRYPT protocol has been deprecated and should not be used in production environments. It is recommended to use either SYM_ENCRYPT or ASYM_ENCRYPT. By default, both of these protocols only encrypt the message body; they do not encrypt message headers. To encrypt the entire message, including all headers, as well as destination and source addresses, the property encrypt_entire_message must be true . When defining these protocols, they should be placed directly under NAKACK2 . Both protocols may be used to encrypt and decrypt communication in JGroups, and are used in the following ways: SYM_ENCRYPT : Configured with a secret key in a keystore using the JCEKS store type. ASYM_ENCRYPT : Configured with algorithms and key sizes. In this scenario the secret key is not retrieved from the keystore, but instead generated by the coordinator and distributed to new members. Once a member joins the cluster, they send a request for the secret key to the coordinator; the coordinator responds with the secret key back to the new member encrypted with the member's public key. Each message is identified as encrypted with a specific encryption header identifying the encrypt header and an MD5 digest identifying the version of the key being used to encrypt and decrypt messages. 4.3.1. Configuring JGroups Encryption Protocols JGroups encryption protocols are placed in the JGroups configuration file, and there are three methods of including this file depending on how JBoss Data Grid is in use: Standard Java properties can also be used in the configuration, and it is possible to pass the path to JGroups configuration via the -D option during start up. The default, pre-configured JGroups files are packaged in infinispan-embedded.jar ; alternatively, you can create your own configuration file. In Remote Client-Server mode, the JGroups configuration is part of the main server configuration file. Refer to the Administration and Configuration Guide for details on defining the JGroups configuration. When defining both the SYM_ENCRYPT and ASYM_ENCRYPT protocols, place them directly under NAKACK2 in the configuration file. 4.3.2. SYM_ENCRYPT: Using a Key Store SYM_ENCRYPT uses store type JCEKS. To generate a keystore compatible with JCEKS, use the following command line options to keytool: SYM_ENCRYPT can then be configured by adding the following information to the JGroups file used by the application. Note The defaultStore.keystore must be found in the classpath. 4.3.3. ASYM_ENCRYPT: Configured with Algorithms and Key Sizes In this encryption mode, the coordinator selects the secretKey and distributes it to all peers. There is no keystore, and keys are distributed using a public/private key exchange. Instead, encryption occurs as follows: The secret key is generated and distributed by the coordinator. When a view change occurs, a peer requests the secret key by sending a key request with its own public key. The coordinator encrypts the secret key with the public key, and sends it back to the peer. The peer then decrypts and installs the key as its own secret key. Any further communications are encrypted and decrypted using the secret key. Example 4.9.
ASYM_ENCRYPT Example In the provided example, ASYM_ENCRYPT has been placed immediately below NAKACK2 , and encrypt_entire_message has been enabled, indicating that the message headers will be encrypted along with the message body. This means that the NAKACK2 and UNICAST3 protocols are also encrypted. In addition, AUTH has been included as part of the configuration, so that only authenticated nodes may request the secret key from the coordinator. View changes that identify a new coordinator result in a new secret key being generated and distributed to all peers. This is a substantial overhead in an application with high peer churn. A new secret key may optionally be generated when a cluster member leaves by setting change_key_on_leave to true. When encrypting an entire message, the message must be marshalled into a byte buffer before being encrypted, resulting in decreased performance. 4.3.4. JGroups Encryption Configuration Parameters The following table provides configuration parameters for the ENCRYPT JGroups protocol, which both SYM_ENCRYPT and ASYM_ENCRYPT extend: Table 4.1. ENCRYPT Configuration Parameters Name Description asym_algorithm Cipher engine transformation for asymmetric algorithm. Default is RSA. asym_keylength Initial public/private key length. Default is 512. asym_provider Cryptographic Service Provider. Default is Bouncy Castle Provider. encrypt_entire_message By default only the message body is encrypted. Enabling encrypt_entire_message ensures that all headers, destination and source addresses, and the message body are encrypted. sym_algorithm Cipher engine transformation for symmetric algorithm. Default is AES. sym_keylength Initial key length for matching symmetric algorithm. Default is 128. sym_provider Cryptographic Service Provider. Default is Bouncy Castle Provider. The following table provides a list of the SYM_ENCRYPT protocol parameters: Table 4.2. SYM_ENCRYPT Configuration Parameters Name Description alias Alias used for recovering the key. Change the default. key_password Password for recovering the key. Change the default. keystore_name File on classpath that contains keystore repository. store_password Password used to check the integrity/unlock the keystore. Change the default. The following table provides a list of the ASYM_ENCRYPT protocol parameters: Table 4.3. ASYM_ENCRYPT Configuration Parameters Name Description change_key_on_leave When a member leaves the view, change the secret key, preventing old members from eavesdropping.
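Before distributing the JCEKS keystore referenced by SYM_ENCRYPT to the cluster nodes, it can be useful to confirm that the secret key entry exists. The command below assumes the alias and passwords used in the SYM_ENCRYPT keytool example (myKey/changeit):
keytool -list -storetype JCEKS -keystore defaultStore.keystore -storepass changeit
The listing should show a SecretKeyEntry for the myKey alias; the keystore must then be available on the classpath of every node, as noted above.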
[ "keytool -genseckey -alias myKey -keypass changeit -storepass changeit -keyalg Blowfish -keysize 56 -keystore defaultStore.keystore -storetype JCEKS", "<SYM_ENCRYPT sym_algorithm=\"AES\" encrypt_entire_message=\"true\" keystore_name=\"defaultStore.keystore\" store_password=\"changeit\" alias=\"myKey\"/>", "<VERIFY_SUSPECT/> <ASYM_ENCRYPT encrypt_entire_message=\"true\" sym_keylength=\"128\" sym_algorithm=\"AES/ECB/PKCS5Padding\" asym_keylength=\"512\" asym_algorithm=\"RSA\"/> <pbcast.NAKACK2/> <UNICAST3/> <pbcast.STABLE/> <FRAG2/> <AUTH auth_class=\"org.jgroups.auth.MD5Token\" auth_value=\"chris\" token_hash=\"MD5\"/> <pbcast.GMS join_timeout=\"2000\" />" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/sect-jgroups_encrypt
Web console
Web console Red Hat OpenShift Service on AWS 4 Getting started with web console in Red Hat OpenShift Service on AWS Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/web_console/index
Chapter 10. Integrating by using the syslog protocol
Chapter 10. Integrating by using the syslog protocol Syslog is an event logging protocol that applications use to send messages to a central location, such as a SIEM or a syslog collector, for data retention and security investigations. With Red Hat Advanced Cluster Security for Kubernetes, you can send alerts and audit events using the syslog protocol. Note Forwarding events by using the syslog protocol requires the Red Hat Advanced Cluster Security for Kubernetes version 3.0.52 or newer. When you use the syslog integration, Red Hat Advanced Cluster Security for Kubernetes forwards both violation alerts that you configure and all audit events. Currently, Red Hat Advanced Cluster Security for Kubernetes only supports CEF (Common Event Format). The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with a syslog events receiver: Set up a syslog events receiver to receive alerts. Use the receiver's address and port number to set up notifications in Red Hat Advanced Cluster Security for Kubernetes. After the configuration, Red Hat Advanced Cluster Security for Kubernetes automatically sends all violations and audit events to the configured syslog receiver. 10.1. Configuring syslog integration with Red Hat Advanced Cluster Security for Kubernetes Create a new syslog integration in Red Hat Advanced Cluster Security for Kubernetes (RHACS). Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Syslog . Click New Integration (add icon). Enter a name for Integration Name . Select the Logging Facility value from local0 through local7 . Enter your Receiver Host address and Receiver Port number. If you are using TLS, turn on the Use TLS toggle. If your syslog receiver uses a certificate that is not trusted, turn on the Disable TLS Certificate Validation (Insecure) toggle. Otherwise, leave this toggle off. Click Add new extra field to add extra fields. For example, if your syslog receiver accepts objects from multiple sources, type source and rhacs in the Key and Value fields. You can filter using the custom values in your syslog receiver to identify all alerts from RHACS. Select Test (checkmark icon) to send a test message to verify that the integration with your syslog receiver is working. Select Create (save icon) to create the configuration.
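The receiver itself is outside the scope of RHACS; any syslog-capable collector works. As one possible sketch, an rsyslog-based receiver listening on TCP port 514 (the port must match the Receiver Port entered above) could be configured with:
module(load="imtcp")
input(type="imtcp" port="514")
After restarting the rsyslog service on the collector host, use the integration's Test button to confirm that events arrive.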
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/integrating/integrate-using-syslog-protocol
3.4. Guest CPU Models
3.4. Guest CPU Models CPU models define which host CPU features are exposed to the guest operating system. KVM and libvirt contain definitions for several current processor models, allowing users to enable CPU features that are available only in newer CPU models. The set of CPU features that can be exposed to guests depends on support in the host CPU, the kernel, and KVM code. To ensure safe migration of virtual machines between hosts with different sets of CPU features, KVM does not expose all features of the host CPU to guest operating systems by default. Instead, CPU features are based on the selected CPU model. If a virtual machine has a given CPU feature enabled, it cannot be migrated to a host that does not support exposing that feature to guests. Note For more information on guest CPU models, refer to the Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide .
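In practice, the CPU model presented to a guest is usually selected in the libvirt domain XML. A minimal fragment, with the model name and feature chosen purely for illustration, might look like:
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Westmere</model>
  <feature policy='require' name='aes'/>
</cpu>
With a definition like this, a host that cannot provide the required model or feature cannot run or receive the guest, which matches the migration constraint described above.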
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/para-cpu_models
Chapter 5. Configuring kernel parameters at runtime
Chapter 5. Configuring kernel parameters at runtime As a system administrator, you can modify many facets of the Red Hat Enterprise Linux kernel's behavior at runtime. Configure kernel parameters at runtime by using the sysctl command and by modifying the configuration files in the /etc/sysctl.d/ and /proc/sys/ directories. Important Configuring kernel parameters on a production system requires careful planning. Unplanned changes can render the kernel unstable, requiring a system reboot. Verify that you are using valid options before changing any kernel values. For more information about tuning kernel on IBM DB2, see Tuning Red Hat Enterprise Linux for IBM DB2 . 5.1. What are kernel parameters Kernel parameters are tunable values that you can adjust while the system is running. Note that for changes to take effect, you do not need to reboot the system or recompile the kernel. It is possible to address the kernel parameters through: The sysctl command The virtual file system mounted at the /proc/sys/ directory The configuration files in the /etc/sysctl.d/ directory Tunables are divided into classes by the kernel subsystem. Red Hat Enterprise Linux has the following tunable classes: Table 5.1. Table of sysctl classes Tunable class Subsystem abi Execution domains and personalities crypto Cryptographic interfaces debug Kernel debugging interfaces dev Device-specific information fs Global and specific file system tunables kernel Global kernel tunables net Network tunables sunrpc Sun Remote Procedure Call (NFS) user User Namespace limits vm Tuning and management of memory, buffers, and cache Additional resources sysctl(8) , and sysctl.d(5) manual pages 5.2. Configuring kernel parameters temporarily with sysctl Use the sysctl command to temporarily set kernel parameters at runtime. The command is also useful for listing and filtering tunables. Prerequisites Root permissions Procedure List all parameters and their values. Note The # sysctl -a command displays kernel parameters, which can be adjusted at runtime and at boot time. To configure a parameter temporarily, enter: The sample command above changes the parameter value while the system is running. The changes take effect immediately, without a need for restart. Note The changes return back to default after your system reboots. Additional resources The sysctl(8) manual page Using configuration files in /etc/sysctl.d/ to adjust kernel parameters 5.3. Configuring kernel parameters permanently with sysctl Use the sysctl command to permanently set kernel parameters. Prerequisites Root permissions Procedure List all parameters. The command displays all kernel parameters that can be configured at runtime. Configure a parameter permanently: The sample command changes the tunable value and writes it to the /etc/sysctl.conf file, which overrides the default values of kernel parameters. The changes take effect immediately and persistently, without a need for restart. Note To permanently modify kernel parameters, you can also make manual changes to the configuration files in the /etc/sysctl.d/ directory. Additional resources The sysctl(8) and sysctl.conf(5) manual pages Using configuration files in /etc/sysctl.d/ to adjust kernel parameters 5.4. Using configuration files in /etc/sysctl.d/ to adjust kernel parameters You must modify the configuration files in the /etc/sysctl.d/ directory manually to permanently set kernel parameters. Prerequisites You have root permissions. 
Procedure Create a new configuration file in /etc/sysctl.d/ : Include kernel parameters, one per line: Save the configuration file. Reboot the machine for the changes to take effect. Alternatively, apply changes without rebooting: The command enables you to read values from the configuration file, which you created earlier. Additional resources sysctl(8) , sysctl.d(5) manual pages 5.5. Configuring kernel parameters temporarily through /proc/sys/ Set kernel parameters temporarily through the files in the /proc/sys/ virtual file system directory. Prerequisites Root permissions Procedure Identify a kernel parameter you want to configure. The writable files returned by the command can be used to configure the kernel. The files with read-only permissions provide feedback on the current settings. Assign a target value to the kernel parameter. The configuration changes applied by using a command are not permanent and will disappear once the system is restarted. Verification Verify the value of the newly set kernel parameter.
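As a concrete end-to-end example, with the parameter chosen purely for illustration, enabling IPv4 packet forwarding temporarily and then persistently could look like this:
# sysctl -w net.ipv4.ip_forward=1
# echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/99-ip-forward.conf
# sysctl -p /etc/sysctl.d/99-ip-forward.conf
# cat /proc/sys/net/ipv4/ip_forward
The first command changes only the running kernel, the file under /etc/sysctl.d/ makes the setting survive a reboot, and the final read from /proc/sys/ verifies the active value.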
[ "sysctl -a", "sysctl <TUNABLE_CLASS>.<PARAMETER>=<TARGET_VALUE>", "sysctl -a", "sysctl -w <TUNABLE_CLASS>.<PARAMETER>=<TARGET_VALUE> >> /etc/sysctl.conf", "vim /etc/sysctl.d/< some_file.conf >", "< TUNABLE_CLASS >.< PARAMETER >=< TARGET_VALUE > < TUNABLE_CLASS >.< PARAMETER >=< TARGET_VALUE >", "sysctl -p /etc/sysctl.d/< some_file.conf >", "ls -l /proc/sys/< TUNABLE_CLASS >/", "echo < TARGET_VALUE > > /proc/sys/< TUNABLE_CLASS >/< PARAMETER >", "cat /proc/sys/< TUNABLE_CLASS >/< PARAMETER >" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-parameters-at-runtime_managing-monitoring-and-updating-the-kernel
8.194. sblim-wbemcli
8.194. sblim-wbemcli 8.194.1. RHBA-2013:1047 - sblim-wbemcli bug fix update Updated sblim-wbemcli packages that fix two bugs are now available for Red Hat Enterprise Linux 6. WBEM (Web-Based Enterprise Management) CLI is a standalone, command-line WBEM client. It can be used in scripts for basic system management tasks. Bug Fixes BZ#745264 Previously, the spec file of the sblim-wbemcli package contained a requirement for the tog-pegasus CIM server, which could cause problems. This update amends the spec file and tog-pegasus is no longer required when installing sblim-wbemcli. BZ#868905 Due to incorrect usage of the curl API in the code, when the wbemcli utility was called with an HTTPS scheme, wbemcli terminated unexpectedly with a segmentation fault. With this update, the curl API is used properly and wbemcli no longer crashes in the described scenario. Users of sblim-wbemcli are advised to upgrade to these updated packages, which fix these bugs.
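For context, a typical wbemcli call enumerates CIM instances from a WBEM server; the HTTPS variant below corresponds to the scenario fixed by BZ#868905. The credentials, host, and CIM class are placeholders, and additional certificate-related options may be required depending on the server configuration:
wbemcli ei 'https://user:password@localhost:5989/root/cimv2:CIM_OperatingSystem'
Here ei is the enumerate-instances operation, and 5989 is the conventional port for CIM-XML over HTTPS.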
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/sblim-wbemcli
Chapter 4. Producer configuration properties
Chapter 4. Producer configuration properties key.serializer Type: class Importance: high Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface. value.serializer Type: class Importance: high Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface. bootstrap.servers Type: list Default: "" Valid Values: non-null string Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). buffer.memory Type: long Default: 33554432 Valid Values: [0,... ] Importance: high The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. compression.type Type: string Default: none Valid Values: [none, gzip, snappy, lz4, zstd] Importance: high The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none , gzip , snappy , lz4 , or zstd . Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). retries Type: int Default: 2147483647 Valid Values: [0,... ,2147483647] Importance: high Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to greater than 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first. ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. 
Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. batch.size Type: int Default: 16384 Valid Values: [0,... ] Importance: medium The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. Note: This setting gives the upper bound of the batch size to be sent. If we have fewer than this many bytes accumulated for this partition, we will 'linger' for the linger.ms time waiting for more records to show up. This linger.ms setting defaults to 0, which means we'll immediately send out a record even the accumulated batch size is under this batch.size setting. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. 
connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. delivery.timeout.ms Type: int Default: 120000 (2 minutes) Valid Values: [0,... ] Importance: medium An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms . linger.ms Type: long Default: 0 Valid Values: [0,... ] Importance: medium The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay-that is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5 , for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. max.block.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium The configuration controls how long the KafkaProducer's `send() , partitionsFor() , initTransactions() , sendOffsetsToTransaction() , commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout. max.request.size Type: int Default: 1048576 Valid Values: [0,... ] Importance: medium The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this. partitioner.class Type: class Default: null Importance: medium Determines which partition to send a record to when records are produced. 
Available options are: If not set, the default partitioning logic is used. This strategy send records to a partition until at least batch.size bytes is produced to the partition. It works with the strategy: Implementing the org.apache.kafka.clients.producer.Partitioner interface allows you to plug in a custom partitioner. partitioner.ignore.keys Type: boolean Default: false Importance: medium When set to 'true' the producer won't use record keys to choose a partition. If 'false', producer would choose a partition based on a hash of the key when a key is present. Note: this setting has no effect if a custom partitioner is used. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. 
If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. 
The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. acks Type: string Default: all Valid Values: [all, -1, 0, 1] Importance: low The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1 . acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. enable.idempotence Type: boolean Default: true Importance: low When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks must be 'all'. Idempotence is enabled by default if no conflicting configurations are set. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. 
If idempotence is explicitly enabled and conflicting configurations are set, a ConfigException is thrown. enable.metrics.push Type: boolean Default: true Importance: low Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client. interceptor.classes Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors. max.in.flight.requests.per.connection Type: int Default: 5 Valid Values: [1,... ] Importance: low The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this configuration is set to be greater than 1 and enable.idempotence is set to false, there is a risk of message reordering after a failed send due to retries (i.e., if retries are enabled); if retries are disabled or if enable.idempotence is set to true, ordering will be preserved. Additionally, enabling idempotence requires the value of this configuration to be less than or equal to 5. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.max.idle.ms Type: long Default: 300000 (5 minutes) Valid Values: [5000,... ] Importance: low Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the access to it will force a metadata fetch request. metric.reporters Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. partitioner.adaptive.partitioning.enable Type: boolean Default: true Importance: low When set to 'true', the producer will try to adapt to broker performance and produce more messages to partitions hosted on faster brokers. If 'false', producer will try to distribute messages uniformly. Note: this setting has no effect if a custom partitioner is used. partitioner.availability.timeout.ms Type: long Default: 0 Valid Values: [0,... ] Importance: low If a broker cannot process produce requests from a partition for partitioner.availability.timeout.ms time, the partitioner treats that partition as not available. If the value is 0, this logic is disabled. 
Note: this setting has no effect if a custom partitioner is used or partitioner.adaptive.partitioning.enable is set to 'false'. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. retry.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms , then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. 
If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. 
sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. 
Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. transaction.timeout.ms Type: int Default: 60000 (1 minute) Importance: low The maximum amount of time in milliseconds that a transaction will remain open before the coordinator proactively aborts it. The start of the transaction is set at the time that the first partition is added to it. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with a InvalidTxnTimeoutException error. transactional.id Type: string Default: null Valid Values: non-empty string Importance: low The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers which is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor .
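To illustrate how several of the options above combine in practice, the following is a minimal sketch that writes an example producer configuration file enabling transactional, idempotent delivery with bounded retry backoff. The broker address, file name, and transactional ID are placeholder values chosen for this sketch, not values taken from this reference.
# Sketch only: placeholder broker address and transactional ID
cat > producer.properties <<'EOF'
bootstrap.servers=broker1:9092
# Retries back off exponentially from retry.backoff.ms up to retry.backoff.max.ms
retry.backoff.ms=100
retry.backoff.max.ms=1000
# Configuring transactional.id implies enable.idempotence
transactional.id=example-producer-1
transaction.timeout.ms=60000
EOF
A producer started with this file still requires a broker cluster that satisfies transaction.state.log.replication.factor, as noted above.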
[ "1) If no partition is specified but a key is present, choose a partition based on a hash of the key.", "2) If no partition or key is present, choose the sticky partition that changes when at least batch.size bytes are produced to the partition. * `org.apache.kafka.clients.producer.RoundRobinPartitioner`: A partitioning strategy where each record in a series of consecutive records is sent to a different partition, regardless of whether the 'key' is provided or not, until partitions run out and the process starts over again. Note: There's a known issue that will cause uneven distribution when a new batch is created. See KAFKA-9965 for more detail." ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_properties/producer-configuration-properties-str
Chapter 45. Using the standalone content manager perspective
Chapter 45. Using the standalone content manager perspective By using the standalone content manager perspective in your application, you can create and edit your application's content and its navigation menus. Procedure Log in to Business Central. In a web browser, enter the following web address in the address bar: http://localhost:8080/business-central/kie-wb.jsp?standalone=true&perspective=ContentManagerPerspective The standalone content manager perspective opens in the browser.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/using-standalone-perspectives-content-manager-proc
4.5. Configuring Console Options
4.5. Configuring Console Options 4.5.1. Console Options Connection protocols are the underlying technology used to provide graphical consoles for virtual machines and allow users to work with virtual machines in a similar way as they would with physical machines. Red Hat Virtualization currently supports the following connection protocols: SPICE Simple Protocol for Independent Computing Environments (SPICE) is the recommended connection protocol for both Linux virtual machines and Windows virtual machines. To open a console to a virtual machine using SPICE, use Remote Viewer. VNC Virtual Network Computing (VNC) can be used to open consoles to both Linux virtual machines and Windows virtual machines. To open a console to a virtual machine using VNC, use Remote Viewer or a VNC client. RDP Remote Desktop Protocol (RDP) can only be used to open consoles to Windows virtual machines, and is only available when you access a virtual machine from a Windows machine on which Remote Desktop has been installed. Before you can connect to a Windows virtual machine using RDP, you must set up remote sharing on the virtual machine and configure the firewall to allow remote desktop connections. Note SPICE is not supported on virtual machines running Windows 8 or Windows 8.1. If a virtual machine running one of these operating systems is configured to use the SPICE protocol, it detects the absence of the required SPICE drivers and runs in VGA compatibility mode. 4.5.2. Accessing Console Options You can configure several options for opening graphical consoles for virtual machines in the Administration Portal. Procedure Click Compute Virtual Machines and select a running virtual machine. Click Console Console Options . Note You can configure the connection protocols and video type in the Console tab of the Edit Virtual Machine window in the Administration Portal. Additional options specific to each of the connection protocols, such as the keyboard layout when using the VNC connection protocol, can be configured. See Virtual Machine Console settings explained for more information. 4.5.3. SPICE Console Options When the SPICE connection protocol is selected, the following options are available in the Console Options window. SPICE Options Map control-alt-del shortcut to ctrl+alt+end : Select this check box to map the Ctrl + Alt + Del key combination to Ctrl + Alt + End inside the virtual machine. Enable USB Auto-Share : Select this check box to automatically redirect USB devices to the virtual machine. If this option is not selected, USB devices will connect to the client machine instead of the guest virtual machine. To use the USB device on the guest machine, manually enable it in the SPICE client menu. Open in Full Screen : Select this check box for the virtual machine console to automatically open in full screen when you connect to the virtual machine. Press SHIFT + F11 to toggle full screen mode on or off. Enable SPICE Proxy : Select this check box to enable the SPICE proxy. 4.5.4. VNC Console Options When the VNC connection protocol is selected, the following options are available in the Console Options window. Console Invocation Native Client : When you connect to the console of the virtual machine, a file download dialog provides you with a file that opens a console to the virtual machine via Remote Viewer. noVNC : When you connect to the console of the virtual machine, a browser tab is opened that acts as the console.
VNC Options Map control-alt-delete shortcut to ctrl+alt+end : Select this check box to map the Ctrl + Alt + Del key combination to Ctrl + Alt + End inside the virtual machine. 4.5.5. RDP Console Options When the RDP connection protocol is selected, the following options are available in the Console Options window. Console Invocation Auto : The Manager automatically selects the method for invoking the console. Native client : When you connect to the console of the virtual machine, a file download dialog provides you with a file that opens a console to the virtual machine via Remote Desktop. RDP Options Use Local Drives : Select this check box to make the drives on the client machine accessible on the guest virtual machine. 4.5.6. Remote Viewer Options 4.5.6.1. Remote Viewer Options When you specify the Native client console invocation option, you will connect to virtual machines using Remote Viewer. The Remote Viewer window provides a number of options for interacting with the virtual machine to which it is connected. Table 4.1. Remote Viewer Options Option Hotkey File Screenshot : Takes a screen capture of the active window and saves it in a location of your specification. USB device selection : If USB redirection has been enabled on your virtual machine, the USB device plugged into your client machine can be accessed from this menu. Quit : Closes the console. The hot key for this option is Shift + Ctrl + Q . View Full screen : Toggles full screen mode on or off. When enabled, full screen mode expands the virtual machine to fill the entire screen. When disabled, the virtual machine is displayed as a window. The hot key for enabling or disabling full screen is SHIFT + F11 . Zoom : Zooms in and out of the console window. Ctrl + + zooms in, Ctrl + - zooms out, and Ctrl + 0 returns the screen to its original size. Automatically resize : Select to enable the guest resolution to automatically scale according to the size of the console window. Displays : Allows users to enable and disable displays for the guest virtual machine. Send key Ctrl + Alt + Del : On a Red Hat Enterprise Linux virtual machine, it displays a dialog with options to suspend, shut down or restart the virtual machine. On a Windows virtual machine, it displays the task manager or Windows Security dialog. Ctrl + Alt + Backspace : On a Red Hat Enterprise Linux virtual machine, it restarts the X server. On a Windows virtual machine, it does nothing. Ctrl + Alt + F1 Ctrl + Alt + F2 Ctrl + Alt + F3 Ctrl + Alt + F4 Ctrl + Alt + F5 Ctrl + Alt + F6 Ctrl + Alt + F7 Ctrl + Alt + F8 Ctrl + Alt + F9 Ctrl + Alt + F10 Ctrl + Alt + F11 Ctrl + Alt + F12 Printscreen : Passes the Printscreen keyboard option to the virtual machine. Help The About entry displays the version details of Virtual Machine Viewer that you are using. Release Cursor from Virtual Machine SHIFT + F12 4.5.6.2. Remote Viewer Hotkeys You can access the hotkeys for a virtual machine in both full screen mode and windowed mode. If you are using full screen mode, you can display the menu containing the button for hotkeys by moving the mouse pointer to the middle of the top of the screen. If you are using windowed mode, you can access the hotkeys via the Send key menu on the virtual machine window title bar. Note If vdagent is not running on the client machine, the mouse can become captured in a virtual machine window if it is used inside a virtual machine and the virtual machine is not in full screen. To unlock the mouse, press Shift + F12 . 4.5.6.3.
Manually Associating console.vv Files with Remote Viewer If you are prompted to download a console.vv file when attempting to open a console to a virtual machine using the native client console option, and Remote Viewer is already installed, then you can manually associate console.vv files with Remote Viewer so that Remote Viewer can automatically use those files to open consoles. Manually Associating console.vv Files with Remote Viewer Start the virtual machine. Open the Console Options window: In the Administration Portal, click Console Console Options . In the VM Portal, click the virtual machine name and click the pencil icon beside Console . Change the console invocation method to Native client and click OK . Attempt to open a console to the virtual machine, then click Save when prompted to open or save the console.vv file. Click the location on your local machine where you saved the file. Double-click the console.vv file and select Select a program from a list of installed programs when prompted. In the Open with window, select Always use the selected program to open this kind of file and click the Browse button. Click the C:\Users\[user name]\AppData\Local\virt-viewer\bin directory and select remote-viewer.exe . Click Open and then click OK . When you use the native client console invocation option to open a console to a virtual machine, Remote Viewer will automatically use the console.vv file that the Red Hat Virtualization Manager provides to open a console to that virtual machine without prompting you to select the application to use.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-configuring_console_options
Index
Index , Cluster Creation A ACPI configuring, Configuring ACPI For Use with Integrated Fence Devices Action Property enabled, Resource Operations id, Resource Operations interval, Resource Operations name, Resource Operations on-fail, Resource Operations timeout, Resource Operations Action Property, Resource Operations attribute, Node Attribute Expressions Constraint Expression, Node Attribute Expressions Attribute Expression, Node Attribute Expressions attribute, Node Attribute Expressions operation, Node Attribute Expressions type, Node Attribute Expressions value, Node Attribute Expressions B batch-limit, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options boolean-op, Pacemaker Rules Constraint Rule, Pacemaker Rules C Clone Option clone-max, Creating and Removing a Cloned Resource clone-node-max, Creating and Removing a Cloned Resource globally-unique, Creating and Removing a Cloned Resource interleave, Creating and Removing a Cloned Resource notify, Creating and Removing a Cloned Resource ordered, Creating and Removing a Cloned Resource Clone Option, Creating and Removing a Cloned Resource Clone Resources, Resource Clones clone-max, Creating and Removing a Cloned Resource Clone Option, Creating and Removing a Cloned Resource clone-node-max, Creating and Removing a Cloned Resource Clone Option, Creating and Removing a Cloned Resource Clones, Resource Clones Cluster Option batch-limit, Summary of Cluster Properties and Options cluster-delay, Summary of Cluster Properties and Options cluster-infrastructure, Summary of Cluster Properties and Options cluster-recheck-interval, Summary of Cluster Properties and Options dc-version, Summary of Cluster Properties and Options enable-acl, Summary of Cluster Properties and Options fence-reaction, Summary of Cluster Properties and Options last-lrm-refresh, Summary of Cluster Properties and Options maintenance-mode, Summary of Cluster Properties and Options migration-limit, Summary of Cluster Properties and Options no-quorum-policy, Summary of Cluster Properties and Options pe-error-series-max, Summary of Cluster Properties and Options pe-input-series-max, Summary of Cluster Properties and Options pe-warn-series-max, Summary of Cluster Properties and Options placement-strategy, Summary of Cluster Properties and Options shutdown-escalation, Summary of Cluster Properties and Options start-failure-is-fatal, Summary of Cluster Properties and Options stonith-action, Summary of Cluster Properties and Options stonith-enabled, Summary of Cluster Properties and Options stonith-timeout, Summary of Cluster Properties and Options stop-all-resources, Summary of Cluster Properties and Options stop-orphan-actions, Summary of Cluster Properties and Options stop-orphan-resources, Summary of Cluster Properties and Options symmetric-cluster, Summary of Cluster Properties and Options Querying Properties, Querying Cluster Property Settings Removing Properties, Setting and Removing Cluster Properties Setting Properties, Setting and Removing Cluster Properties cluster administration configuring ACPI, Configuring ACPI For Use with Integrated Fence Devices Cluster Option, Summary of Cluster Properties and Options Cluster Properties, Setting and Removing Cluster Properties , Querying Cluster Property Settings cluster status display, Displaying Cluster Status cluster-delay, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options cluster-infrastructure, Summary of Cluster Properties and 
Options Cluster Option, Summary of Cluster Properties and Options cluster-recheck-interval, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options Colocation, Colocation of Resources Constraint Attribute Expression, Node Attribute Expressions attribute, Node Attribute Expressions operation, Node Attribute Expressions type, Node Attribute Expressions value, Node Attribute Expressions Date Specification, Date Specifications hours, Date Specifications id, Date Specifications monthdays, Date Specifications months, Date Specifications moon, Date Specifications weekdays, Date Specifications weeks, Date Specifications weekyears, Date Specifications yeardays, Date Specifications years, Date Specifications Date/Time Expression, Time/Date Based Expressions end, Time/Date Based Expressions operation, Time/Date Based Expressions start, Time/Date Based Expressions Duration, Durations Rule, Pacemaker Rules boolean-op, Pacemaker Rules role, Pacemaker Rules score, Pacemaker Rules score-attribute, Pacemaker Rules Constraint Expression, Node Attribute Expressions , Time/Date Based Expressions Constraint Rule, Pacemaker Rules Constraints Colocation, Colocation of Resources Location id, Basic Location Constraints score, Basic Location Constraints Order, Order Constraints kind, Order Constraints D dampen, Moving Resources Due to Connectivity Changes Ping Resource Option, Moving Resources Due to Connectivity Changes Date Specification, Date Specifications hours, Date Specifications id, Date Specifications monthdays, Date Specifications months, Date Specifications moon, Date Specifications weekdays, Date Specifications weeks, Date Specifications weekyears, Date Specifications yeardays, Date Specifications years, Date Specifications Date/Time Expression, Time/Date Based Expressions end, Time/Date Based Expressions operation, Time/Date Based Expressions start, Time/Date Based Expressions dc-version, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options Determine by Rules, Using Rules to Determine Resource Location Determine Resource Location, Using Rules to Determine Resource Location disabling resources, Enabling and Disabling Cluster Resources Duration, Durations E enable-acl, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options enabled, Resource Operations Action Property, Resource Operations enabling resources, Enabling and Disabling Cluster Resources end, Time/Date Based Expressions Constraint Expression, Time/Date Based Expressions F failure-timeout, Resource Meta Options Resource Option, Resource Meta Options features, new and changed, New and Changed Features fence-reaction, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options G globally-unique, Creating and Removing a Cloned Resource Clone Option, Creating and Removing a Cloned Resource Group Resources, Resource Groups Groups, Resource Groups , Group Stickiness H host_list, Moving Resources Due to Connectivity Changes Ping Resource Option, Moving Resources Due to Connectivity Changes hours, Date Specifications Date Specification, Date Specifications I id, Resource Properties , Resource Operations , Date Specifications Action Property, Resource Operations Date Specification, Date Specifications Location Constraints, Basic Location Constraints Multi-State Property, Multistate Resources: Resources That Have Multiple Modes Resource, Resource Properties integrated fence devices 
configuring ACPI, Configuring ACPI For Use with Integrated Fence Devices interleave, Creating and Removing a Cloned Resource Clone Option, Creating and Removing a Cloned Resource interval, Resource Operations Action Property, Resource Operations is-managed, Resource Meta Options Resource Option, Resource Meta Options K kind, Order Constraints Order Constraints, Order Constraints L last-lrm-refresh, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options Location Determine by Rules, Using Rules to Determine Resource Location score, Basic Location Constraints Location Constraints, Basic Location Constraints Location Relative to other Resources, Colocation of Resources M maintenance-mode, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options master-max, Multistate Resources: Resources That Have Multiple Modes Multi-State Option, Multistate Resources: Resources That Have Multiple Modes master-node-max, Multistate Resources: Resources That Have Multiple Modes Multi-State Option, Multistate Resources: Resources That Have Multiple Modes migration-limit, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options migration-threshold, Resource Meta Options Resource Option, Resource Meta Options monthdays, Date Specifications Date Specification, Date Specifications months, Date Specifications Date Specification, Date Specifications moon, Date Specifications Date Specification, Date Specifications Moving, Manually Moving Resources Around the Cluster Resources, Manually Moving Resources Around the Cluster Multi-State Option master-max, Multistate Resources: Resources That Have Multiple Modes master-node-max, Multistate Resources: Resources That Have Multiple Modes Property id, Multistate Resources: Resources That Have Multiple Modes Multi-State Option, Multistate Resources: Resources That Have Multiple Modes Multi-State Property, Multistate Resources: Resources That Have Multiple Modes multiple-active, Resource Meta Options Resource Option, Resource Meta Options multiplier, Moving Resources Due to Connectivity Changes Ping Resource Option, Moving Resources Due to Connectivity Changes Multistate, Multistate Resources: Resources That Have Multiple Modes , Multistate Stickiness N name, Resource Operations Action Property, Resource Operations no-quorum-policy, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options notify, Creating and Removing a Cloned Resource Clone Option, Creating and Removing a Cloned Resource O OCF return codes, OCF Return Codes on-fail, Resource Operations Action Property, Resource Operations operation, Node Attribute Expressions , Time/Date Based Expressions Constraint Expression, Node Attribute Expressions , Time/Date Based Expressions Option batch-limit, Summary of Cluster Properties and Options clone-max, Creating and Removing a Cloned Resource clone-node-max, Creating and Removing a Cloned Resource cluster-delay, Summary of Cluster Properties and Options cluster-infrastructure, Summary of Cluster Properties and Options cluster-recheck-interval, Summary of Cluster Properties and Options dampen, Moving Resources Due to Connectivity Changes dc-version, Summary of Cluster Properties and Options enable-acl, Summary of Cluster Properties and Options failure-timeout, Resource Meta Options fence-reaction, Summary of Cluster Properties and Options globally-unique, Creating and Removing a Cloned Resource host_list, 
Moving Resources Due to Connectivity Changes interleave, Creating and Removing a Cloned Resource is-managed, Resource Meta Options last-lrm-refresh, Summary of Cluster Properties and Options maintenance-mode, Summary of Cluster Properties and Options master-max, Multistate Resources: Resources That Have Multiple Modes master-node-max, Multistate Resources: Resources That Have Multiple Modes migration-limit, Summary of Cluster Properties and Options migration-threshold, Resource Meta Options multiple-active, Resource Meta Options multiplier, Moving Resources Due to Connectivity Changes no-quorum-policy, Summary of Cluster Properties and Options notify, Creating and Removing a Cloned Resource ordered, Creating and Removing a Cloned Resource pe-error-series-max, Summary of Cluster Properties and Options pe-input-series-max, Summary of Cluster Properties and Options pe-warn-series-max, Summary of Cluster Properties and Options placement-strategy, Summary of Cluster Properties and Options priority, Resource Meta Options requires, Resource Meta Options resource-stickiness, Resource Meta Options shutdown-escalation, Summary of Cluster Properties and Options start-failure-is-fatal, Summary of Cluster Properties and Options stonith-action, Summary of Cluster Properties and Options stonith-enabled, Summary of Cluster Properties and Options stonith-timeout, Summary of Cluster Properties and Options stop-all-resources, Summary of Cluster Properties and Options stop-orphan-actions, Summary of Cluster Properties and Options stop-orphan-resources, Summary of Cluster Properties and Options symmetric-cluster, Summary of Cluster Properties and Options target-role, Resource Meta Options Order kind, Order Constraints Order Constraints, Order Constraints symmetrical, Order Constraints ordered, Creating and Removing a Cloned Resource Clone Option, Creating and Removing a Cloned Resource Ordering, Order Constraints overview features, new and changed, New and Changed Features P pe-error-series-max, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options pe-input-series-max, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options pe-warn-series-max, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options Ping Resource Option dampen, Moving Resources Due to Connectivity Changes host_list, Moving Resources Due to Connectivity Changes multiplier, Moving Resources Due to Connectivity Changes Ping Resource Option, Moving Resources Due to Connectivity Changes placement strategy, Utilization and Placement Strategy placement-strategy, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options priority, Resource Meta Options Resource Option, Resource Meta Options Property enabled, Resource Operations id, Resource Properties , Resource Operations , Multistate Resources: Resources That Have Multiple Modes interval, Resource Operations name, Resource Operations on-fail, Resource Operations provider, Resource Properties standard, Resource Properties timeout, Resource Operations type, Resource Properties provider, Resource Properties Resource, Resource Properties Q Querying Cluster Properties, Querying Cluster Property Settings Querying Options, Querying Cluster Property Settings R Removing Cluster Properties, Setting and Removing Cluster Properties Removing Properties, Setting and Removing Cluster Properties requires, Resource Meta Options Resource, 
Resource Properties Constraint Attribute Expression, Node Attribute Expressions Date Specification, Date Specifications Date/Time Expression, Time/Date Based Expressions Duration, Durations Rule, Pacemaker Rules Constraints Colocation, Colocation of Resources Order, Order Constraints Location Determine by Rules, Using Rules to Determine Resource Location Location Relative to other Resources, Colocation of Resources Moving, Manually Moving Resources Around the Cluster Option failure-timeout, Resource Meta Options is-managed, Resource Meta Options migration-threshold, Resource Meta Options multiple-active, Resource Meta Options priority, Resource Meta Options requires, Resource Meta Options resource-stickiness, Resource Meta Options target-role, Resource Meta Options Property id, Resource Properties provider, Resource Properties standard, Resource Properties type, Resource Properties Start Order, Order Constraints Resource Option, Resource Meta Options resource-stickiness, Resource Meta Options Groups, Group Stickiness Multi-State, Multistate Stickiness Resource Option, Resource Meta Options Resources, Manually Moving Resources Around the Cluster Clones, Resource Clones Groups, Resource Groups Multistate, Multistate Resources: Resources That Have Multiple Modes resources cleanup, Cluster Resources Cleanup disabling, Enabling and Disabling Cluster Resources enabling, Enabling and Disabling Cluster Resources role, Pacemaker Rules Constraint Rule, Pacemaker Rules Rule, Pacemaker Rules boolean-op, Pacemaker Rules Determine Resource Location, Using Rules to Determine Resource Location role, Pacemaker Rules score, Pacemaker Rules score-attribute, Pacemaker Rules S score, Basic Location Constraints , Pacemaker Rules Constraint Rule, Pacemaker Rules Location Constraints, Basic Location Constraints score-attribute, Pacemaker Rules Constraint Rule, Pacemaker Rules Setting Cluster Properties, Setting and Removing Cluster Properties Setting Properties, Setting and Removing Cluster Properties shutdown-escalation, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options standard, Resource Properties Resource, Resource Properties start, Time/Date Based Expressions Constraint Expression, Time/Date Based Expressions Start Order, Order Constraints start-failure-is-fatal, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options status display, Displaying Cluster Status stonith-action, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options stonith-enabled, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options stonith-timeout, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options stop-all-resources, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options stop-orphan-actions, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options stop-orphan-resources, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options symmetric-cluster, Summary of Cluster Properties and Options Cluster Option, Summary of Cluster Properties and Options symmetrical, Order Constraints Order Constraints, Order Constraints T target-role, Resource Meta Options Resource Option, Resource Meta Options Time Based Expressions, Time/Date Based Expressions timeout, Resource Operations Action Property, 
Resource Operations type, Resource Properties , Node Attribute Expressions Constraint Expression, Node Attribute Expressions Resource, Resource Properties U utilization attributes, Utilization and Placement Strategy V value, Node Attribute Expressions Constraint Expression, Node Attribute Expressions W weekdays, Date Specifications Date Specification, Date Specifications weeks, Date Specifications Date Specification, Date Specifications weekyears, Date Specifications Date Specification, Date Specifications Y yeardays, Date Specifications Date Specification, Date Specifications years, Date Specifications Date Specification, Date Specifications
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ix01
7.66. geronimo-specs
7.66. geronimo-specs 7.66.1. RHBA-2012:1397 - geronimo-specs bug fix update Updated geronimo-specs packages that fix one bug are now available for Red Hat Enterprise Linux 6. The geronimo-specs packages provide the specifications for Apache's ASF-licenced J2EE server Geronimo. Bug Fix BZ# 818755 Prior to this update, the geronimo-specs-compat package description contained inaccurate references. This update removes these references so that the description is now accurate. All users of geronimo-specs are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/geronimo-specs
C.2. Feature Restrictions
C.2. Feature Restrictions The hypervisor package included with Red Hat Enterprise Linux is qemu-kvm . This differs from the qemu-kvm-rhev package included with Red Hat Virtualization (RHV) and Red Hat OpenStack (RHOS) products. Many of the restrictions that apply to qemu-kvm do not apply to qemu-kvm-rhev . For more information about the differences between the qemu-kvm and qemu-kvm-rhev packages, see What are the differences between qemu-kvm and qemu-kvm-rhev and all sub-packages? The following restrictions apply to the KVM hypervisor included with Red Hat Enterprise Linux: Maximum vCPUs per guest Red Hat Enterprise Linux 7.2 and above supports 240 vCPUs per guest, up from 160 in Red Hat Enterprise Linux 7.0. Nested virtualization Nested virtualization is available as a Technology Preview in Red Hat Enterprise Linux 7.2 and later. This feature enables KVM to launch guests that can act as hypervisors and create their own guests. TCG support QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This mode does not require hardware virtualization support. However, TCG is not supported by Red Hat. When the qemu-kvm package is used to create nested guests in a virtual machine, it uses TCG unless nested virtualization is enabled on the parent virtual machine. Note that nested virtualization is currently a Technology Preview. For more information, see Chapter 12, Nested Virtualization . A TCG-based guest can be recognized using the following: The domain XML file of the guests contains the <domain type='qemu'> line, whereas a KVM guest contains <domain type='kvm'> . In the Overview pane of the Virtual hardware details view, virt-manager displays the type of virtual machine as QEMU TCG , instead of KVM . Constant TSC bit Systems without a Constant Time Stamp Counter (TSC) require additional configuration. See Chapter 8, KVM Guest Timing Management for details on determining whether you have a Constant Time Stamp Counter and configuration steps for fixing any related issues. Emulated SCSI adapters SCSI device emulation is only supported with the virtio-scsi paravirtualized host bus adapter (HBA). Emulated SCSI HBAs are not supported with KVM in Red Hat Enterprise Linux. Emulated IDE devices KVM is limited to a maximum of four virtualized (emulated) IDE devices per virtual machine. Paravirtualized devices Paravirtualized devices are also known as VirtIO devices. They are purely virtual devices designed to work optimally in a virtual machine. Red Hat Enterprise Linux 7 supports 32 PCI device slots per virtual machine bus, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 PCI functions per bus when multi-function capabilities are enabled in the virtual machine, and PCI bridges are used. Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some buses do not make all 256 device addresses available for the user; for example, the root bus has several built-in devices occupying slots. See Chapter 16, Guest Virtual Machine Device Configuration for more information on devices and Section 16.1.5, "PCI Bridges" for more information on PCI bridges. Migration restrictions Device assignment refers to physical devices that have been exposed to a virtual machine, for the exclusive use of that virtual machine. Because device assignment uses hardware on the specific host where the virtual machine runs, migration and save/restore are not supported when device assignment is in use. 
If the guest operating system supports hot plugging, assigned devices can be removed prior to the migration or save/restore operation to enable this feature. Live migration is only possible between hosts with the same CPU type (that is, Intel to Intel or AMD to AMD only). For live migration, both hosts must have the same value set for the No eXecution (NX) bit, either on or off . For migration to work, cache=none must be specified for all block devices opened in write mode. Warning Failing to include the cache=none option can result in disk corruption. Storage restrictions There are risks associated with giving guest virtual machines write access to entire disks or block devices (such as /dev/sdb ). If a guest virtual machine has access to an entire block device, it can share any volume label or partition table with the host machine. If bugs exist in the host system's partition recognition code, this can create a security risk. Avoid this risk by configuring the host machine to ignore devices assigned to a guest virtual machine. Warning Failing to adhere to storage restrictions can result in risks to security. Live snapshots The backup and restore API in KVM in Red Hat Enterprise Linux does not support live snapshots. Streaming, mirroring, and live-merge Streaming, mirroring, and live-merge are not supported. This prevents block-jobs. I/O throttling Red Hat Enterprise Linux does not support configuration of maximum input and output levels for operations on virtual disks. I/O threads Red Hat Enterprise Linux does not support creation of separate threads for input and output operations on disks with VirtIO interfaces. Memory hot plug and hot unplug Red Hat Enterprise Linux does not support hot plugging or hot unplugging memory from a virtual machine. vhost-user Red Hat Enterprise Linux does not support implementation of a user space vhost interface. CPU hot unplug Red Hat Enterprise Linux does not support hot-unplugging CPUs from a virtual machine. NUMA guest locality for PCIe Red Hat Enterprise Linux does not support binding a virtual PCIe device to a specific NUMA node. Core dumping restrictions Because core dumping is currently implemented on top of migration, it is not supported when device assignment is in use. Realtime kernel KVM currently does not support the realtime kernel, and thus cannot be used on Red Hat Enterprise Linux for Real Time.
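To make the TCG and migration points above easier to verify on a host, the following is a minimal sketch using virsh; the guest name rhel7-guest is a placeholder, and the exact attributes that appear depend on how the guest was defined.
# Sketch only: "rhel7-guest" is a placeholder guest name
# A TCG-based guest reports type='qemu'; a KVM guest reports type='kvm'
virsh dumpxml rhel7-guest | grep "<domain type"
# Before live migration, confirm block devices opened in write mode use cache='none'
virsh dumpxml rhel7-guest | grep "cache="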
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtualization_restrictions-kvm_restrictions
Chapter 1. Configuring a Hyperconverged Infrastructure environment
Chapter 1. Configuring a Hyperconverged Infrastructure environment This section describes how to deploy a Hyperconverged Infrastructure (HCI) environment. A HCI environment contains data plane nodes that host both Ceph Storage and the Compute service. Create an HCI environment, by completing the following high-level tasks: Configuring the data plane node networking. Installing Red Hat Ceph Storage on the data plane nodes. Configuring Red Hat OpenStack Services on OpenShift (RHOSO) to use the Red Hat Ceph Storage cluster. 1.1. Data plane node services list Create an OpenStackDataPlaneNodeSet CR to configure data plane nodes. The openstack-operator reconciles the OpenStackDataPlaneNodeSet CR when an OpenStackDataPlaneDeployment CR is created. These CRs have a service list similar to the following example: Only the services in the services list are configured. Red Hat Ceph Storage must be deployed on the data plane node after the Storage network and NTP are configured but before the Compute service is configured. This means you must edit the services list and make other changes to the CR. Throughout this section, you edit the services list to complete the configuration of the HCI environment. 1.2. Configuring the data plane node networks You must configure the data plane node networks to accommodate the Red Hat Ceph Storage networking requirements. Prerequisites Control plane deployment is complete but has not yet been modified to use Ceph Storage. The data plane nodes have been provisioned with an operating system. The data plane nodes are accessible through an SSH key that Ansible can use. The data plane nodes have disks available to be used as Ceph OSDs. There are a minimum of three available data plane nodes. Ceph Storage clusters must have a minimum of three nodes to ensure redundancy. Procedure Create an OpenStackDataPlaneNodeSet CRD file to represent the data plane nodes. Note Do not create the CR in Red Hat OpenShift yet. Add the ceph-hci-pre service to the list before the configure-os service and remove all other service listings after run-os . The following is an example of the edited list: Note Note the services that you remove from the list. You add them back to the list later. (Optional) The ceph-hci-pre service prepares EDPM nodes to host Red Hat Ceph Storage services after network configuration using the edpm_ceph_hci_pre edpm-ansible role. By default, the edpm_ceph_hci_pre_enabled_services parameter of this role only contains RBD, RGW, and NFS services. If other services, such as the Dashboard, are deployed with HCI nodes; they must be added to the edpm_ceph_hci_pre_enabled_services parameter list. For more information about this role, see edpm_ceph_hci_pre role. Configure the Red Hat Ceph Storage cluster_network for storage management traffic between OSDs. Modify the CR to set edpm-ansible variables so that the edpm_network_config role configures a storage management network which Ceph uses as the cluster_network . The following example has 3 nodes. It assumes the storage management network range is 172.20.0.0/24 and that it is on VLAN23 . The bolded lines are additions for the cluster_network : Note It is not necessary to add the storage management network to the networkAttachments key. Apply the CR: Replace <dataplane_cr_file> with the name of your file. Note Ansible does not configure or validate the networks until the OpenStackDataPlaneDeployment CRD is created. 
Create an OpenStackDataPlaneDeployment CRD, as described in Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide, which has the OpenStackDataPlaneNodeSet CRD file defined above to have Ansible configure the services on the data plane nodes. To confirm the network is configured, complete the following steps: SSH into a data plane node. Use the ip a command to display configured networks. Confirm the storage networks are in the list of configured networks. 1.2.1. Red Hat Ceph Storage MTU settings The example in this procedure changes the MTU of the storage and storage_mgmt networks from 1500 to 9000. An MTU of 9000 is known as a jumbo frame. Even though it is not mandatory to increase the MTU, jumbo frames are used for improved storage performance. If jumbo frames are used, all network switch ports in the data path must be configured to support jumbo frames. MTU changes must also be made for services using the Storage network running on OpenShift. To change the MTU for the OpenShift services connecting to the data plane nodes, update the Node Network Configuration Policy (NNCP) for the base interface and the VLAN interface. It is not necessary to update the Network Attachment Definition (NAD) if the main NAD interface already has the desired MTU. If the MTU of the underlying interface is set to 9000, and it is not specified for the VLAN interface on top of it, then it will default to the value from the underlying interface. If the MTU values are not consistent, issues can occur on the application layer that can cause the Red Hat Ceph Storage cluster to not reach quorum or not support authentication using the CephX protocol. If the MTU is changed and you observe these types of problems, verify all hosts that use the network using jumbo frames can communicate at the chosen MTU value with the ping command, for example: 1.3. Configuring and deploying Red Hat Ceph Storage on data plane nodes Use the cephadm utility to configure and deploy Red Hat Ceph Storage for an HCI environment. 1.3.1. The cephadm utility Use the cephadm utility to configure and deploy Red Hat Ceph Storage on the data plane nodes. The cephadm package must be deployed on at least one data plane node before proceeding; edpm-ansible does not deploy Red Hat Ceph Storage. For additional information and procedures for deploying Red Hat Ceph Storage, see Red Hat Ceph Storage installation in the Red Hat Ceph Storage Installation Guide . 1.3.2. Configuring and deploying Red Hat Ceph Storage Configure and deploy Red Hat Ceph Storage by editing the configuration file and using the cephadm utility. Procedure Edit the Red Hat Ceph Storage configuration file. Add the Storage and Storage Management network ranges. Red Hat Ceph Storage uses the Storage network as the Red Hat Ceph Storage public_network and the Storage Management network as the cluster_network . The following example is for a configuration file entry where the Storage network range is 172.18.0.0/24 and the Storage Management network range is 172.20.0.0/24 : Add collocation boundaries between the Compute service and Ceph OSD services. Boundaries should be set between collocated Compute service and Ceph OSD services to reduce CPU and memory contention. The following is an example for a Ceph configuration file entry with these boundaries set: In this example, the osd_memory_target_autotune parameter is set to true so that the OSD daemons adjust memory consumption based on the osd_memory_target option. 
The autotune_memory_target_ratio defaults to 0.7. This means 70 percent of the total RAM in the system is the starting point from which any memory consumed by non-autotuned Ceph daemons is subtracted. The remaining memory is divided between the OSDs, assuming all OSDs have osd_memory_target_autotune set to true. For HCI deployments, you can set mgr/cephadm/autotune_memory_target_ratio to 0.2 so that more memory is available for the Compute service. For additional information about service collocation, see Collocating services in a HCI environment for NUMA nodes . Note If these values need to be adjusted after the deployment, use the ceph config set osd <key> <value> command. Deploy Ceph Storage with the edited configuration file on a data plane node: USD cephadm bootstrap --config <config_file> --mon-ip <data_plane_node_ip> --skip-monitoring-stack Replace <config_file> with the name of your Ceph configuration file. Replace <data_plane_node_ip> with the Storage network IP address of the data plane node on which Red Hat Ceph Storage will be installed. Note The --skip-monitoring-stack option is used in the cephadm bootstrap command to skip the deployment of monitoring services. This ensures the Red Hat Ceph Storage deployment completes successfully if monitoring services have been previously deployed as part of any other preceding process. If monitoring services have not been deployed, see the Red Hat Ceph Storage documentation for information and procedures on enabling monitoring services. After the Red Hat Ceph Storage cluster is bootstrapped on the first EDPM node, see Red Hat Ceph Storage installation in the Red Hat Ceph Storage Installation Guide to add the other EDPM nodes to the Ceph cluster. 1.3.2.1. Collocating services in a HCI environment for NUMA nodes A two-NUMA node system can host a latency sensitive Compute service workload on one NUMA node and a Ceph OSD workload on the other NUMA node. To configure Ceph OSDs to use a specific NUMA node not being used by the Compute service, use either of the following Ceph OSD configurations: osd_numa_node sets affinity to a NUMA node (-1 for none). osd_numa_auto_affinity automatically sets affinity to the NUMA node where storage and network match. If there are network interfaces on both NUMA nodes and the disk controllers are on NUMA node 0, do the following: Use a network interface on NUMA node 0 for the storage network. Host the Ceph OSD workload on NUMA node 0. Host the Compute service workload on NUMA node 1 and have it use the network interfaces on NUMA node 1. Set osd_numa_auto_affinity to true, as in the initial Ceph configuration file. Alternatively, set the osd_numa_node directly to 0 and clear the osd_numa_auto_affinity parameter so that it defaults to false . When a hyperconverged cluster backfills as a result of an OSD going offline, the backfill process can be slowed down. In exchange for a slower recovery, the backfill activity has less of an impact on the collocated Compute service (nova) workload. Red Hat Ceph Storage has the following defaults to control the rate of backfill activity. osd_recovery_op_priority = 3 osd_max_backfills = 1 osd_recovery_max_active_hdd = 3 osd_recovery_max_active_ssd = 10 1.3.3. Confirming Red Hat Ceph Storage deployment Confirm Red Hat Ceph Storage is deployed before proceeding. Procedure Connect to a data plane node by using SSH. View the status of the Red Hat Ceph Storage cluster: 1.3.4.
Confirming Red Hat Ceph Storage tuning Ensure that Red Hat Ceph Storage is properly tuned before proceeding. Procedure Connect to a data plane node by using SSH. Verify overall Red Hat Ceph Storage tuning with the following commands: Verify the tuning of an OSD with the following commands: Replace <osd_number> with the number of an OSD. For example, to refer to OSD 11, use osd.11 . Verify the default backfill values of an OSD with the following commands: Replace <osd_number> with the number of an OSD. For example, to refer to OSD 11, use osd.11 . 1.4. Configuring the data plane to use the collocated Red Hat Ceph Storage server Although the Red Hat Ceph Storage cluster is physically collocated with the Compute services on the data plane nodes, it is treated as logically separated. Red Hat Ceph Storage must be configured as the storage solution before the data plane nodes can use it. Prerequisites Complete the procedures in Integrating Red Hat Ceph Storage . Procedure Edit the OpenStackDataPlaneNodeSet CR. To define the cephx key and configuration file for the Compute service (nova), use the extraMounts parameter. The following is an example of using the extraMounts parameter for this purpose: Locate the services list in the CR. Edit the services list to restore all of the services removed in Configuring the data plane node networks . Restoring the full services list allows the remaining jobs to be run that complete the configuration of the HCI environment. The following is an example of a full services list with the additional services in bold: Note In addition to restoring the default service list, the ceph-client service is added after the run-os service. The ceph-client service configures EDPM nodes as clients of a Red Hat Ceph Storage server. This service distributes the files necessary for the clients to connect to the Red Hat Ceph Storage server. Create a ConfigMap to set the reserved_host_memory_mb parameter to a value appropriate for your configuration. The following is an example of a ConfigMap used for this purpose: Note The value for the reserved_host_memory_mb parameter may be set so that the Compute service scheduler does not give memory to a virtual machine that a Ceph OSD on the same server needs. The example reserves 5 GB per OSD for 10 OSDs per host in addition to the default reserved memory for the hypervisor. In an IOPS-optimized cluster, you can improve performance by reserving more memory for each OSD. The 5 GB number is provided as a starting point which can be further tuned if necessary. Add reserved-memory-nova to the configMaps list by editing the OpenStackDataPlaneService/nova-custom-ceph file: Apply the CR changes. Replace <dataplane_cr_file> with the name of your file. Note Ansible does not configure or validate the networks until the OpenStackDataPlaneDeployment CRD is created. Create an OpenStackDataPlaneDeployment CRD, as described in Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide, which has the OpenStackDataPlaneNodeSet CRD file defined above to have Ansible configure the services on the data plane nodes.
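Where the note above states that these values can be adjusted after deployment with the ceph config set command, a minimal sketch of such an adjustment, run from a node where the cephadm shell is available, might look like the following; the values shown are illustrative starting points, not recommendations.
# Sketch only: illustrative values, adjust for your environment
cephadm shell -- ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
cephadm shell -- ceph config set osd osd_memory_target_autotune true
cephadm shell -- ceph config set osd osd_numa_auto_affinity true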
[ "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: services: - configure-network - validate-network - install-os - configure-os - run-os - ovn - libvirt - nova", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: services: - download-cache - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-edpm namespace: openstack spec: env: - name: ANSIBLE_FORCE_COLOR value: \"True\" networkAttachments: - ctlplane nodeTemplate: ansible: ansiblePort: 22 ansibleUser: cloud-admin ansibleVars: edpm_ceph_hci_pre_enabled_services: - ceph_mon - ceph_mgr - ceph_osd - ceph_rgw - ceph_nfs - ceph_rgw_frontend - ceph_nfs_frontend edpm_fips_mode: check edpm_iscsid_image: {{ registry_url }}/openstack-iscsid:{{ image_tag }} edpm_logrotate_crond_image: {{ registry_url }}/openstack-cron:{{ image_tag }} edpm_network_config_hide_sensitive_logs: false edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:00:1e:af:6b nic2: 52:54:00:d9:cb:f4 edpm-compute-1: nic1: 52:54:00:f2:bc:af nic2: 52:54:00:f1:c7:dd edpm-compute-2: nic1: 52:54:00:dd:33:14 nic2: 52:54:00:50:fb:c3 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup( vars , networks_lower[network] ~ _mtu ) }} vlan_id: {{ lookup( vars , networks_lower[network] ~ _vlan_id ) }} addresses: - ip_netmask: {{ lookup( vars , networks_lower[network] ~ _ip ) }}/{{ lookup( vars , networks_lower[network] ~ _cidr ) }} routes: {{ lookup( vars , networks_lower[network] ~ _host_routes ) }} {% endfor %} edpm_neutron_metadata_agent_image: {{ registry_url }}/openstack-neutron-metadata-agent-ovn:{{ image_tag }} edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false edpm_selinux_mode: enforcing edpm_sshd_allowed_ranges: - 192.168.122.0/24 - 192.168.111.0/24 edpm_sshd_configure_firewall: true enable_debug: false gather_facts: false image_tag: current-podified neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 service_net_map: nova_api_network: internalapi nova_libvirt_network: internalapi storage_mgmt_cidr: \"24\" storage_mgmt_host_routes: [] storage_mgmt_mtu: 9000 storage_mgmt_vlan_id: 23 storage_mtu: 9000 timesync_ntp_servers: - hostname: pool.ntp.org ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret managementNetwork: ctlplane networks: - defaultRoute: true name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 nodes: edpm-compute-0: ansible: host: 192.168.122.100 hostName: compute-0 networks: - defaultRoute: true fixedIP: 192.168.122.100 name: ctlplane subnetName: subnet1 - name: 
internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: storagemgmt subnetName: subnet1 - name: tenant subnetName: subnet1 edpm-compute-1: ansible: host: 192.168.122.101 hostName: compute-1 networks: - defaultRoute: true fixedIP: 192.168.122.101 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: storagemgmt subnetName: subnet1 - name: tenant subnetName: subnet1 edpm-compute-2: ansible: host: 192.168.122.102 hostName: compute-2 networks: - defaultRoute: true fixedIP: 192.168.122.102 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: storagemgmt subnetName: subnet1 - name: tenant subnetName: subnet1 preProvisioned: true services: - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os", "oc apply -f <dataplane_cr_file>", "ping -M do -s 8972 172.20.0.100", "[global] public_network = 172.18.0.0/24 cluster_network = 172.20.0.0/24", "[osd] osd_memory_target_autotune = true osd_numa_auto_affinity = true [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2", "cephadm shell -- ceph -s", "ceph config dump | grep numa ceph config dump | grep autotune ceph config dump | get mgr", "ceph config get <osd_number> osd_memory_target ceph config get <osd_number> osd_memory_target_autotune ceph config get <osd_number> osd_numa_auto_affinity", "ceph config get <osd_number> osd_recovery_op_priority ceph config get <osd_number> osd_max_backfills ceph config get <osd_number> osd_recovery_max_active_hdd ceph config get <osd_number> osd_recovery_max_active_ssd", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: nodeTemplate: extraMounts: - extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet spec: services: - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - ceph-client - ovn - neutron-metadata - libvirt - nova-custom-ceph", "apiVersion: v1 kind: ConfigMap metadata: name: reserved-memory-nova data: 04-reserved-memory-nova.conf: | [DEFAULT] reserved_host_memory_mb=75000", "kind: OpenStackDataPlaneService <...> spec: configMaps: - ceph-nova - reserved-memory-nova", "oc apply -f <dataplane_cr_file>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_hyperconverged_infrastructure_environment/assembly_configuring-a-hyperconverged-infrastructure-environment
Chapter 9. Premigration checklists
Chapter 9. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 9.1. Resources ❏ If your application uses an internal service network or an external route for communicating with services, the relevant route exists. ❏ If your application uses cluster-level resources, you have re-created them on the target cluster. ❏ You have excluded persistent volumes (PVs), image streams, and other resources that you do not want to migrate. ❏ PV data has been backed up in case an application displays unexpected behavior after migration and corrupts the data. 9.2. Source cluster ❏ The cluster meets the minimum hardware requirements . ❏ You have installed the correct legacy Migration Toolkit for Containers Operator version: operator-3.7.yml on OpenShift Container Platform version 3.7. operator.yml on OpenShift Container Platform versions 3.9 to 4.5. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have performed all the run-once tasks . ❏ You have performed all the environment health checks . ❏ You have checked for PVs with abnormal configurations stuck in a Terminating state by running the following command: $ oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: $ oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: $ oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ You have removed old builds, deployments, and images from each namespace to be migrated by pruning . ❏ The OpenShift image registry uses a supported storage type . ❏ Direct image migration only: The OpenShift image registry is exposed to external traffic. ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: $ oc get csr -A | grep pending -i ❏ The identity provider is working. ❏ You have set the value of the openshift.io/host.generated annotation parameter to true for each OpenShift Container Platform route, which updates the host name of the route for the target cluster. Otherwise, the migrated routes retain the source cluster host name. 9.3. Target cluster ❏ You have installed Migration Toolkit for Containers Operator version 1.5.1. ❏ All MTC prerequisites are met. ❏ The cluster meets the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ The cluster has storage classes defined for the storage types used by the source cluster, for example, block volume, file system, or object storage. Note NFS does not require a defined storage class. ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster.
❏ Internal container image dependencies are met. If an application uses an internal image in the openshift namespace that is not supported by OpenShift Container Platform 4.17, you can manually update the OpenShift Container Platform 3 image stream tag with podman . ❏ The target cluster and the replication repository have sufficient storage space. ❏ The identity provider is working. ❏ DNS records for your application exist on the target cluster. ❏ Certificates that your application uses exist on the target cluster. ❏ You have configured appropriate firewall rules on the target cluster. ❏ You have correctly configured load balancing on the target cluster. ❏ If you migrate objects to an existing namespace on the target cluster that has the same name as the namespace being migrated from the source, the target namespace contains no objects of the same name and type as the objects being migrated. Note Do not create namespaces for your application on the target cluster before migration because this might cause quotas to change. 9.4. Performance ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The memory and CPU usage of the nodes are healthy. ❏ The etcd disk performance of the clusters has been checked with fio .
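For the etcd disk performance item, one way to measure synchronous write latency is a small fio job against the directory that backs etcd. The following is a minimal sketch; the /var/lib/etcd path and the job sizes are assumptions based on commonly cited etcd guidance (99th percentile fdatasync latency should stay below roughly 10 ms), so adapt them to your environment:
# Minimal sketch: measure fdatasync latency on the disk backing etcd. Run on a control plane node.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd --size=22m --bs=2300 --name=etcd-perf-check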
[ "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/premigration-checklists-3-4
Chapter 4. Adding certification components
Chapter 4. Adding certification components 4.1. For OpenStack Infrastructure Containerized applications After creating the new product listing, add the certification components for the newly created product listing. You can configure the following options for the newly added components: Note The component configurations differ for different product categories. Section 4.1.1, "Images" Section 4.1.2, "Certification" Section 4.1.3, "Security" Section 4.1.4, "Repository information" Section 4.1.5, "Component details" Section 4.1.6, "Contact information for containerized applications" Section 4.1.7, "Associated products" To configure the options, go to the Components tab and click on any of the existing components. 4.1.1. Images The Images tab provides the test results for the container images that you submit by using the preflight tool. You have to configure preflight and push your container images to view the test results. To push your container images, click Set up Preflight . For detailed instructions about certification testing, see Running the certification test suite . When your testing is complete you can see two categories of images: Manifest Digests - denotes container images that are available for multiple architectures. Standalone Container Images - denotes container images that are available only for a single architecture. This page provides the following details of your container images: Specific image ID or the SHA ID Image Tag(s) Certification - Certified or Not certified, pass or fail status based on the checks performed. Click on it for more details. Architecture - specific architecture of your image, if applicable. Security - check for any vulnerabilities, if any. Health Index - Container Health Index is a measure of the oldest and most severe security updates available for a container image. 'A' is more up-to-date than 'F'. See Container Health Index grades as used inside the Red Hat Container Catalog for more details. Created - the day on which you submitted the certification. Click the Actions menu to perform the following tasks: Delete Image - click this option to delete your container image when your image is unpublished. Sync Tags - when you have altered your image tag, use this option to synchronize the container image information available on both Red Hat Partner Connect and Red Hat Container catalog . View in Catalog - When your container image is published, click this option to view the published container image on the Red Hat Ecosystem Container catalog . Click Publish , to publish your certified container images. 4.1.2. Certification The Certification tab provides detailed information about the Export control questionnaire and all the certification tests performed for the attached container images. Export Control Questionnaire The Export control questionnaire contains a series of questions through which the Red Hat legal team evaluates the export compliance by third-party vendors. Partner's legal representative must review and answer the questions. Red Hat takes approximately five business days to evaluate the responses and based on the responses Red Hat approves partner or declines partner or defers decision or requests more information. Click Start questionnaire , to enter all the legal information about your product. Click Review to modify the existing details. Note If you are using a version of Universal Base Image (UBI) to build your container image, you can host your image in a private repository. This allows you to skip the Export Compliance questionnaire. 
This form is required only if you are hosting your images on the Red Hat Container Catalog . Validate the functionality of your product on this Red Hat OpenStack release Validate the functionality of your product on this Red Hat OpenStack release by using the Certification tab. You can perform the following functions: Run the Red Hat Certification Tool locally. Download the test plan. Share the test results with the Red Hat certification team. Interact with the certification team, if required. To validate the functionality of your product perform the following steps: If you are a new partner, click Request a partner subscription . When your request is approved, you get active subscriptions added to your account. When you have active partner subscriptions, then click Start certification and then click Go to Red Hat certification tool . A new certification case gets created on the Red Hat Certification portal , and you are redirected to the appropriate certification portal page. The certification team will contact you to start the certification testing process, and will follow up with you in case of a problem. After successful verification, a green check mark is displayed with the validate complete message. To review the validated product details, click Review . Submit your container image for verification Run the certification suite on your container image. See Running certification tests for OpenStack certification . Upload the test results. You can later see the test results on the Images tab. Publish the container image certification on the Red Hat catalog. Note This step certifies your container only. Use the Certifications tab to certify the functionality. 4.1.3. Security The security tab provides the health status of the attached product components. Red Hat uses a Health Index to identify security risks with your components that Red Hat provides through the Red Hat Ecosystem Catalog . The Health Index is a measure of the oldest and most severe security updates available for a container image. An image with a grade of 'A' is more up-to-date than one with a grade of 'F. For more information, see Container Health Index grades as used inside the Red Hat Container Catalog . This tab provides the health index of your images which includes the following details: Image ID Health index 4.1.4. Repository information You can configure the registry and repository details by using the Repository information tab. Enter the required details in the following fields: Field name Description Container registry namespace Registry name set when the container was created. This field becomes non-editable when the container gets published. Outbound repository name Repository name that you have selected or the name obtained from your private registry in which your image is hosted, for example, ubi-minimal. Repository summary Repository summary obtained from the container image. Repository description Repository description obtained from the container image. Instructions for users to get your company's image on the Red Hat Container catalog Provide specific instructions that you want users to follow when they get your container image. This field is applicable only for container images. After configuring all the mandatory fields click Save . Note All the fields marked with an asterisk * are required and must be completed before you can proceed with container certification. 4.1.5. Component details Configure the product component details by using this tab. 
Enter the required details in the following fields: Field name Description Image Type Select the respective image type for your product component. Standalone image - Select this type if you want your image to be deployed either by your product or by users. Component image - Select this type if you want your image to be deployed by your product and not by users. Application categories Select the respective application type of your software product. Host level access Select between the two options: Unprivileged - If your container is isolated from the host. or Privileged - If your container requires special host-level privileges. Note If your product's functionality requires root access, you must select the privileged option, before running the preflight tool. This setting is subject to Red Hat review. Release Category Select between the two options: Generally Available - When you select this option, the application is generally available and supported. or Beta - When you select this option, the application is available as a pre-release candidate. Project name Name of the project for internal purposes. Auto-publish When you enable this option, the container image gets automatically published on the Red Hat Container catalog, after passing all the certification tests. Red Hat OpenStack platform It denotes the version of the OpenStack platform on which you are certifying your containerized application. Note This field is non-editable and is applicable only for containerized applications on OpenStack platform. 4.1.6. Contact information for containerized applications Note Providing information for this tab is optional. In the Contact Information tab, enter the primary technical contact details of your product component. Optional: In the Technical contact email address field, enter the email address of the image maintainer. Optional: To add additional contacts for your component, click + Add new contact . Click Save . 4.1.7. Associated products The Associated Product tab provides the list of products that are associated with your product component along with the following information: Product Name Type Visibility - Published or Not Published Last Activity - number of days before you ran the test To add products to your component, perform the following: If you want to find a product by its name, enter the product name in the Search by name text box and click the search icon. If you are not sure of the product name, click Find a product . From the Add product dialog, select the required product from the Available products list box and click the forward arrow. The selected product is added to the Chosen products list box. Click Update attached products . Added products are listed in the Associated product list. Note All the fields marked with an asterisk * are required and must be completed before you can proceed with the certification. 4.2. For OpenStack Infrastructure Non-containerized applications After creating the new product listing, add the certification components for the newly created product listing. You can configure the following options for the newly added components: Note The component configurations differ for different product categories. Section 4.2.1, "Certification for non-containers" Section 4.2.2, "Component Details" Section 4.2.3, "Contact Information for non containers" Section 4.2.4, "Associated products for non containers" To configure the component options, go to the Components tab and click on any of the existing components. 4.2.1. 
Certification for non-containers Validate the functionality of your product on this Red Hat OpenStack release Validate the functionality of your product on this Red Hat OpenStack release by using the Certification tab. This feature allows you to perform the following functions: Run the Red Hat Certification Tool locally. Share the test results with the Red Hat certification team. Interact with the certification team, if required. To validate the functionality of your product, perform the following steps: If you are a new partner, click Request a partner subscription . When your request is approved, you get active subscriptions added to your account. When you have active partner subscriptions, then click Start certification . Click Go to Red Hat certification tool . A new certification case gets created on the Red Hat Certification portal , and you are redirected to the appropriate component portal page. This page has four tabs: Summary - It includes the Certification Case, Partner Product and Progress details of the new certification case. On the Summary tab, navigate to the Files section and click Upload , to upload your test results. Add any relevant comments in the Discussions section, and then click Add Comment . Test Plans - displays the Test Plan Summary details of the new certification case. Related Certifications - displays the details of the related certification cases. Properties - displays the certification properties. It includes the following details: General - Provide the base certification version and platform information for your product certification and click Update Values : Platform - Select the required information for the following fields from the respective drop-down list: Product Version Base Version Support Case - Add any existing support cases or knowledge base articles that you want to reference with your certification case. OpenStack - Provide the details of the Red Hat OpenStack plugin components for certification and click Update . All the provided details along with the selected features and their respective protocols are available on the CertOps hub. The certification team will contact you to start the certification testing process and will follow up with you in case of a problem. After successful verification, a green check mark is displayed with the validate complete message. To review the validated product details, click Review . 4.2.2. Component Details Enter the required project details in the following fields: Project name - Enter the project name. This name is not published and is only for internal use. Red Hat OpenStack Version - Specifies the Red Hat OpenStack version on which you certify your non-containerized application component. 4.2.3. Contact Information for non containers Note Providing information for this tab is optional. In the Contact Information tab, enter the primary technical contact details of your product component. Optional: In the Technical contact email address field, enter the email address of the image maintainer. Optional: To add additional contacts for your component, click + Add new contact . Click Save . 4.2.4. 
Associated products for non containers The Associated Product tab provides the list of products that are associated with your product component along with the following information: Product Name Type Visibility - Published or Not Published Last Activity - number of days before you ran the test To add products to your component, perform the following: If you want to find a product by its name, enter the product name in the Search by name text box and click the search icon. If you are not sure of the product name, click Find a product . From the Add product dialog, select the required product from the Available products list box and click the forward arrow. The selected product is added to the Chosen products list box. Click Update attached products . Added products are listed in the Associated product list. Note All the fields marked with an asterisk * are required and must be completed before you can proceed with the certification.
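For the container image checks referenced earlier in this chapter, the preflight tool is run against the image from the command line before the results appear on the Images tab. The following is a minimal sketch; the image reference, certification project ID, and API token are placeholders, and the exact options for your workflow are described in Running the certification test suite:
# Minimal sketch: run the certification checks against a candidate image and submit the results.
preflight check container quay.io/example/my-image:v1.0.0 \
    --submit \
    --certification-project-id=<project_id> \
    --pyxis-api-token=<api_token>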
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/adding-certification-components_onboarding-certification-partners
Chapter 13. Volume cloning
Chapter 13. Volume cloning A clone is a duplicate of an existing storage volume that can be used like any standard volume. You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 13.1. Creating a clone Prerequisites The source PVC must be in the Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage → Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) → Clone PVC . Click on the PVC that you want to clone and click Actions → Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone. The storage class can be any RBD storage class; it does not need to be the same as the parent PVC's storage class. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
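The same clone can also be requested from the CLI instead of the web console. The following is a minimal sketch, assuming a parent PVC named rbd-pvc and the default ODF RBD storage class name; adjust the names and access mode for your environment, and keep the requested size equal to the parent PVC:
# Minimal sketch: clone an existing PVC by creating a new PVC whose dataSource points at the parent. Names and storage class are assumptions.
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  accessModes:
    - ReadWriteOnce
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc
  resources:
    requests:
      storage: 10Gi   # must match the size of the parent PVC
EOF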
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/volume-cloning_rhodf
Event-Driven Ansible controller user guide
Event-Driven Ansible controller user guide Red Hat Ansible Automation Platform 2.4 Learn to configure and use Event-Driven Ansible controller to enhance and expand automation Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/event-driven_ansible_controller_user_guide/index
Security and compliance
Security and compliance OpenShift Container Platform 4.14 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "variant: openshift version: 4.14.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress", "oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator", "oc login -u kubeadmin -p <password> https://FQDN:6443", "oc config view --flatten > kubeconfig-newapi", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config", "oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2", "oc get apiserver cluster -o yaml", "spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.14.0 True False False 145m", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2", "oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1", "oc describe service <service_name>", "Annotations: 
service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837", "oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true", "oc get configmap <config_map_name> -o yaml", "apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----", "oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true", "oc get apiservice <api_service_name> -o yaml", "apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>", "oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true", "oc get crd <crd_name> -o yaml", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>", "oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc describe service <service_name>", "service.beta.openshift.io/serving-cert-secret-name: <secret>", "oc delete secret <secret> 1", "oc get secret <service_name>", "NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s", "oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate", "oc delete secret/signing-key -n openshift-service-ca", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. 
-----END CERTIFICATE-----", "cat install-config.yaml", "proxy: httpProxy: http://<username:[email protected]:123/> httpsProxy: http://<username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt", "oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after-", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name==\"must-gather\")].image}')", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 
3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8", "apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: \"2022-10-19T12:06:49Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"43699\" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential 
Eight", "oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: \"2022-10-19T12:07:08Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"44819\" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 
severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1", "apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4", "compliance.openshift.io/product-type: Platform/Node", "apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: \"2022-10-18T20:21:00Z\" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: \"38840\" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable 
operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc get compliancesuites", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT", "oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7", "get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2", "get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 
420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3", "get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 
2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc -n openshift-compliance get profilebundles rhcos4 -oyaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc delete ssb --all -n openshift-compliance", "oc delete ss --all -n openshift-compliance", "oc delete suite --all -n openshift-compliance", "oc delete scan --all -n openshift-compliance", "oc delete profilebundle.compliance --all -n openshift-compliance", "oc delete sub --all -n openshift-compliance", "oc delete csv --all -n openshift-compliance", "oc delete project openshift-compliance", "project.project.openshift.io \"openshift-compliance\" deleted", "oc get project/openshift-compliance", "Error from server (NotFound): namespaces \"openshift-compliance\" not found", "oc explain scansettings", "oc explain scansettingbindings", "oc describe scansettings default -n openshift-compliance", "Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>", "Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: 
NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc create -f <file-name>.yaml -n openshift-compliance", "oc get compliancescan -w -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *", "oc create -f rs-workers.yaml", "oc get scansettings rs-on-workers -n openshift-compliance -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true", "oc get hostedcluster -A", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3", "oc create -n openshift-compliance -f mgmt-tp.yaml", "spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>", "apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" limits: 2 memory: \"128Mi\" cpu: \"500m\" - name: log-aggregator image: 
images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster", "oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges", "oc create -n openshift-compliance -f new-profile-node.yaml 1", "tailoredprofile.compliance.openshift.io/nist-moderate-modified created", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc create -n openshift-compliance -f new-scansettingbinding.yaml", "scansettingbinding.compliance.openshift.io/nist-moderate-modified created", "oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'", "{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }", "oc get pvc -n openshift-compliance rhcos4-moderate-worker", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m", "oc create -n openshift-compliance -f pod.yaml", "apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: \"/workers-scan-results\" name: workers-scan-vol volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker", "oc cp pv-extract:/workers-scan-results -n openshift-compliance .", "oc delete pod pv-extract -n openshift-compliance", "oc get -n openshift-compliance 
compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'", "oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'", "NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'", "spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied", "echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"", "net.ipv4.conf.all.accept_redirects=0", "oc get nodes -n openshift-compliance", "NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.27.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.27.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.27.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.27.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.27.3", "oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=", "node/ip-10-0-166-81.us-east-2.compute.internal labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"", "oc get mcp -w", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master 
operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'", "oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=", "oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=", "NAME STATE workers-scan-no-empty-passwords Outdated", "oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":\"/spec/outdated\"}]'", "oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords", "NAME STATE workers-scan-no-empty-passwords Applied", "oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied", "oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' 
--type=merge", "oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master", "NAME AGE compliance-operator-kubelet-master 2m34s", "oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists", "oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc get mc", "75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange 
seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml", "securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created", "oc get -n openshift-compliance scc restricted-adjusted-compliance", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc get events -n openshift-compliance", "oc describe -n openshift-compliance compliancescan/cis-compliance", "oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'", "date -d @1596184628.955853 --utc", "oc get -n openshift-compliance profilebundle.compliance", "oc get -n openshift-compliance profile.compliance", "oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser", "oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4", "oc logs -n openshift-compliance pods/<pod-name>", "oc describe -n openshift-compliance pod/<pod-name> -c profileparser", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created", "oc get cronjobs", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m", "oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=", "oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels", "NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner", "oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod", "Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>", "oc get compliancecheckresults 
-lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium", "oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc get mc | grep 75-", "75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s", "oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements", "Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod", "NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium", "oc logs -l workload=<workload_name> -c <container_name>", "spec: config: resources: limits: memory: 500Mi", "oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge", "kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "oc get pod ocp4-pci-dss-api-checks-pod -w", "NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m", "timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1", "oc apply -f scansetting.yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2", "podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/", "W0611 20:35:46.486903 
11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.", "oc compliance fetch-raw <object-type> <object-name> -o <output-path>", "oc compliance fetch-raw scansettingbindings my-binding -o /tmp/", "Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master", "ls /tmp/ocp4-cis-node-master/", "ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2", "bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml", "ls resultsdir/worker-scan/", "worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2", "oc compliance rerun-now scansettingbindings my-binding", "Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'", "oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get scansettings -n openshift-compliance", "NAME AGE default 10m default-auto-apply 10m", "oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node", "Creating ScanSettingBinding my-binding", "oc compliance controls profile ocp4-cis-node", "+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+", "oc compliance fetch-fixes profile ocp4-cis -o /tmp", "No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to 
/tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml", "head /tmp/ocp4-api-server-audit-log-maxsize.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100", "oc get complianceremediations -n openshift-compliance", "NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied", "oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp", "Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc", "oc compliance view-result ocp4-cis-scheduler-no-bind-address", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-file-integrity", "oc get deploy -n openshift-file-integrity", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7", "oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity", "oc get fileintegrities -n openshift-file-integrity", "NAME AGE worker-fileintegrity 14s", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"", "Active", "oc get fileintegritynodestatuses", "NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "oc get fileintegritynodestatuses -w", "NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal 
ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]", "oc debug node/ip-10-0-130-192.ec2.internal", "Creating debug namespace/openshift-debug-node-ldfbj Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj", "oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]", "oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! 
Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>", "oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip", "oc get events --field-selector reason=FileIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! 
a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc explain fileintegrity.spec", "oc explain fileintegrity.spec.config", "oc describe cm/worker-fileintegrity", "@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX", "oc extract cm/worker-fileintegrity --keys=aide.conf", "vim aide.conf", "/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db", "!/opt/mydaemon/", "/hostroot/etc/ CONTENT_EX", "oc create cm master-aide-conf --from-file=aide.conf", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity", "oc describe cm/master-fileintegrity | grep /opt/mydaemon", "!/hostroot/opt/mydaemon", "oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=", "ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 
1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55", "oc -n openshift-file-integrity get ds/aide-worker-fileintegrity", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6", "Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-security-profiles", "oc get deploy -n openshift-security-profiles", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"verbosity\":1}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc new-project my-namespace", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc -n my-namespace get seccompprofile profile1 --output wide", "NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json", "oc get sp profile1 --output=jsonpath='{.status.localhostProfile}'", "operator/my-namespace/profile1.json", "spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json", "oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge", "deployment.apps/myapp patched", "oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq .", "{ \"seccompProfile\": { \"localhostProfile\": \"operator/my-namespace/profile1.json\", \"type\": \"localhost\" } }", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}'", "{\"localhostProfile\":\"operator/my-namespace/profile.json\",\"type\":\"Localhost\"}", "oc new-project my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: 
security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1", "oc -n my-namespace get pods", "NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0523 14:19:08.747313 430694 enricher.go:445] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"executable\"=\"/usr/local/bin/redis-server\" \"namespace\"=\"my-namespace\" \"node\"=\"xiyuan-23-5g2q9-worker-eastus2-6rpgf\" \"pid\"=656802 \"pod\"=\"my-pod\" \"syscallID\"=0 \"syscallName\"=\"read\" \"timestamp\"=\"1684851548.745:207179\" \"type\"=\"seccomp\"", "oc -n my-namepace delete pod my-pod", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx-record Installed 55s", "oc get seccompprofiles test-recording-nginx-record -o yaml", "oc new-project nginx-deploy", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container", "oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure", "selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met", "oc -n openshift-security-profiles rsh -c selinuxd ds/spod", "cat /etc/selinux.d/nginx-secure_nginx-deploy.cil", "(block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) )", "semodule -l | grep nginx-secure", "nginx-secure_nginx-deploy", "oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false", "oc label ns nginx-deploy 
--overwrite=true pod-security.kubernetes.io/enforce=privileged", "oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}'", "nginx-secure_nginx-deploy.process", "apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}'", "profile_nginx-binding.process", "oc new-project nginx-secure", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use", "apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure", "apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure metadata: labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc new-project my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1", "oc -n my-namespace get pods", "NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0517 13:55:36.383187 348295 enricher.go:376] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"namespace\"=\"my-namespace\" \"node\"=\"ip-10-0-189-53.us-east-2.compute.internal\" \"perm\"=\"name_bind\" \"pod\"=\"my-pod\" \"profile\"=\"test-recording_redis_6kmrb_1684331729\" 
\"scontext\"=\"system_u:system_r:selinuxrecording.process:s0:c4,c27\" \"tclass\"=\"tcp_socket\" \"tcontext\"=\"system_u:object_r:redis_port_t:s0\" \"timestamp\"=\"1684331735.105:273965\" \"type\"=\"selinux\"", "oc -n my-namepace delete pod my-pod", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SelinuxProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed", "oc get selinuxprofiles test-recording-nginx-record -o yaml", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"allowedSyscalls\": [\"exit\", \"exit_group\", \"futex\", \"nanosleep\"]}}'", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableMemoryOptimization\":true}}'", "apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: \"true\"", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"daemonResourceRequirements\": { \"requests\": {\"memory\": \"256Mi\", \"cpu\": \"250m\"}, \"limits\": {\"memory\": \"512Mi\", \"cpu\": \"500m\"}}}}'", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"priorityClassName\":\"my-priority-class\"}}'", "securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched", "oc get svc/metrics -n openshift-security-profiles", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s", "oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest -n openshift-security-profiles metrics-test -- bash -c 'curl -ks -H \"Authorization: Bearer USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://metrics.openshift-security-profiles/metrics-spod'", "HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. 
TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation=\"delete\"} 1 security_profiles_operator_seccomp_profile_total{operation=\"update\"} 2", "oc get clusterrolebinding spo-metrics-client -o wide", "NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableLogEnricher\":true}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "I0623 12:51:04.257814 1854764 deleg.go:130] setup \"msg\"=\"starting component: log-enricher\" \"buildDate\"=\"1980-01-01T00:00:00Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"unknown\" \"gitTreeState\"=\"clean\" \"goVersion\"=\"go1.16.2\" \"platform\"=\"linux/amd64\" \"version\"=\"0.4.0-dev\" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher \"msg\"=\"Starting log-enricher on node: 127.0.0.1\" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher \"msg\"=\"Connecting to local GRPC server\" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher \"msg\"=\"Reading from file /var/log/audit/audit.log\" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2}", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.205:1061\" \"type\"=\"seccomp\" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1062\" \"type\"=\"seccomp\" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1063\" \"type\"=\"seccomp\" ... 
I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=12 \"syscallName\"=\"brk\" \"timestamp\"=\"1624453150.235:2873\" \"type\"=\"seccomp\" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=21 \"syscallName\"=\"access\" \"timestamp\"=\"1624453150.235:2874\" \"type\"=\"seccomp\" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2875\" \"type\"=\"seccomp\" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=5 \"syscallName\"=\"fstat\" \"timestamp\"=\"1624453150.235:2876\" \"type\"=\"seccomp\" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=9 \"syscallName\"=\"mmap\" \"timestamp\"=\"1624453150.235:2877\" \"type\"=\"seccomp\" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.235:2878\" \"type\"=\"seccomp\" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2879\" \"type\"=\"seccomp\" ...", "spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - \"true\"", "oc -n openshift-security-profiles patch spod spod -p USD(cat /tmp/spod-wh.patch) --type=merge", "oc get MutatingWebhookConfiguration spo-mutating-webhook-configuration -oyaml", "oc -n openshift-security-profiles logs openshift-security-profiles-<id>", "I1019 19:34:14.942464 1 main.go:90] setup \"msg\"=\"starting openshift-security-profiles\" \"buildDate\"=\"2020-10-19T19:31:24Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"a3ef0e1ea6405092268c18f240b62015c247dd9d\" \"gitTreeState\"=\"dirty\" \"goVersion\"=\"go1.15.1\" \"platform\"=\"linux/amd64\" \"version\"=\"0.2.0-dev\" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics \"msg\"=\"metrics server is starting to listen\" \"addr\"=\":8080\" I1019 19:34:15.349076 1 main.go:126] setup \"msg\"=\"starting manager\" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager \"msg\"=\"starting metrics server\" \"path\"=\"/metrics\" I1019 19:34:15.350201 1 controller.go:142] controller \"msg\"=\"Starting EventSource\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" 
\"reconcilerKind\"=\"SeccompProfile\" \"source\"={\"Type\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"defaultAction\":\"\"}}} I1019 19:34:15.450674 1 controller.go:149] controller \"msg\"=\"Starting Controller\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" I1019 19:34:15.450757 1 controller.go:176] controller \"msg\"=\"Starting workers\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"worker count\"=1 I1019 19:34:15.453102 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"nginx-1.19.1\" \"name\"=\"nginx-1.19.1\" \"resource version\"=\"728\" I1019 19:34:15.453618 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"openshift-security-profiles\" \"name\"=\"openshift-security-profiles\" \"resource version\"=\"729\"", "oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload", "profile-block.json profile-complain.json", "oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration", "oc get packagemanifests -n openshift-marketplace | grep tang", "tang-operator Red Hat", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tang-operator namespace: openshift-operators spec: channel: stable 1 installPlanApproval: Automatic name: tang-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4", "oc apply -f tang-operator.yaml", "oc -n openshift-operators get pods", "NAME READY STATUS RESTARTS AGE tang-operator-controller-manager-694b754bd6-4zk7x 2/2 Running 0 12s", "oc -n nbde describe tangserver", "... Status: Active Keys: File Name: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: - sha1: \"PvYQKtrTuYsMV2AomUeHrUWkCGg\" 1", "oc apply -f minimal-keyretrieve-rotate-tangserver.yaml", "oc -n nbde describe tangserver", "... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Hidden Keys: File Name: .QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg.jwk Generated: 2023-10-25 15:37:29.126928965 +0000 Hidden: 2023-10-25 15:38:13.515467436 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "oc -n nbde describe tangserver", "... Status: Active Keys: File Name: PvYQKtrTuYsMV2AomUeHrUWkCGg.jwk Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: [] 1", "oc apply -f hidden-keys-deletion-tangserver.yaml", "oc -n nbde describe tangserver", "... 
Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Status: Ready: 1 Running: 1 Service External URL: http://35.222.247.84:7500/adv Tang Server Error: No Events: ...", "curl 2> /dev/null http://34.28.173.205:7500/adv | jq", "{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }", "oc -n nbde describe tangserver", "... Spec: ... Status: Ready: 1 Running: 1 Service External URL: http://34.28.173.205:7500/adv Tang Server Error: No Events: ...", "curl 2> /dev/null http://34.28.173.205:7500/adv | jq", "{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }", "oc get pods -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme:", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4", "oc patch ingress/<ingress-name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}' -n <namespace>", "oc create -f acme-cluster-issuer.yaml", "apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1", "oc create -f namespace.yaml", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 spec: ingressClassName: openshift-default 4 tls: - hosts: - <hostname> 5 secretName: sample-tls 6 rules: - host: <hostname> 7 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 8 port: number: 80", "oc create -f ingress.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \\ 1 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: \"aws-secret\" 9 key: \"awsSecretAccessKey\" 10", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project 
<issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \\ 1 2 3 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud", "oc create -f issuer.yaml", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - \"<domain_name>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer", "oc create -f 
certificate.yaml", "oc get certificate -w -n <issuer_namespace>", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: \"api.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"api.<cluster_base_domain>\" 4 issuerRef: name: <issuer_name> 5 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n openshift-config", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: \"apps.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"apps.<cluster_base_domain>\" 4 - \"*.apps.<cluster_base_domain>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n openshift-ingress", "oc label namespace cert-manager openshift.io/cluster-monitoring=true", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager", "oc create -f monitoring.yaml", "{instance=\"<endpoint>\"} 1", "{endpoint=\"tcp-prometheus-servicemonitor\"}", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"UNSUPPORTED_ADDON_FEATURES\",\"value\":\"IstioCSR=true\"}]}}}'", "oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator", "deployment \"cert-manager-operator-controller-manager\" successfully rolled out", "apiVersion: cert-manager.io/v1 kind: Issuer 1 metadata: name: selfsigned namespace: <istio_project_name> 2 spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: <istio_project_name> spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer 3 group: cert-manager.io --- kind: Issuer metadata: name: istio-ca namespace: <istio_project_name> 4 spec: ca: secretName: istio-ca", "oc get issuer istio-ca -n <istio_project_name>", "NAME READY AGE istio-ca True 3m", "oc new-project <istio_csr_project_name>", "apiVersion: operator.openshift.io/v1alpha1 kind: IstioCSR metadata: name: default namespace: <istio_csr_project_name> spec: IstioCSRConfig: certManager: issuerRef: name: istio-ca 1 kind: Issuer 2 group: cert-manager.io istiodTLSConfig: trustDomain: cluster.local istio: namespace: istio-system", "oc create -f IstioCSR.yaml", "oc get deployment -n <istio_csr_project_name>", "NAME READY UP-TO-DATE AVAILABLE AGE cert-manager-istio-csr 1/1 1 1 24s", 
"oc get pod -n <istio_csr_project_name>", "NAME READY STATUS RESTARTS AGE cert-manager-istio-csr-5c979f9b7c-bv57w 1/1 Running 0 45s", "oc -n <istio_csr_project_name> logs <istio_csr_pod_name>", "oc -n cert-manager-operator logs <cert_manager_operator_pod_name>", "oc -n <istio-csr_project_name> delete istiocsrs.operator.openshift.io default", "oc get clusterrolebindings,clusterroles -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\"", "oc get certificate,deployments,services,serviceaccounts -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>", "oc get roles,rolebindings -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>", "oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>", "oc create configmap trusted-ca -n cert-manager", "oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}}'", "oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && rollout status deployment/cert-manager -n cert-manager && rollout status deployment/cert-manager-webhook -n cert-manager && rollout status deployment/cert-manager-cainjector -n cert-manager", "deployment \"cert-manager-operator-controller-manager\" successfully rolled out deployment \"cert-manager\" successfully rolled out deployment \"cert-manager-webhook\" successfully rolled out deployment \"cert-manager-cainjector\" successfully rolled out", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'}", "[{\"mountPath\":\"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt\",\"name\":\"trusted-ca\",\"subPath\":\"ca-bundle.crt\"}]", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes}", "[{\"configMap\":{\"defaultMode\":420,\"name\":\"trusted-ca\"},\"name\":\"trusted-ca\"}]", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml", "env: - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS>", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<host>:<port>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8", "oc get pods -n cert-manager -o yaml", "metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - 
--cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager spec: containers: - args: - --v=4", "oc get certificate", "NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--enable-certificate-owner-ref'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml", "metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager spec: containers: - args: - --enable-certificate-owner-ref", "oc get deployment -n cert-manager", "NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3", "oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 \"", "certmanager.operator.openshift.io/cluster patched", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi", "oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 1 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 2 webhookConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 3 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 4 cainjectorConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 5 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule\" 6", "oc get pods -n cert-manager -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 
cert-manager-58d9c69db4-78mzp 1/1 Running 0 10m 10.129.0.36 ip-10-0-1-106.ec2.internal <none> <none> cert-manager-cainjector-85b6987c66-rhzf7 1/1 Running 0 11m 10.128.0.39 ip-10-0-1-136.ec2.internal <none> <none> cert-manager-webhook-7f54b4b858-29bsp 1/1 Running 0 11m 10.129.0.35 ip-10-0-1-106.ec2.internal <none> <none>", "oc get deployments -n cert-manager -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}{.spec.template.spec.nodeSelector}{\"\\n\"}{.spec.template.spec.tolerations}{\"\\n\\n\"}{end}'", "cert-manager {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-cainjector {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-webhook {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}]", "oc get events -n cert-manager --field-selector reason=Scheduled", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl aws create-iam-roles --name <user_defined_name> --region=<aws_region> --credentials-requests-dir=<path_to_credrequests_dir> --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir>", "2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds", "oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list", "pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token", "oc edit certmanager.operator cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager spec: logLevel: Normal 1", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"OPERATOR_LOG_LEVEL\",\"value\":\"v\"}]}}}' 1", "oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container", "deployments/cert-manager-operator-controller-manager, container 
kube-rbac-proxy OPERATOR_LOG_LEVEL=9 deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9", "oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl gcp create-service-accounts --name <user_defined_name> --output-dir=<path_to_output_dir> --credentials-requests-dir=<path_to_credrequests_dir> --workload-identity-pool <workload_identity_pool> --workload-identity-provider <workload_identity_provider> --project <gcp_project_id>", "ccoctl gcp create-service-accounts --name abcde-20230525-4bac2781 --output-dir=/home/outputdir --credentials-requests-dir=/home/credentials-requests --workload-identity-pool abcde-20230525-4bac2781 --workload-identity-provider abcde-20230525-4bac2781 --project openshift-gcp-devel", "ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token - mountPath: /.config/gcloud name: cloud-credentials volumes: - name: bound-sa-token projected: sources: - serviceAccountToken: audience: openshift path: token - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"aws-creds\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: - mountPath: /.aws name: cloud-credentials volumes: - name: cloud-credentials secret: secretName: aws-creds", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: 
providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: volumeMounts: - mountPath: /.config/gcloud name: cloud-credentials . volumes: - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"ad209ce1-fec7-4130-8192-c4cc63f1d8cd\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\",\"uid\":\"dd4997e3-d565-4e37-80f8-7fc122ccd785\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-controller-manager\",\"system:authenticated\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"openshift-kube-controller-manager\",\"name\":\"cert-recovery-controller-lock\",\"uid\":\"5c57190b-6993-425d-8101-8337e48c7548\",\"apiVersion\":\"v1\",\"resourceVersion\":\"574307\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2020-04-02T08:27:20.200962Z\",\"stageTimestamp\":\"2020-04-02T08:27:20.206710Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:kube-controller-manager-recovery\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"localhost-recovery-client/openshift-kube-controller-manager\\\"\"}}", "oc adm node-logs --role=master --path=openshift-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=openshift-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log", 
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"381acf6d-5f30-4c7d-8175-c9c317ae5893\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/metrics\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"uid\":\"825b60a0-3976-4861-a342-3b2b561e8f82\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.129.2.6\"],\"userAgent\":\"Prometheus/2.23.0\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:02:04.086545Z\",\"stageTimestamp\":\"2021-03-08T18:02:04.107102Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"prometheus-k8s\\\" of ClusterRole \\\"prometheus-k8s\\\" to ServiceAccount \\\"prometheus-k8s/openshift-monitoring\\\"\"}}", "oc adm node-logs --role=master --path=kube-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=kube-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"uid\":\"2574b041-f3c8-44e6-a057-baef7aa81516\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-scheduler-operator\",\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.8\"],\"userAgent\":\"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"serviceaccounts\",\"namespace\":\"openshift-kube-scheduler\",\"name\":\"openshift-kube-scheduler-sa\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:06:42.512619Z\",\"stageTimestamp\":\"2021-03-08T18:06:42.516145Z\",\"annotations\":{\"authentication.k8s.io/legacy-token\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:cluster-kube-scheduler-operator\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\\\"\"}}", "oc adm node-logs --role=master --path=oauth-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 
--path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/user.openshift.io/v1/users/~\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.0.32.4\",\"10.128.0.1\"],\"userAgent\":\"dockerregistry/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"users\",\"name\":\"~\",\"apiGroup\":\"user.openshift.io\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T17:47:43.653187Z\",\"stageTimestamp\":\"2021-03-08T17:47:43.660187Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"basic-users\\\" of ClusterRole \\\"basic-user\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm node-logs --role=master --path=oauth-server/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-server/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"13c20345-f33b-4b7d-b3b6-e7793f805621\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/login\",\"verb\":\"post\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.128.2.6\"],\"userAgent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0\",\"responseStatus\":{\"metadata\":{},\"code\":302},\"requestReceivedTimestamp\":\"2022-05-11T17:31:16.280155Z\",\"stageTimestamp\":\"2022-05-11T17:31:16.297083Z\",\"annotations\":{\"authentication.openshift.io/decision\":\"error\",\"authentication.openshift.io/username\":\"kubeadmin\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.user.username == \"myusername\")'", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.userAgent == \"cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\")'", "oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq 'select(.requestURI | startswith(\"/apis/apiextensions.k8s.io/v1beta1\")) | .userAgent'", "oc adm node-logs node-1.example.com --path=oauth-apiserver/audit.log | jq 'select(.verb != \"get\")'", "oc adm node-logs node-1.example.com --path=oauth-server/audit.log | jq 'select(.annotations[\"authentication.openshift.io/username\"] != null and .annotations[\"authentication.openshift.io/decision\"] == \"error\")'", "oc adm must-gather -- /usr/bin/gather_audit_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1", "oc get kubeapiserver -o=jsonpath='{range 
.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc explain <component>.spec.tlsSecurityProfile.<profile> 1", "oc explain apiserver.spec.tlsSecurityProfile.intermediate", "KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2", "oc explain <component>.spec.tlsSecurityProfile 1", "oc explain ingresscontroller.spec.tlsSecurityProfile", "KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 
3 type <string>", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old", "oc edit APIServer cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe apiserver cluster", "Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc describe etcd cluster", "Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get pods -n <namespace>", "oc get pods -n workshop", "NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s", "oc get pod parksmap-1-4xkwf -n 
workshop -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2", "oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json", "seccompProfiles: - localhost/<custom-name>.json 1", "spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1", "oc edit apiserver.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1", "oc edit apiserver", "spec: encryption: type: aesgcm 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators", "oc get packagemanifests container-security-operator -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{\"\\n\"}{end}' | awk '{print \"STARTING_CSV=\" USD1 \" CHANNEL=\" USD2 }' | sort -Vr | head -1", 
"STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: USD{CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: USD{STARTING_CSV} 2", "oc apply -f container-security-operator.yaml", "subscription.operators.coreos.com/container-security-operator created", "oc get vuln --all-namespaces", "NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s", "oc describe vuln --namespace mynamespace sha256.ac50e3752", "Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries", "oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com", "customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted", "echo plaintext | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' -y >/tmp/encrypted.oldkey", "clevis decrypt </tmp/encrypted.oldkey", "tang-show-keys 7500", "36AHjNH3NZDSnlONLz1-V4ie6t8", "cd /var/db/tang/", "ls -A1", "36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk", "for key in *.jwk; do mv -- \"USDkey\" \".USDkey\"; done", "/usr/libexec/tangd-keygen /var/db/tang", "ls -A1", ".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "tang-show-keys 7500", "WOjQYkyK7DxY_T5pMncMO5w0f6E", "clevis decrypt </tmp/encrypted.oldkey", "apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - \"/sbin/chroot\" - \"/host\" - \"/bin/bash\" - \"-ec\" args: - | rm -f /tmp/rekey-complete || true echo \"Current tang pin:\" clevis-luks-list -d USDROOT_DEV -s 1 echo \"Applying new tang pin: USDNEW_TANG_PIN\" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c \"USDNEW_TANG_PIN\" echo \"Pin applied successfully\" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon", "oc apply -f tang-rekey.yaml", "oc get -n openshift-machine-config-operator ds tang-rekey", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s", "oc get -n openshift-machine-config-operator ds tang-rekey", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 
tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h", "echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver02:7500\",\"thp\":\"badthumbprint\"}' | clevis decrypt", "Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'!", "echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver03:7500\",\"thp\":\"goodthumbprint\"}' | clevis decrypt", "okay", "oc get pods -A | grep tang-rekey", "openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m", "oc logs tang-rekey-7ks6h", "Current tang pin: 1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://10.46.55.192:7500\"},{\"url\":\"http://10.46.55.192:7501\"},{\"url\":\"http://10.46.55.192:7502\"}]}}' Applying new tang pin: {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} Updating binding Binding edited successfully Pin applied successfully", "cd /var/db/tang/", "ls -A1", ".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "rm .*.jwk", "ls -A1", "Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "tang-show-keys 7500", "WOjQYkyK7DxY_T5pMncMO5w0f6E", "clevis decrypt </tmp/encryptValidation", "Error communicating with the server!", "sudo clevis luks pass -d /dev/vda2 -s 1", "sudo clevis luks regen -d /dev/vda2 -s 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/security_and_compliance/index
Logging
Logging OpenShift Container Platform 4.7 OpenShift Logging installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/index
3.5. Maintaining Consistent Schema
3.5. Maintaining Consistent Schema A consistent schema within Directory Server helps LDAP client applications locate directory entries. Using an inconsistent schema makes it very difficult to efficiently locate information in the directory tree. Inconsistent schema use different attributes or formats to store the same information. Maintain schema consistency in the following ways: Use schema checking to ensure that attributes and object classes conform to the schema rules. Use syntax validation to ensure that attribute values match the required attribute syntax. Select and apply a consistent data format. 3.5.1. Schema Checking Schema checking ensures that all new or modified directory entries conform to the schema rules. When the rules are violated, the directory rejects the requested change. Note Schema checking checks only that the proper attributes are present. To verify that attribute values are in the correct syntax, use syntax validation, as described in Section 3.5.2, "Syntax Validation" . By default, the directory enables schema checking. Red Hat recommends not disabling this feature. For information on enabling and disabling schema checking, see the Red Hat Directory Server Administration Guide . With schema checking enabled, be attentive to required and allowed attributes as defined by the object classes. Object class definitions usually contain at least one required attribute and one or more optional attributes. Optional attributes are attributes that can be, but are not required to be, added to the directory entry. Attempting to add an attribute to an entry that is neither required nor allowed according to the entry's object class definition causes the Directory Server to return an object class violation message. For example, if an entry is defined to use the organizationalPerson object class, then the common name ( cn ) and surname ( sn ) attributes are required for the entry. That is, values for these attributes must be set when the entry is created. In addition, there is a long list of attributes that can optionally be used on the entry, including descriptive attributes like telephoneNumber , uid , streetAddress , and userPassword . 3.5.2. Syntax Validation Syntax validation means that the Directory Server checks that the value of an attribute matches the required syntax for that attribute. For example, syntax validation will confirm that a new telephoneNumber attribute actually has a valid telephone number for its value. 3.5.2.1. Overview of Syntax Validation By default, syntax validation is enabled. This is the most basic syntax validation. As with schema checking, this validates any directory modification and rejects changes that violate the syntax rules. Additional settings can be optionally configured so that syntax validation can log warning messages about syntax violations and then either reject the modification or allow the modification process to succeed. Syntax validation checks LDAP operations where a new attribute value is added, either because a new attribute is added or because an attribute value is changed. Syntax validation does not process existing attributes or attributes added through database operations like replication. Existing attributes can be validated using a special script, syntax-validate.pl . This feature validates all attribute syntaxes, with the exception of binary syntaxes (which cannot be verified) and non-standard syntaxes, which do not have a defined required format. 
The syntaxes are validated against RFC 4514 , except for DNs, which are validated against the less strict RFC 1779 or RFC 2253 . (Strict DN validation can be configured.) 3.5.2.2. Syntax Validation and Other Directory Server Operations Syntax validation is mainly relevant for standard LDAP operations like creating entries (add) or editing attributes (modify). Validating attribute syntax can impact other Directory Server operations, however. Database Encryption For normal LDAP operations, an attribute is encrypted just before the value is written to the database. This means That encryption occurs after the attribute syntax is validated. Encrypted databases (as described in Section 9.8, "Encrypting the Database" ) can be exported and imported. Normally, it is strongly recommended that these export and import operations are done with the -E flag with db2ldif and ldif2db , which allows syntax validation to occur just fine for the import operation. However, if the encrypted database is exported without using the -E flag (which is not supported), then an LDIF with encrypted values is created. When this LDIF is then imported, the encrypted attributes cannot be validated, a warning is logged, and attribute validation is skipped in the imported entry. Synchronization There may be differences in the allowed or enforced syntaxes for attributes in Windows Active Directory entries and Red Hat Directory Server entries. In that case, the Active Directory values could not be properly synced over because syntax validation enforces the RFC standards in the Directory Server entries. Replication If the Directory Server 11.0 instance is a supplier which replicates its changes to a consumer, then there is no issue with using syntax validation. However, if the supplier in replication is an older version of Directory Server or has syntax validation disabled, then syntax validation should not be used on the 11.0 consumer because the Directory Server 11.0 consumer may reject attribute values that the supplier allows. 3.5.3. Selecting Consistent Data Formats LDAP schema allows any data to be placed on any attribute value. However, it is important to store data consistently in the directory tree by selecting a format appropriate for the LDAP client applications and directory users. With the LDAP protocol and Directory Server, data must be represented in the data formats specified in RFC 2252. For example, the correct LDAP format for telephone numbers is defined in two ITU-T recommendations documents: ITU-T Recommendation E.123 . Notation for national and international telephone numbers. ITU-T Recommendation E.163 . Numbering plan for the international telephone services. For example, a US phone number is formatted as +1 555 222 1717 . As another example, the postalAddress attribute expects an attribute value in the form of a multi-line string that uses dollar signs (USD) as line delimiters. A properly formatted directory entry appears as follows: Attributes can require strings, binary input, integers, and other formats. The allowed format is set in the schema definition for the attribute. 3.5.4. Maintaining Consistency in Replicated Schema When the directory schema is edited, the changes are recorded in the changelog. During replication, the changelog is scanned for changes, and any changes are replicated. Maintaining consistency in replicated schema allows replication to continue smoothly. 
Consider the following points for maintaining consistent schema in a replicated environment: Do not modify the schema on a read-only replica. Modifying the schema on a read-only replica introduces an inconsistency in the schema and causes replication to fail. Do not create two attributes with the same name that use different syntaxes. If an attribute is created in a read-write replica that has the same name as an attribute on the supplier replica but has a different syntax from the attribute on the supplier, replication will fail.
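The formatting rules described in Section 3.5.3 (E.123-style telephone numbers, dollar-sign-delimited postalAddress values, and the required cn and sn attributes of person entries) lend themselves to a small client-side helper. The following Python sketch builds a consistently formatted LDIF entry; the helper names and the sample entry are illustrative only and are not part of Directory Server.

# Minimal sketch: normalize attribute values to consistent formats before
# writing them to an LDIF entry. Illustrative only; not shipped with
# Red Hat Directory Server.

def format_telephone(number):
    """Return an E.123-style number such as +1 555 222 1717."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) == 10:              # assume a US number without a country code
        digits = "1" + digits
    return "+{} {} {} {}".format(digits[:-10], digits[-10:-7], digits[-7:-4], digits[-4:])

def format_postal_address(lines):
    """postalAddress is a multi-line string that uses dollar signs as line delimiters."""
    return "$".join(line.strip() for line in lines)

def to_ldif(dn, attributes):
    """Render a list of (name, value) pairs as a simple LDIF entry."""
    out = ["dn: {}".format(dn)]
    out += ["{}: {}".format(name, value) for name, value in attributes]
    return "\n".join(out) + "\n"

entry = to_ldif(
    "cn=John Doe,ou=People,dc=example,dc=com",
    [
        ("objectClass", "top"),
        ("objectClass", "person"),
        ("objectClass", "organizationalPerson"),
        ("cn", "John Doe"),            # required by the person object class
        ("sn", "Doe"),                 # required by the person object class
        ("telephoneNumber", format_telephone("(555) 222-1717")),
        ("postalAddress", format_postal_address(
            ["1206 Directory Drive", "Pleasant View, MN", "34200"])),
    ],
)
print(entry)

The printed entry stores the address in the dollar-delimited form shown in the postalAddress example above.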
[ "postalAddress: 1206 Directory DriveUSDPleasant View, MNUSD34200" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_schema-maintaining_consistent_schema
Appendix D. Understanding the he_gluster_vars.json file
Appendix D. Understanding the he_gluster_vars.json file The he_gluster_vars.json file is an example Ansible variable file. The variables in this file need to be defined in order to deploy Red Hat Hyperconverged Infrastructure for Virtualization. You can find an example file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json on any hyperconverged host. Example he_gluster_vars.json file Red Hat recommends encrypting this file. See Working with files encrypted using Ansible Vault for more information. D.1. Required variables he_appliance_password The password for the hosted engine. For a production cluster, use an encrypted value created with Ansible Vault. he_admin_password The password for the admin account of the hosted engine. For a production cluster, use an encrypted value created with Ansible Vault. he_domain_type The type of storage domain. Set to glusterfs . he_fqdn The FQDN for the hosted engine virtual machine. he_vm_mac_addr The MAC address for the appropriate network device of the hosted engine virtual machine. You can skip this option for hosted deployment with static IP configuration as in such cases the MAC address for Hosted Engine is automatically generated. he_default_gateway The FQDN of the gateway to be used. he_mgmt_network The name of the management network. Set to ovirtmgmt . he_storage_domain_name The name of the storage domain to create for the hosted engine. Set to HostedEngine . he_storage_domain_path The path of the Gluster volume that provides the storage domain. Set to /engine . he_storage_domain_addr The back-end FQDN of the first host providing the engine domain. he_mount_options Specifies additional mount options. The he_mount_option is not required for IPv4 based single node deployment of Red Hat Hyperconverged Infrastructure for Virtualization. For a three node deployment with IPv6 configurations, set: For a single node deployment with IPv6 configurations, set: he_bridge_if The name of the interface to use for bridge creation. he_enable_hc_gluster_service Enables Gluster services. Set to true . he_mem_size_MB The amount of memory allocated to the hosted engine virtual machine in megabytes. he_cluster The name of the cluster in which the hyperconverged hosts are placed. he_vcpus The amount of CPUs used on the engine VM. By default 4 VCPUs are allocated for Hosted Engine Virtual Machine. D.2. Required variables for static network configurations DHCP configuration is used on the Hosted Engine VM by default. However, if you want to use static IP or FQDN, define the following variables: he_vm_ip_addr Static IP address for Hosted Engine VM (IPv4 or IPv6). he_vm_ip_prefix IP prefix for Hosted Engine VM (IPv4 or IPv6). he_dns_addr DNS server for Hosted Engine VM (IPv4 or IPv6). he_default_gateway Default gateway for Hosted Engine VM (IPv4 or IPv6). he_vm_etc_hosts Specifies Hosted Engine VM IP address and FQDN to /etc/hosts on the host, boolean value. Example he_gluster_vars.json file with static Hosted Engine configuration Note If DNS is not available, use ping for he_network_test instead of dns .
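Because a missing or mistyped variable only surfaces when the deployment playbook fails, it can help to lint the variable file first. The following Python sketch checks the file against the required variables listed above; it is illustrative only, is not shipped with the product, and assumes an unencrypted copy of the file (decrypt a working copy first if you protect it with Ansible Vault).

# Minimal sketch: sanity-check an he_gluster_vars.json file before deployment.
# The required-key list and fixed values mirror the descriptions above; the
# script itself is an illustration only.
import json
import sys

REQUIRED = [
    "he_appliance_password", "he_admin_password", "he_domain_type",
    "he_fqdn", "he_default_gateway", "he_mgmt_network",
    "he_storage_domain_name", "he_storage_domain_path",
    "he_storage_domain_addr", "he_bridge_if",
    "he_enable_hc_gluster_service", "he_mem_size_MB", "he_cluster",
    "he_vcpus",
]
FIXED = {                      # values the appendix says must be set exactly
    "he_domain_type": "glusterfs",
    "he_mgmt_network": "ovirtmgmt",
    "he_storage_domain_name": "HostedEngine",
    "he_storage_domain_path": "/engine",
}

def check(path):
    with open(path) as fh:
        data = json.load(fh)
    problems = ["missing variable: {}".format(key) for key in REQUIRED if key not in data]
    problems += [
        "{} should be {!r}, found {!r}".format(key, expected, data[key])
        for key, expected in FIXED.items()
        if key in data and data[key] != expected
    ]
    return problems

if __name__ == "__main__":
    issues = check(sys.argv[1] if len(sys.argv) > 1 else "he_gluster_vars.json")
    print("\n".join(issues) if issues else "all required variables present")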
[ "{ \"he_appliance_password\": \"encrypt-password-using-ansible-vault\", \"he_admin_password\": \"UI-password-for-login\", \"he_domain_type\": \"glusterfs\", \"he_fqdn\": \"FQDN-for-Hosted-Engine\", \"he_vm_mac_addr\": \"Valid MAC address\", \"he_default_gateway\": \"Valid Gateway\", \"he_mgmt_network\": \"ovirtmgmt\", \"he_storage_domain_name\": \"HostedEngine\", \"he_storage_domain_path\": \"/engine\", \"he_storage_domain_addr\": \"host1-backend-network-FQDN\", \"he_mount_options\": \"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\", \"he_bridge_if\": \"interface name for bridge creation\", \"he_enable_hc_gluster_service\": true, \"he_mem_size_MB\": \"16384\", \"he_cluster\": \"Default\", \"he_vcpus\": \"4\" }", "For a three node deployment with IPv4 configurations, set:", "\"he_mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\"", "\"he_mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\",xlator-option='transport.address-family=inet6'\"", "\"he_mount_options\":\"xlator-option='transport.address-family=inet6'\"", "{ \"he_appliance_password\": \"mybadappliancepassword\", \"he_admin_password\": \"mybadadminpassword\", \"he_domain_type\": \"glusterfs\", \"he_fqdn\": \"engine.example.com\", \"he_vm_mac_addr\": \"00:01:02:03:04:05\", \"he_default_gateway\": \"gateway.example.com\", \"he_mgmt_network\": \"ovirtmgmt\", \"he_storage_domain_name\": \"HostedEngine\", \"he_storage_domain_path\": \"/engine\", \"he_storage_domain_addr\": \"host1-backend.example.com\", \"he_mount_options\": \"backup-volfile-servers=host2-backend.example.com:host3-backend.example.com\", \"he_bridge_if\": \"interface name for bridge creation\", \"he_enable_hc_gluster_service\": true, \"he_mem_size_MB\": \"16384\", \"he_cluster\": \"Default\", \"he_vm_ip_addr\": \"10.70.34.43\", \"he_vm_ip_prefix\": \"24\", \"he_dns_addr\": \"10.70.34.6\", \"he_default_gateway\": \"10.70.34.255\", \"he_vm_etc_hosts\": \"false\", \"he_network_test\": \"ping\" }", "Example: \"he_network_test\": \"ping\"" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/upgrading_red_hat_hyperconverged_infrastructure_for_virtualization/understanding-the-he-gluster-vars-json-file
Chapter 10. Authenticating in the API
Chapter 10. Authenticating in the API You can use the following authentication methods in the API: Session authentication Basic authentication OAuth 2 token authentication Single sign-on authentication Automation controller is designed for organizations to centralize and control their automation with a visual dashboard for out-of-the box control while providing a REST API to integrate with your other tools on a deeper level. Automation controller supports several authentication methods to make it easy to embed automation controller into existing tools and processes. This ensures that the right people can access its resources. 10.1. Using session authentication You can use session authentication when logging in directly to the automation controller's API or UI to manually create resources, such as inventories, projects, and job templates and start jobs in the browser. With this method, you can remain logged in for a prolonged period of time, not just for that HTTP request. For example, when browsing the API or UI in a browser such as Chrome or Mozilla Firefox. When a user logs in, a session cookie is created, this means that they can remain logged in when navigating to different pages within automation controller. The following image represents the communication that occurs between the client and server in a session: Use the curl tool to see the activity that occurs when you log in to automation controller. Procedure Use GET to go to the /api/login/ endpoint to get the csrftoken cookie: curl -k -c - https://<controller-host>/api/login/ localhost FALSE / FALSE 0 csrftoken AswSFn5p1qQvaX4KoRZN6A5yer0Pq0VG2cXMTzZnzuhaY0L4tiidYqwf5PXZckuj POST to the /api/login/ endpoint with username, password, and X-CSRFToken=<token-value> : curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \ --referer https://<awx-host>/api/login/ \ -H 'X-CSRFToken: K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' \ --data 'username=root&password=reverse' \ --cookie 'csrftoken=K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' \ https://<awx-host>/api/login/ -k -D - -o /dev/null All of this is done by automation controller when you log in to the UI or API in the browser, and you must only use it when authenticating in the browser. For programmatic integration with automation controller, see OAuth2 token authentication . The following is an example of a typical response: Server: nginx Date: <current date> Content-Type: text/html; charset=utf-8 Content-Length: 0 Connection: keep-alive Location: /accounts/profile/ X-API-Session-Cookie-Name: awx_sessionid Expires: <date> Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private Vary: Cookie, Accept-Language, Origin Session-Timeout: 1800 Content-Language: en X-API-Total-Time: 0.377s X-API-Request-Id: 700826696425433fb0c8807cd40c00a0 Access-Control-Expose-Headers: X-API-Request-Id Set-Cookie: userLoggedIn=true; Path=/ Set-Cookie: current_user=<user cookie data>; Path=/ Set-Cookie: csrftoken=<csrftoken>; Path=/; SameSite=Lax Set-Cookie: awx_sessionid=<your session id>; expires=<date>; HttpOnly; Max-Age=1800; Path=/; SameSite=Lax Strict-Transport-Security: max-age=15768000 When a user is successfully authenticated with this method, the server responds with a header called X-API-Session-Cookie-Name , indicating the configured name of the session cookie. The default value is awx_session_id which you can see later in the Set-Cookie headers. 
Note You can change the session expiration time by specifying it in the SESSION_COOKIE_AGE parameter. For more information, see Working with session limits . 10.2. Basic authentication Basic authentication is stateless, therefore, you must send the base64-encoded username and password along with each request through the Authorization header. You can use this for API calls from curl requests, python scripts, or individual requests to the API. We recommend OAuth 2 Token Authentication for accessing the API when at all possible. The following is an example of Basic authentication with curl: # the --user flag adds this Authorization header for us curl -X GET --user 'user:password' https://<controller-host>/api/v2/credentials -k -L Additional resources For more information about Basic authentication, see The 'Basic' HTTP Authentication Scheme . Disabling Basic authentication You can disable Basic authentication for security purposes. Procedure From the navigation panel, select Settings . Select Miscellaneous Authentication settings from the list of System options. Disable the option to Enable HTTP Basic Auth . 10.3. OAuth 2 token authentication OAuth (Open Authorization) is an open standard for token-based authentication and authorization. OAuth 2 authentication is commonly used when interacting with the automation controller API programmatically. Similar to Basic authentication, you are given an OAuth 2 token with each API request through the Authorization header. Unlike Basic authentication, OAuth 2 tokens have a configurable timeout and are scopable. Tokens have a configurable expiration time and can be easily revoked for one user or for the entire automation controller system by an administrator if needed. You can do this with the revoke_oauth2_tokens management command, or by using the API as explained in Revoke an access token . The different methods for obtaining OAuth2 access tokens in automation controller include the following: Personal access tokens (PAT) Application token: Password grant type Application token: Implicit grant type Application token: Authorization Code grant type A user needs to create an OAuth 2 token in the API or in the Users > Tokens tab of the automation controller UI. For more information about creating tokens through the UI, see Users - Tokens . For the purpose of this example, use the PAT method for creating a token in the API. After you create it, you can set the scope. Note You can configure the expiration time of the token system-wide. For more information, see Using OAuth 2 Token System for Personal Access Tokens . Token authentication is best used for any programmatic use of the automation controller API, such as Python scripts or tools such as curl. Curl example This call returns JSON data with the following: You can use the value of the token property to perform a GET request for an automation controller resource, such as Hosts: curl -k -X POST \ -H "Content-Type: application/json" -H "Authorization: Bearer <oauth2-token-value>" \ https://<controller-host>/api/v2/hosts/ You can also run a job by making a POST to the job template that you want to start: curl -k -X POST \ -H "Authorization: Bearer <oauth2-token-value>" \ -H "Content-Type: application/json" \ --data '{"limit" : "ansible"}' \ https://<controller-host>/api/v2/job_templates/14/launch/ Python example awxkit is an open source tool that makes it easy to use HTTP requests to access the automation controller API. You can have awxkit get a PAT on your behalf by using the awxkit login command. 
For more information, see AWX Command Line Interface . If you need to write custom requests, you can write a Python script by using the Python requests library, such as the following example: import json import requests oauth2_token_value = 'y1Q8ye4hPvT61aQq63Da6N1C25jiA' # your token value from controller url = 'https://<controller-host>/api/v2/users/' payload = {} headers = {'Authorization': 'Bearer ' + oauth2_token_value,} # makes request to controller user endpoint response = requests.request('GET', url, headers=headers, data=payload, allow_redirects=False, verify=False) # prints json returned from controller with formatting print(json.dumps(response.json(), indent=4, sort_keys=True)) Additional resources For more information about obtaining OAuth2 access tokens and how to use OAuth 2 in the context of external applications, see Token-Based Authentication in the Automation controller Administration Guide . Enabling external users to create OAuth 2 tokens By default, external users such as those created by single sign-on are not able to generate OAuth tokens for security purposes. Procedure From the navigation panel, select Settings . Select Miscellaneous Authentication settings from the list of System options. Enable the option to Allow External Users to Create OAuth2 Tokens . 10.4. Single sign-on authentication Single sign-on (SSO) authentication methods are different from other methods because the authentication of the user happens external to automation controller, such as Google SSO, Microsoft Azure SSO, SAML, or GitHub. For example, with GitHub SSO, GitHub is the single source of truth, which verifies your identity based on the username and password you gave automation controller. You can configure SSO authentication by using automation controller inside a large organization with a central Identity Provider. Once you have configured an SSO method in automation controller, an option for that SSO is available on the login screen. If you click that option, it redirects you to the Identity Provider, in this case GitHub, where you present your credentials. If the Identity Provider verifies you successfully, automation controller creates a user linked to your GitHub user (if this is your first time logging in with this SSO method), and logs you in. Additional resources For the various types of supported SSO authentication methods, see Setting up social authentication and Setting up enterprise authentication in the Automation controller Administration Guide .
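As a complement to the curl examples above, the following Python sketch walks the personal access token flow end to end: it creates a token by sending a Basic-authenticated POST to /api/v2/tokens/ and then reuses the returned token value as a Bearer token. The placeholder host and credentials, the optional description and scope fields, and the use of verify=False are assumptions for illustration; verify TLS and store credentials securely in a real environment.

# Minimal sketch of the PAT workflow described above: create a personal
# access token with Basic authentication, then reuse it as a Bearer token.
import json
import requests

CONTROLLER = "https://<controller-host>"     # placeholder host

# POST to /api/v2/tokens/ with Basic auth returns JSON that contains a
# "token" property, as shown in the curl example above.
resp = requests.post(
    "{}/api/v2/tokens/".format(CONTROLLER),
    auth=("user", "password"),                       # placeholder credentials
    json={"description": "example token", "scope": "read"},  # optional fields
    verify=False,
)
resp.raise_for_status()
token = resp.json()["token"]

# Use the token in the Authorization header for subsequent requests.
headers = {"Authorization": "Bearer {}".format(token)}
hosts = requests.get("{}/api/v2/hosts/".format(CONTROLLER), headers=headers, verify=False)
print(json.dumps(hosts.json(), indent=4, sort_keys=True))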
[ "curl -k -c - https://<controller-host>/api/login/ localhost FALSE / FALSE 0 csrftoken AswSFn5p1qQvaX4KoRZN6A5yer0Pq0VG2cXMTzZnzuhaY0L4tiidYqwf5PXZckuj", "curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' --referer https://<awx-host>/api/login/ -H 'X-CSRFToken: K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' --data 'username=root&password=reverse' --cookie 'csrftoken=K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' https://<awx-host>/api/login/ -k -D - -o /dev/null", "Server: nginx Date: <current date> Content-Type: text/html; charset=utf-8 Content-Length: 0 Connection: keep-alive Location: /accounts/profile/ X-API-Session-Cookie-Name: awx_sessionid Expires: <date> Cache-Control: max-age=0, no-cache, no-store, must-revalidate, private Vary: Cookie, Accept-Language, Origin Session-Timeout: 1800 Content-Language: en X-API-Total-Time: 0.377s X-API-Request-Id: 700826696425433fb0c8807cd40c00a0 Access-Control-Expose-Headers: X-API-Request-Id Set-Cookie: userLoggedIn=true; Path=/ Set-Cookie: current_user=<user cookie data>; Path=/ Set-Cookie: csrftoken=<csrftoken>; Path=/; SameSite=Lax Set-Cookie: awx_sessionid=<your session id>; expires=<date>; HttpOnly; Max-Age=1800; Path=/; SameSite=Lax Strict-Transport-Security: max-age=15768000", "the --user flag adds this Authorization header for us curl -X GET --user 'user:password' https://<controller-host>/api/v2/credentials -k -L", "curl -u user:password -k -X POST https://<controller-host>/api/v2/tokens/", "curl -k -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer <oauth2-token-value>\" https://<controller-host>/api/v2/hosts/", "curl -k -X POST -H \"Authorization: Bearer <oauth2-token-value>\" -H \"Content-Type: application/json\" --data '{\"limit\" : \"ansible\"}' https://<controller-host>/api/v2/job_templates/14/launch/", "import requests oauth2_token_value = 'y1Q8ye4hPvT61aQq63Da6N1C25jiA' # your token value from controller url = 'https://<controller-host>/api/v2/users/' payload = {} headers = {'Authorization': 'Bearer ' + oauth2_token_value,} makes request to controller user endpoint response = requests.request('GET', url, headers=headers, data=payload, allow_redirects=False, verify=False) prints json returned from controller with formatting print(json.dumps(response.json(), indent=4, sort_keys=True))" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_api_overview/controller-api-auth-methods
17.2. XML Representation of a Virtual Machine Template
17.2. XML Representation of a Virtual Machine Template Example 17.1. An XML representation of a virtual machine template
[ "<template href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <actions> <link href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000/export\" rel=\"export\"/> </actions> <name>Blank</name> <description>Blank template</description> <comment>Blank template</comment> <link href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000/disks\" rel=\"disks\"/> <link href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000/nics\" rel=\"nics\"/> <link href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000/cdroms\" rel=\"cdroms\"/> <link href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000/watchdogs\" rel=\"watchdogs\"/> <type>server</type> <status> <state>ok</state> </status> <memory>536870912</memory> <cpu> <topology sockets=\"1\" cores=\"1\"/> <architecture>X86_64<architecture/> </cpu> <cpu_shares>0</cpu_shares> <os type=\"rhel_6x64\"> <boot dev=\"hd\"/> <boot dev=\"cdrom\"/>; </os> <cluster id=\"00000000-0000-0000-0000-000000000000\" href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000\"/> <creation_time>2010-08-16T14:24:29</creation_time> <origin>ovirt</origin> <high_availability> <enabled>true</enabled> <priority>100</priority> </high_availability> <display> <type>spice</type> <monitors>1</monitors> <single_qxl_pci>false</single_qxl_pci> <allow_override>true</allow_override> <smartcard_enabled>true</smartcard_enabled> </display> <stateless>false</stateless> <delete_protected>false</delete_protected> <sso> <methods> <method id=\"GUEST_AGENT\">true</enabled> </methods> </sso> <usb> <enabled>true</enabled> </usb> <migration_downtime>-1</migration_downtime> <version> <base_template href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> <version_number>2</version_number> <version_name>RHEL65_TMPL_001</version_name> </version> </template>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_virtual_machine_template
Chapter 10. Using CPU Manager and Topology Manager
Chapter 10. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 10.1. Setting up CPU Manager To configure CPU manager, create a KubeletConfig custom resource (CR) and apply it to the desired set of nodes. Procedure Label a node by running the following command: # oc label node perf-node.example.com cpumanager=true To enable CPU Manager for all compute nodes, edit the CR by running the following command: # oc edit machineconfigpool worker Add the custom-kubelet: cpumanager-enabled label to metadata.labels section. metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config by running the following command: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. 
Check for the merged kubelet config by running the following command: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the compute node for the updated kubelet.conf file by running the following command: # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a project by running the following command: USD oc new-project <project_name> Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verification Verify that the pod is scheduled to the node that you labeled by running the following command: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that a CPU has been exclusively assigned to the pod by running the following command: # oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2 Example output NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process by running the following commands: # oc debug node/perf-node.example.com sh-4.2# systemctl status | grep -B5 pause Note If the output returns multiple pause process entries, you must identify the correct pause process. Example output # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Verify that pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice subdirectory by running the following commands: # cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "USDi "; cat USDi ; done Note Pods of other QoS tiers end up in child cgroups of the parent kubepods . 
Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task by running the following command: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod on the system cannot run on the core allocated for the Guaranteed pod. For example, to verify the pod in the besteffort QoS tier, run the following commands: # cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 10.2. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. 
This results in a pod in a Terminated state with a pod admission failure. 10.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: none , best-effort , restricted , single-numa-node . 10.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The following pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
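Because the Topology Manager and CPU Manager behavior above hinges on the pod's QoS class, the following Python sketch reproduces the classification that the three example pod specs illustrate. It is a simplified illustration (it counts only CPU and memory and ignores the defaulting of requests to limits); the kubelet performs the real classification.

# Minimal sketch of the QoS classification used above: no requests or limits
# -> BestEffort, CPU and memory requests equal to limits in every container
# -> Guaranteed, anything else -> Burstable.

def qos_class(pod_spec):
    resources = [c.get("resources", {}) for c in pod_spec.get("containers", [])]
    requests = [r.get("requests", {}) for r in resources]
    limits = [r.get("limits", {}) for r in resources]

    # BestEffort: no container sets any request or limit.
    if all(not req and not lim for req, lim in zip(requests, limits)):
        return "BestEffort"

    # Guaranteed: every container sets CPU and memory with request == limit.
    def is_guaranteed(req, lim):
        return all(name in req and name in lim and req[name] == lim[name]
                   for name in ("cpu", "memory"))

    if all(is_guaranteed(req, lim) for req, lim in zip(requests, limits)):
        return "Guaranteed"

    return "Burstable"

besteffort = {"containers": [{"name": "nginx", "image": "nginx"}]}
burstable = {"containers": [{"name": "nginx", "resources": {
    "limits": {"memory": "200Mi"}, "requests": {"memory": "100Mi"}}}]}
guaranteed = {"containers": [{"name": "nginx", "resources": {
    "limits": {"cpu": "2", "memory": "200Mi"},
    "requests": {"cpu": "2", "memory": "200Mi"}}}]}

for spec in (besteffort, burstable, guaranteed):
    print(qos_class(spec))    # prints BestEffort, Burstable, Guaranteed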
[ "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc new-project <project_name>", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2", "NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m", "oc debug node/perf-node.example.com", "sh-4.2# systemctl status | grep -B5 pause", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope", "for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus", "oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/using-cpu-manager
Chapter 3. Installing and Uninstalling Identity Management Clients
Chapter 3. Installing and Uninstalling Identity Management Clients This chapter explains how to configure a system to join an Identity Management (IdM) domain as a client machine enrolled with a server. Note See Section 1.2, "The Identity Management Domain" for details on clients and servers in the IdM domain. 3.1. Prerequisites for Installing a Client DNS requirements Employ proper DNS delegation. For details on DNS requirements in IdM, see Section 2.1.5, "Host Name and DNS Configuration" . Do not alter the resolv.conf file on clients. Port requirements IdM clients connect to a number of ports on IdM servers to communicate with their services. These ports must be open on the IdM servers in the incoming direction. For more information on which ports IdM requires, see Section 2.1.6, "Port Requirements" . On a client, open these ports in the outgoing direction. If you are using a firewall that does not filter outgoing packets, such as firewalld , the ports are already available in the outgoing direction. Name Service Cache Daemon (NSCD) requirements Red Hat recommends disabling NSCD on Identity Management machines. Alternatively, if disabling NSCD is not possible, only enable NSCD for maps that SSSD does not cache. Both NSCD and the SSSD service perform caching, and problems can occur when systems use both services simultaneously. See the System-Level Authentication Guide for information on how to avoid conflicts between NSCD and SSSD. 3.1.1. Supported versions of RHEL for installing IdM clients An Identity Management (IdM) deployment in which IdM servers are running on the latest minor version of RHEL 7 supports clients that are running on the latest minor versions of: RHEL 7 RHEL 8 RHEL 9 Note If you are planning to make your IdM deployment FIPS-compliant, Red Hat strongly recommends migrating your environment to RHEL 9. RHEL 9 is the first major RHEL version certified for FIPS 140-3. 3.1.2. Prerequisites for Installing a Client in a FIPS Environment In environments set up using Red Hat Enterprise Linux 7.4 and later: You can configure a new client on a system with the Federal Information Processing Standard (FIPS) mode enabled. The installation script automatically detects a system with FIPS enabled and configures IdM without the administrator's intervention. To enable FIPS in the operating system, see Enabling FIPS Mode in the Security Guide . In environments set up using Red Hat Enterprise Linux 7.3 and earlier: IdM does not support the FIPS mode. Disable FIPS on your system before installing an IdM client, and do not enable it after the installation. For further details about FIPS mode, see Federal Information Processing Standard (FIPS) in the Security Guide .
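To confirm the outgoing port requirements described above before enrollment, a simple client-side connectivity check can be useful. The following Python sketch is illustrative only; the host name is a placeholder and the port list is an assumption for the example, so take the authoritative list of required ports from the port requirements section referenced above.

# Minimal sketch: verify that a prospective IdM client can reach its server
# on the ports that must be open in the outgoing direction. The port list
# below is an assumed example; use the documented port requirements.
import socket

IDM_SERVER = "idm-server.example.com"     # placeholder host name
PORTS = [80, 443, 389, 636, 88, 464]      # assumed example TCP ports

def can_connect(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "open" if can_connect(IDM_SERVER, port) else "unreachable"
    print("{}:{} {}".format(IDM_SERVER, port, state))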
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/setting-up-clients
Virtualization Tuning and Optimization Guide
Virtualization Tuning and Optimization Guide Red Hat Enterprise Linux 7 Using KVM performance features for host systems and virtualized guests on RHEL Jiri Herrmann Red Hat Customer Content Services Yehuda Zimmerman Red Hat Customer Content Services Dayle Parker Red Hat Customer Content Services Scott Radvan Red Hat Customer Content Services Red Hat Subject Matter Experts
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/index
Chapter 4. Creating a router network
Chapter 4. Creating a router network To create a network of AMQ Interconnect routers, you define a deployment in an Interconnect Custom Resource, and then apply it. The AMQ Interconnect Operator creates the deployment by scheduling the necessary Pods and creating any needed Resources. The procedures in this section demonstrate the following router network topologies: Interior router mesh Interior router mesh with edge routers for scalability Inter-cluster router network that connects two OpenShift clusters Prerequisites The AMQ Interconnect Operator is installed in your OpenShift Container Platform project. 4.1. Creating an interior router deployment Interior routers establish connections with each other and automatically compute the lowest cost paths across the network. Procedure This procedure creates an interior router network of three routers. The routers automatically connect to each other in a mesh topology, and their connections are secured with mutual SSL/TLS authentication. Create an Interconnect Custom Resource YAML file that describes the interior router deployment. Sample router-mesh.yaml file apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: deploymentPlan: role: interior 1 size: 3 2 placement: Any 3 1 The operating mode of the routers in the deployment. The Operator will automatically connect interior routers in a mesh topology. 2 The number of routers to create. 3 Each router runs in a separate Pod. The placement defines where in the cluster the Operator should schedule and place the Pods. You can choose the following placement options: Any The Pods can run on any node in the OpenShift Container Platform cluster. Every The Operator places a router Pod on each node in the cluster. If you choose this option, the Size property is not needed - the number of routers corresponds to the number of nodes in the cluster. Anti-Affinity The Operator ensures that multiple router Pods do not run on the same node in the cluster. If the size is greater than the number of nodes in the cluster, the extra Pods that cannot be scheduled will remain in a Pending state. Create the router deployment described in the YAML file. The Operator creates a deployment of interior routers in a mesh topology that uses default address semantics. It also creates a Service through which the routers can be accessed, and a Route through which you can access the web console. Verify that the router mesh was created and the Pods are running. Each router runs in a separate Pod. They connect to each other automatically using the Service that the Operator created. Review the router deployment. 1 The default address configuration. All messages sent to an address that does not match any of these prefixes are distributed in a balanced anycast pattern . 2 A router mesh of three interior routers was deployed. 3 Each interior router listens on port 45672 for connections from edge routers. 4 The interior routers connect to each other on port 55671 . These inter-router connections are secured with SSL/TLS mutual authentication. The inter-router SSL Profile contains the details of the certificates that the Operator generated. 5 Each interior router listens for connections from external clients on the following ports: 5672 - Unsecure connections from messaging applications. 5671 - Secure connections from messaging applications. 8080 - AMQ Interconnect web console access. Default user name/password security is applied. 
6 Using the Red Hat Integration - AMQ Certificate Manager Operator, the Red Hat Integration - AMQ Interconnect automatically creates two SSL profiles: inter-router - The Operator secures the inter-router network with mutual TLS authentication by creating a Certificate Authority (CA) and generating certificates signed by the CA for each interior router. default - The Operator creates TLS certificates for messaging applications to connect to the interior routers on port 5671 . 7 The AMQ Interconnect web console is secured with user name/password authentication. The Operator automatically generates the credentials and stores them in the router-mesh-users Secret. 4.2. Creating an edge router deployment You can efficiently scale your router network by adding an edge router deployment. Edge routers act as connection concentrators for messaging applications. Each edge router maintains a single uplink connection to an interior router, and messaging applications connect to the edge routers to send and receive messages. Prerequisites The interior router mesh is deployed. For more information, see Section 4.1, "Creating an interior router deployment" . Procedure This procedure creates an edge router on each node of the OpenShift Container Platform cluster and connects them to the previously created interior router mesh. Create an Interconnect Custom Resource YAML file that describes the edge router deployment. Sample edge-routers.yaml file apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: edge-routers spec: deploymentPlan: role: edge placement: Every 1 edgeConnectors: 2 - host: router-mesh 3 port: 45672 4 1 An edge router Pod will be deployed on each node in the OpenShift Container Platform cluster. This placement helps to balance messaging application traffic across the cluster. The Operator will create a DaemonSet to ensure that the number of Pods scheduled always corresponds to the number of nodes in the cluster. 2 Edge connectors define the connections from the edge routers to the interior routers. 3 The name of the Service that was created for the interior routers. 4 The port on which the interior routers listen for edge connections. The default is 45672 . Create the edge routers described in the YAML file: The Operator deploys an edge router on each node of the OpenShift Container Platform cluster, and connects them to the interior routers. Verify that the edge routers were created and the Pods are running. Each router runs in a separate Pod. Each edge router connects to any of the previously created interior routers. 4.3. Creating an inter-cluster router network Depending on whether you are using AMQ Certificate Manager, there are different procedures for creating an inter-cluster router network. Section 4.3.1, "Creating an inter-cluster router network using a Certificate Authority" Section 4.3.2, "Creating an inter-cluster router network using AMQ Certificate Manager" 4.3.1. Creating an inter-cluster router network using a Certificate Authority You can create a router network from routers running in different OpenShift Container Platform clusters. This enables you to connect applications running in separate clusters. Prerequisites You have already created secrets defining an existing certificate for each router. Procedure This procedure creates router deployments in two different OpenShift Container Platform clusters ( cluster1 and cluster2 ) and connects them together to form an inter-cluster router network. 
The connection between the router deployments is secured with SSL/TLS mutual authentication. In the first OpenShift Container Platform cluster ( cluster1 ), create an Interconnect Custom Resource YAML file that describes the interior router deployment. This example creates a single interior router with a default configuration. Sample cluster1-router-mesh.yaml file apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster1-router-mesh spec: interRouterListeners: - authenticatePeer: true 1 host: 0.0.0.0 2 port: 55672 3 saslMechanisms: EXTERNAL 4 sslProfile: inter-router-profile 5 expose: true 6 sslProfiles: - caCert: inter-router-certs-secret 7 credentials: inter-router-certs-secret 8 name: inter-router-profile 9 1 authenticatePeer must be set to true to authenticate using TLS certificates 2 listener host 3 listener port 4 SASL mechanism to authenticate, use EXTERNAL for TLS certificates 5 ssl-profile name to use for authenticating clients 6 exposes a route so that the port is accessible from outside the cluster 7 name of cluster secret or your CA containing a ca.crt name (in case you're using the same secret used in credentials, otherwise it must have a tls.crt) 8 name of cluster secret with the CA certificate containing tls.crt and tls.key files 9 ssl-profile name to use for the interRouterListener Create the router deployment described in the YAML file. The Red Hat Integration - AMQ Interconnect creates an interior router with a default configuration and a listener to authenticate other routers. Log in to the second OpenShift Container Platform cluster ( cluster2 ), and switch to the project where you want to create the second router deployment. In cluster2 , create an Interconnect Custom Resource YAML file to describe the router deployment. apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster2-router-mesh spec: sslProfiles: - name: inter-router-profile 1 credentials: inter-router-certs-secret caCert: inter-router-certs-secret interRouterConnectors: - host: cluster1-router-mesh-port-55672-myproject.cluster1.openshift.com 2 port: 443 verifyHostname: false sslProfile: inter-router-profile name: cluster1 1 This SSL Profile defines the certificate needed to connect to the router deployment in cluster1 . 2 The URL of the Route for the inter-router listener on cluster1 . Create the router deployment described in the YAML file. Verify that the routers are connected. This example displays the connections from the router in cluster2 to the router in cluster1 . 4.3.2. Creating an inter-cluster router network using AMQ Certificate Manager You can create a router network from routers running in different OpenShift Container Platform clusters. This enables you to connect applications running in separate clusters. Procedure This procedure creates router deployments in two different OpenShift Container Platform clusters ( cluster1 and cluster2 ) and connects them together to form an inter-cluster router network. The connection between the router deployments is secured with SSL/TLS mutual authentication. In the first OpenShift Container Platform cluster ( cluster1 ), create an Interconnect Custom Resource YAML file that describes the interior router deployment. This example creates a single interior router with a default configuration. 
Sample cluster1-router-mesh.yaml file apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster1-router-mesh spec: {} Create the router deployment described in the YAML file. The Red Hat Integration - AMQ Interconnect creates an interior router with a default configuration. It uses the Red Hat Integration - AMQ Certificate Manager Operator to create a Certificate Authority (CA) and generate a certificate signed by the CA. Generate an additional certificate for the router deployment in the second OpenShift Container Platform cluster ( cluster2 ). The router deployment in cluster2 requires a certificate issued by the CA of cluster1 . Create a Certificate Custom Resource YAML file to request a certificate. Sample certificate-request.yaml file apiVersion: certmanager.k8s.io/v1alpha1 kind: Certificate metadata: name: cluster2-inter-router-tls spec: commonName: cluster1-router-mesh-myproject.cluster2.openshift.com issuerRef: name: cluster1-router-mesh-inter-router-ca 1 secretName: cluster2-inter-router-tls-secret --- 1 The name of the Issuer that created the inter-router CA for cluster1 . By default, the name of the Issuer is < application-name >-inter-router-ca . Create the certificate described in the YAML file. Extract the certificate that you generated. Log in to the second OpenShift Container Platform cluster ( cluster2 ), and switch to the project where you want to create the second router deployment. In cluster2 , create a Secret containing the certificate that you generated. In cluster2 , create an Interconnect Custom Resource YAML file to describe the router deployment. apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster2-router-mesh spec: sslProfiles: - name: inter-cluster-tls 1 credentials: cluster2-inter-router-tls-secret caCert: cluster2-inter-router-tls-secret interRouterConnectors: - host: cluster1-router-mesh-port-55671-myproject.cluster1.openshift.com 2 port: 443 verifyHostname: false sslProfile: inter-cluster-tls 1 This SSL Profile defines the certificate needed to connect to the router deployment in cluster1 . 2 The URL of the Route for the inter-router listener on cluster1 . Create the router deployment described in the YAML file. Verify that the routers are connected. This example displays the connections from the router in cluster2 to the router in cluster1 .
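Beyond inspecting individual connections with qdstat -c , you can confirm that the topology built in the earlier sections looks as expected by listing the router nodes that an interior router knows about. The following is a minimal sketch that reuses a Pod name from the earlier example output; substitute one of your own interior router Pods.

oc exec router-mesh-6b48f89bd-588r5 -it -- qdstat -n

Each interior router in the mesh should appear in the output; edge routers maintain only uplink connections and are not expected to be listed as routed network nodes.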
[ "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: deploymentPlan: role: interior 1 size: 3 2 placement: Any 3", "oc apply -f router-mesh.yaml", "oc get pods NAME READY STATUS RESTARTS AGE interconnect-operator-587f94784b-4bzdx 1/1 Running 0 52m router-mesh-6b48f89bd-588r5 1/1 Running 0 40m router-mesh-6b48f89bd-bdjc4 1/1 Running 0 40m router-mesh-6b48f89bd-h6d5r 1/1 Running 0 40m", "oc get interconnect/router-mesh -o yaml apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect spec: addresses: 1 - distribution: closest prefix: closest - distribution: multicast prefix: multicast - distribution: closest prefix: unicast - distribution: closest prefix: exclusive - distribution: multicast prefix: broadcast deploymentPlan: 2 livenessPort: 8888 placement: Any resources: {} role: interior size: 3 edgeListeners: 3 - port: 45672 interRouterListeners: 4 - authenticatePeer: true expose: true port: 55671 saslMechanisms: EXTERNAL sslProfile: inter-router listeners: 5 - port: 5672 - authenticatePeer: true expose: true http: true port: 8080 - port: 5671 sslProfile: default sslProfiles: 6 - credentials: router-mesh-default-tls name: default - caCert: router-mesh-inter-router-tls credentials: router-mesh-inter-router-tls mutualAuth: true name: inter-router users: router-mesh-users 7", "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: edge-routers spec: deploymentPlan: role: edge placement: Every 1 edgeConnectors: 2 - host: router-mesh 3 port: 45672 4", "oc apply -f edge-routers.yaml", "oc get pods NAME READY STATUS RESTARTS AGE edge-routers-2jz5j 1/1 Running 0 33s edge-routers-fhlxv 1/1 Running 0 33s edge-routers-gg2qb 1/1 Running 0 33s edge-routers-hj72t 1/1 Running 0 33s interconnect-operator-587f94784b-4bzdx 1/1 Running 0 54m router-mesh-6b48f89bd-588r5 1/1 Running 0 42m router-mesh-6b48f89bd-bdjc4 1/1 Running 0 42m router-mesh-6b48f89bd-h6d5r 1/1 Running 0 42m", "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster1-router-mesh spec: interRouterListeners: - authenticatePeer: true 1 host: 0.0.0.0 2 port: 55672 3 saslMechanisms: EXTERNAL 4 sslProfile: inter-router-profile 5 expose: true 6 sslProfiles: - caCert: inter-router-certs-secret 7 credentials: inter-router-certs-secret 8 name: inter-router-profile 9", "oc apply -f cluster1-router-mesh.yaml", "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster2-router-mesh spec: sslProfiles: - name: inter-router-profile 1 credentials: inter-router-certs-secret caCert: inter-router-certs-secret interRouterConnectors: - host: cluster1-router-mesh-port-55672-myproject.cluster1.openshift.com 2 port: 443 verifyHostname: false sslProfile: inter-router-profile name: cluster1", "oc apply -f cluster2-router-mesh.yaml", "oc exec cluster2-fb6bc5797-crvb6 -it -- qdstat -c Connections id host container role dir security authentication tenant ==================================================================================================================================================================================================== 1 cluster1-router-mesh-port-55672-myproject.cluster1.openshift.com:443 cluster1-router-mesh-54cffd9967-9h4vq inter-router out TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) x.509", "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster1-router-mesh spec: {}", "oc apply -f cluster1-router-mesh.yaml", "apiVersion: 
certmanager.k8s.io/v1alpha1 kind: Certificate metadata: name: cluster2-inter-router-tls spec: commonName: cluster1-router-mesh-myproject.cluster2.openshift.com issuerRef: name: cluster1-router-mesh-inter-router-ca 1 secretName: cluster2-inter-router-tls-secret ---", "oc apply -f certificate-request.yaml", "mkdir /tmp/cluster2-inter-router-tls oc extract secret/cluster2-inter-router-tls-secret --to=/tmp/cluster2-inter-router-tls", "oc create secret generic cluster2-inter-router-tls-secret --from-file=/tmp/cluster2-inter-router-tls", "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: cluster2-router-mesh spec: sslProfiles: - name: inter-cluster-tls 1 credentials: cluster2-inter-router-tls-secret caCert: cluster2-inter-router-tls-secret interRouterConnectors: - host: cluster1-router-mesh-port-55671-myproject.cluster1.openshift.com 2 port: 443 verifyHostname: false sslProfile: inter-cluster-tls", "oc apply -f cluster2-router-mesh.yaml", "oc exec cluster2-fb6bc5797-crvb6 -it -- qdstat -c Connections id host container role dir security authentication tenant ==================================================================================================================================================================================================== 1 cluster1-router-mesh-port-55671-myproject.cluster1.openshift.com:443 cluster1-router-mesh-54cffd9967-9h4vq inter-router out TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) x.509" ]
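When filling in the interRouterConnectors host for cluster2 in the examples above, you can look up the Route that the Operator exposed on cluster1 instead of constructing the URL by hand. This is a minimal sketch; the Route name is a placeholder, so list the Routes and pick the one that exposes the inter-router port ( 55671 or 55672 ).

oc get routes
oc get route <inter-router-route-name> -o jsonpath='{.spec.host}'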
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_interconnect_on_openshift/creating-router-network-router-ocp
8.10. Configure a Network Team Using the Command Line
8.10. Configure a Network Team Using the Command Line 8.10.1. Configure Network Teaming Using nmcli To view the connections available on the system: To view the devices available on the system: To create a new team interface, with the name ServerA : NetworkManager will set its internal parameter connection.autoconnect to yes and, as no IP address was given, ipv4.method will be set to auto . NetworkManager will also write a configuration file to /etc/sysconfig/network-scripts/ifcfg-team-ServerA where the corresponding ONBOOT will be set to yes and BOOTPROTO will be set to dhcp . Note that manual changes to the ifcfg file will not be noticed by NetworkManager until the interface is brought up. See Section 2.7, "Using NetworkManager with sysconfig files" for more information on using configuration files. To view the other values assigned: As no JSON configuration file was specified, the default values apply. See the teamd.conf(5) man page for more information on the team JSON parameters and their default values. Notice that the name was derived from the interface name by prepending the type. Alternatively, specify a name with the con-name option as follows: To view the team interfaces just configured, enter a command as follows: To change the name assigned to a team, enter a command in the following format: nmcli con mod old-team-name connection.id new-team-name To load a team configuration file for a team that already exists: nmcli connection modify team-name team.config JSON-config You can specify the team configuration either as a JSON string or provide a file name containing the configuration. The file name can include the path. In both cases, what is stored in the team.config property is the JSON string. In the case of a JSON string, use single quotes around the string and paste the entire string to the command line. To review the team.config property: nmcli con show team-name | grep team.config When the team.config property is set, all the other team properties are updated accordingly. It is also possible to expose and set particular team options in a more flexible way, without directly modifying the corresponding JSON string. You can do this by using the other available team properties to set the related team options one by one to the required values. As a result, the team.config property is updated to match the new values. For example, to set the team.link-watchers property, which allows you to specify one or more link-watchers , enter a command in the following format: nmcli connection modify team-name team.link-watchers "name= ethtool delay-up= 5 , name= nsna_ping target-host= target.host " The required link-watchers are separated by commas, and the attributes that belong to the same link-watcher are separated by spaces. To set the team.runner and the team.link-watchers properties, enter a command in the following format: nmcli connection modify team-name team.runner activebackup team.link-watchers "name= ethtool delay-up= 5 , name= nsna_ping target-host= target.host " This is equivalent to setting the team.config property to the corresponding JSON string: nmcli connection modify team-name team.config '{"runner": {"name": "activebackup"}, "link_watch": [{"name": "ethtool", "delay_up": 5},{"name": "nsna_ping", "target_host": "target.host"}]}' To add an interface enp1s0 to Team0 , with the name Team0-port1 , issue a command as follows: Similarly, to add another interface, enp2s0 , with the name Team0-port2 , issue a command as follows: nmcli only supports Ethernet ports.
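As a file-based variant of the team.config examples above, the JSON configuration can also be kept in a file and loaded by name. This is a minimal sketch; the file name and path are arbitrary and the option values are illustrative rather than defaults.

cat > ~/team0-config.json <<'EOF'
{
  "runner": {"name": "activebackup"},
  "link_watch": {"name": "ethtool"}
}
EOF
nmcli connection modify Team0 team.config ~/team0-config.json
nmcli con show Team0 | grep team.config

After the modification, the team.config property holds the JSON string read from the file, and the related properties such as team.runner are updated to match it.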
To bring up a team, the ports must be brought up first as follows: You can verify that the team interface was brought up by the activation of the ports, as follows: Alternatively, issue a command to bring up the team as follows: See Section 3.3, "Configuring IP Networking with nmcli" for an introduction to nmcli . 8.10.2. Creating a Network Team Using teamd Note Configurations created using teamd are not persistent, and as such it may be necessary to create a team using the steps defined in Section 8.10.1, "Configure Network Teaming Using nmcli" or Section 8.10.3, "Creating a Network Team Using ifcfg Files" . To create a network team, a JSON format configuration file is required for the virtual interface that will serve as the interface to the team of ports or links. A quick way is to copy the example configuration files and then edit them using an editor running with root privileges. To list the available example configurations, enter the following command: To view one of the included files, such as activebackup_ethtool_1.conf , enter the following command: Create a working configurations directory to store teamd configuration files. For example, as a normal user, enter a command with the following format: Copy the file you have chosen to your working directory and edit it as necessary. As an example, you could use a command with the following format: To edit the file to suit your environment, for example to change the interfaces to be used as ports for the network team, open the file for editing as follows: Make any necessary changes and save the file. See the vi(1) man page for help on using the vi editor, or use your preferred editor. Note that it is essential that the interfaces to be used as ports within the team are not active; that is to say, they must be " down " when adding them into a team device. To check their status, issue the following command: In this example, we see that both the interfaces we plan to use are " UP " . To take down an interface, issue a command as root in the following format: Repeat for each interface as necessary. To create a team interface based on the configuration file, as the root user, change to the working configurations directory ( teamd_working_configs in this example): Then issue a command in the following format: The -g option is for debug messages, the -f option specifies the configuration file to load, and the -d option makes the process run as a daemon after startup. See the teamd(8) man page for other options. To check the status of the team, issue the following command as root : To apply an address to the network team interface, team0 , issue a command as root in the following format: To check the IP address of a team interface, issue a command as follows: To activate the team interface, or to bring it " up " , issue a command as root in the following format: To temporarily deactivate the team interface, or to take it " down " , issue a command as root in the following format: To terminate, or kill, an instance of the team daemon, as the root user, issue a command in the following format: The -k option specifies that the instance of the daemon associated with the device team0 is to be killed. See the teamd(8) man page for other options. For help on command-line options for teamd , issue the following command: In addition, see the teamd(8) man page. 8.10.3.
Creating a Network Team Using ifcfg Files To create a networking team using ifcfg files, create a file in the /etc/sysconfig/network-scripts/ directory as follows: This creates the interface to the team, in other words, this is the master . To create a port to be a member of team0 , create one or more files in the /etc/sysconfig/network-scripts/ directory as follows: Add additional port interfaces similar to the above as required, changing the DEVICE and HWADDR fields to match the ports (the network devices) being added. If port priority is not specified by prio , it defaults to 0 ; it accepts negative and positive values in the range -32,767 to +32,767 . Specifying the hardware or MAC address using the HWADDR directive will influence the device naming procedure. This is explained in Chapter 11, Consistent Network Device Naming . To bring up the network team, issue the following command as root : To view the network team, issue the following command: 8.10.4. Add a Port to a Network Team Using iputils To add a port em1 to a network team team0 , using the ip utility, issue the following commands as root : Add additional ports as required. The team driver will bring the ports up automatically. 8.10.5. Listing the Ports of a Team Using teamnl To view or list the ports in a network team, using the teamnl utility, issue the following command as root : 8.10.6. Configuring Options of a Team Using teamnl To view or list all currently available options, using the teamnl utility, issue the following command as root : To configure a team to use active backup mode, issue the following command as root : 8.10.7. Add an Address to a Network Team Using iputils To add an address to a team team0 , using the ip utility, issue the following command as root : 8.10.8. Bring Up an Interface to a Network Team Using iputils To activate, or " bring up " , an interface to a network team, team0 , using the ip utility, issue the following command as root : 8.10.9. Viewing the Active Port Options of a Team Using teamnl To view or list the activeport option in a network team, using the teamnl utility, issue the following command as root : 8.10.10. Setting the Active Port Options of a Team Using teamnl To set the activeport option in a network team, using the teamnl utility, issue the following command as root : To check the change in team port options, issue the following command as root :
[ "~]USD nmcli connection show NAME UUID TYPE DEVICE enp2s0 0e8185a1-f0fd-4802-99fb-bedbb31c689b 802-3-ethernet -- enp1s0 dfe1f57b-419d-4d1c-aaf5-245deab82487 802-3-ethernet --", "~]USD nmcli device status DEVICE TYPE STATE CONNECTION virbr0 bridge connected virbr0 ens3 ethernet connected ens3", "~]USD nmcli connection add type team ifname ServerA Connection 'team-ServerA' (b954c62f-5fdd-4339-97b0-40efac734c50) successfully added.", "~]USD nmcli con show team-ServerA connection.id: team-ServerA connection.uuid: b954c62f-5fdd-4339-97b0-40efac734c50 connection.interface-name: ServerA connection.type: team connection.autoconnect: yes ... ipv4.method: auto [output truncated]", "~]USD nmcli connection add type team con-name Team0 ifname ServerB Connection 'Team0' (5f7160a1-09f6-4204-8ff0-6d96a91218a7) successfully added.", "~]USD nmcli con show NAME UUID TYPE DEVICE team-ServerA b954c62f-5fdd-4339-97b0-40efac734c50 team ServerA enp2s0 0e8185a1-f0fd-4802-99fb-bedbb31c689b 802-3-ethernet -- enp1s0 dfe1f57b-419d-4d1c-aaf5-245deab82487 802-3-ethernet -- Team0 5f7160a1-09f6-4204-8ff0-6d96a91218a7 team ServerB", "~]USD nmcli con add type ethernet con-name Team0-port1 ifname enp1s0 slave-type team master Team0 Connection 'Team0-port1' (ccd87704-c866-459e-8fe7-01b06cf1cffc) successfully added.", "~]USD nmcli con add type ethernet con-name Team0-port2 ifname enp2s0 slave-type team master Team0 Connection 'Team0-port2' (a89ccff8-8202-411e-8ca6-2953b7db52dd) successfully added.", "~]USD nmcli connection up Team0-port1 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)", "~]USD nmcli connection up Team0-port2 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)", "~]USD ip link 3: Team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT link/ether 52:54:00:76:6f:f0 brd ff:ff:ff:ff:ff:f", "~]USD nmcli connection up Team0 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)", "~]USD ls /usr/share/doc/teamd-*/example_configs/ activebackup_arp_ping_1.conf activebackup_multi_lw_1.conf loadbalance_2.conf activebackup_arp_ping_2.conf activebackup_nsna_ping_1.conf loadbalance_3.conf activebackup_ethtool_1.conf broadcast.conf random.conf activebackup_ethtool_2.conf lacp_1.conf roundrobin_2.conf activebackup_ethtool_3.conf loadbalance_1.conf roundrobin.conf", "~]USD cat /usr/share/doc/teamd-*/example_configs/activebackup_ethtool_1.conf { \"device\": \"team0\", \"runner\": {\"name\": \"activebackup\"}, \"link_watch\": {\"name\": \"ethtool\"}, \"ports\": { \"enp1s0\": { \"prio\": -10, \"sticky\": true }, \"enp2s0\": { \"prio\": 100 } } }", "~]USD mkdir ~/ teamd_working_configs", "~]USD cp /usr/share/doc/teamd-*/example_configs/activebackup_ethtool_1.conf \\ ~/ teamd_working_configs /activebackup_ethtool_1.conf", "~]USD vi ~/ teamd_working_configs /activebackup_ethtool_1.conf", "~]USD ip link show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000 link/ether 52:54:00:d5:f7:d4 brd ff:ff:ff:ff:ff:ff 3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000 link/ether 52:54:00:d8:04:70 brd ff:ff:ff:ff:ff:ff", "~]# ip link set down em1", "~]# cd /home/ user teamd_working_configs", "~]# teamd -g -f 
activebackup_ethtool_1.conf -d Using team device \"team0\". Using PID file \"/var/run/teamd/team0.pid\" Using config file \"/home/ user /teamd_working_configs/activebackup_ethtool_1.conf\"", "~]# teamdctl team0 state setup: runner: activebackup ports: em1 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up em2 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up runner: active port: em1", "~]# ip addr add 192.168.23.2/24 dev team0", "~]USD ip addr show team0 4: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 16:38:57:60:20:6f brd ff:ff:ff:ff:ff:ff inet 192.168.23.2/24 scope global team0 valid_lft forever preferred_lft forever inet6 2620:52:0:221d:1438:57ff:fe60:206f/64 scope global dynamic valid_lft 2591880sec preferred_lft 604680sec inet6 fe80::1438:57ff:fe60:206f/64 scope link valid_lft forever preferred_lft forever", "~]# ip link set dev team0 up", "~]# ip link set dev team0 down", "~]# teamd -t team0 -k", "~]USD teamd -h", "DEVICE=team0 DEVICETYPE=Team ONBOOT=yes BOOTPROTO=none IPADDR=192.168.11.1 PREFIX=24 TEAM_CONFIG='{\"runner\": {\"name\": \"activebackup\"}, \"link_watch\": {\"name\": \"ethtool\"}}'", "DEVICE=enp1s0 HWADDR=D4:85:64:01:46:9E DEVICETYPE=TeamPort ONBOOT=yes TEAM_MASTER=team0 TEAM_PORT_CONFIG='{\"prio\": 100}'", "~]# ifup team0", "~]USD ip link show", "~]# ip link set dev em1 down ~]# ip link set dev em1 master team0", "~]# teamnl team0 ports em2: up 100 fullduplex em1: up 100 fullduplex", "~]# teamnl team0 options", "~]# teamnl team0 setoption mode activebackup", "~]# ip addr add 192.168.252.2/24 dev team0", "~]# ip link set team0 up", "~]# teamnl team0 getoption activeport 0", "~]# teamnl team0 setoption activeport 5", "~]# teamnl team0 getoption activeport 5" ]
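Taken together, the iputils and teamnl steps in Sections 8.10.4 to 8.10.10 form a short runtime workflow. The following sketch repeats the documented commands in order, with comments; the device and address values are the ones used in the examples above, and, as with teamd itself, changes made this way do not persist across a reboot unless they are also recorded in ifcfg files or in NetworkManager.

# Add a port to the team (the link must be down before it is added)
ip link set dev em1 down
ip link set dev em1 master team0
# Inspect and tune the running team
teamnl team0 ports
teamnl team0 options
teamnl team0 setoption mode activebackup
# Assign an address and bring the team interface up
ip addr add 192.168.252.2/24 dev team0
ip link set team0 up
# Select and confirm the active port
teamnl team0 setoption activeport 5
teamnl team0 getoption activeport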
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configure_a_Network_Team_Using-the_Command_Line
Chapter 55. Compiler and Tools
Chapter 55. Compiler and Tools Performance of regular expressions cannot be boosted with the JIT technique if executable stack is disabled When the SELinux policy disallows executable stack, the PCRE library cannot use JIT compilation to speed up regular expressions. As a result, attempts at JIT compilation of regular expressions are ignored and their performance is not boosted. To work around this problem, amend the SELinux policy with a rule that enables the execmem permission on the affected SELinux domains to enable JIT compilation. Some of the rules are already provided and can be enabled by specific SELinux booleans. To list these booleans, see the output of the following command: An alternative workaround is to change application code so that it does not request JIT compilation with calls to the pcre_study() function. (BZ# 1290432 ) Memory leaks occur when certain applications fail to exit after unloading the Gluster libraries Gluster consists of many internal components and different translators that implement functions and features. The gfapi access method was added to integrate Gluster tightly with applications. However, not all components and translators are designed to be unloaded in running applications. As a consequence, programs that do not exit after unloading the Gluster libraries are unable to release some of the memory allocations that are performed internally by Gluster. To reduce the number of memory leaks, prevent applications from calling the glfs_init() and glfs_fini() functions whenever possible. To release the leaked memory, you must restart long-running applications. (BZ# 1409773 ) URL to DISA SRGs is incorrect The SCAP Security Guide (SSG) rules refer to Defense Information Systems Agency Security Requirement Guides (DISA SRGs). Connecting to the URL fails with a 404 - not found error. As a consequence, users have no direct reference to SRGs. (BZ# 1464899 ) The ensure_gpgcheck_repo_metadata rule fails During remediation of the ensure_gpgcheck_repo_metadata rule, certain profiles update the yum.conf file to enable the repo_gpgcheck option. Red Hat does not currently provide signed repository metadata. As a consequence, the yum utility is no longer able to install any package from official repositories. To work around the problem, use a tailoring file to remove ensure_gpgcheck_repo_metadata from the profile. If remediation has already broken the system, update yum.conf and set repo_gpgcheck to 0 . (BZ# 1465677 ) The SSG pam_faillock module utilization check incorrectly accepts default=die The SCAP Security Guide (SSG) pam_faillock module utilization check incorrectly accepts the default=die option. Consequently, when user authentication using the pam_unix module fails, the PAM stack evaluation stops immediately without incrementing the counter of pam_faillock . To work around this problem, do not use default=die before the authfail option. This ensures that the pam_faillock counter is incremented properly. (BZ#1448952)
[ "getsebool -a | grep execmem" ]
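For example, if the application that needs PCRE JIT runs in the httpd_t domain and the httpd_execmem boolean appears in the list produced by the command above, the corresponding rule can be enabled persistently as root. The boolean name here is an assumption for illustration; use whichever boolean matches your application's domain.

setsebool -P httpd_execmem 1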
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_compiler_and_tools
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/red_hat_openshift_data_foundation_architecture/making-open-source-more-inclusive
Chapter 16. Image-based installation for single-node OpenShift
Chapter 16. Image-based installation for single-node OpenShift 16.1. Understanding image-based installation and deployment for single-node OpenShift Image-based installations significantly reduce the deployment time of single-node OpenShift clusters by streamlining the installation process. This approach enables the preinstallation of configured and validated instances of single-node OpenShift on target hosts. These preinstalled hosts can be rapidly reconfigured and deployed at the far edge of the network, including in disconnected environments, with minimal intervention. Note To deploy a managed cluster using an image-based approach in combination with GitOps Zero Touch Provisioning (ZTP), you can use the SiteConfig operator. For more information, see SiteConfig operator . 16.1.1. Overview of image-based installation and deployment for single-node OpenShift clusters Deploying infrastructure at the far edge of the network presents challenges for service providers with low bandwidth, high latency, and disconnected environments. It is also costly and time-consuming to install and deploy single-node OpenShift clusters. An image-based approach to installing and deploying single-node OpenShift clusters at the far edge of the network overcomes these challenges by separating the installation and deployment stages. Figure 16.1. Overview of an image-based installation and deployment for managed single-node OpenShift clusters Image-based installation Preinstall multiple hosts with single-node OpenShift at a central site, such as a service depot or a factory. Then, validate the base configuration for these hosts and leverage the image-based approach to perform reproducible factory installs at scale by using a single live installation ISO. Image-based deployment Ship the preinstalled and validated hosts to a remote site and rapidly reconfigure and deploy the clusters in a matter of minutes by using a configuration ISO. You can choose from two methods to preinstall and configure your single-node OpenShift clusters. Using the openshift-install program For a single-node OpenShift cluster, use the openshift-install program only to manually create the live installation ISO that is common to all hosts. Then, use the program again to create the configuration ISO, which ensures that the host is unique. For more information, see "Deploying managed single-node OpenShift using the openshift-install program". Using the IBI Operator For managed single-node OpenShift clusters, you can use the openshift-install program with the Image Based Install (IBI) Operator to scale up the operations. The program creates the live installation ISO and then the IBI Operator creates one configuration ISO for each host. For more information, see "Deploying single-node OpenShift using the IBI Operator". 16.1.1.1. Image-based installation for single-node OpenShift clusters Using the Lifecycle Agent, you can generate an OCI container image that encapsulates an instance of a single-node OpenShift cluster. This image is derived from a dedicated cluster that you can configure with the target OpenShift Container Platform version. You can reference this image in a live installation ISO to consistently preinstall configured and validated instances of single-node OpenShift to multiple hosts. This approach enables the preparation of hosts at a central location, for example in a factory or service depot, before shipping the preinstalled hosts to a remote site for rapid reconfiguration and deployment.
The instructions for preinstalling a host are the same whether you deploy the host by using only the openshift-install program or using the program with the IBI Operator. The following is a high-level overview of the image-based installation process: Generate an image from a single-node OpenShift cluster. Use the openshift-install program to embed the seed image URL, and other installation artifacts, in a live installation ISO. Start the host using the live installation ISO to preinstall the host. During this process, the openshift-install program installs Red Hat Enterprise Linux CoreOS (RHCOS) to the disk, pulls the image you generated, and precaches release container images to the disk. When the installation completes, the host is ready to ship to the remote site for rapid reconfiguration and deployment. 16.1.1.2. Image-based deployment for single-node OpenShift clusters You can use the openshift-install program or the IBI Operator to configure and deploy a host that you preinstalled with an image-based installation. Single-node OpenShift cluster deployment To configure the target host with site-specific details by using the openshift-install program, you must create the following resources: The install-config.yaml installation manifest The image-based-config.yaml manifest The openshift-install program uses these resources to generate a configuration ISO that you attach to the preinstalled target host to complete the deployment. Managed single-node OpenShift cluster deployment Red Hat Advanced Cluster Management (RHACM) and the multicluster engine for Kubernetes Operator (MCE) use a hub-and-spoke architecture to manage and deploy single-node OpenShift clusters across multiple sites. Using this approach, the hub cluster serves as a central control plane that manages the spoke clusters, which are often remote single-node OpenShift clusters deployed at the far edge of the network. You can define the site-specific configuration resources for an image-based deployment in the hub cluster. The IBI Operator uses these configuration resources to reconfigure the preinstalled host at the remote site and deploy the host as a managed single-node OpenShift cluster. This approach is especially beneficial for telecommunications providers and other service providers with extensive, distributed infrastructures, where an end-to-end installation at the remote site would be time-consuming and costly. The following is a high-level overview of the image-based deployment process for hosts preinstalled with an imaged-based installation: Define the site-specific configuration resources for the preinstalled host in the hub cluster. Apply these resources in the hub cluster. This initiates the deployment process. The IBI Operator creates a configuration ISO. The IBI Operator boots the target preinstalled host with the configuration ISO attached. The host mounts the configuration ISO and begins the reconfiguration process. When the reconfiguration completes, the single-node OpenShift cluster is ready. As the host is already preinstalled using an image-based installation, a technician can reconfigure and deploy the host in a matter of minutes. 16.1.2. Image-based installation and deployment components The following content describes the components in an image-based installation and deployment. Seed image OCI container image generated from a dedicated cluster with the target OpenShift Container Platform version. 
Seed cluster Dedicated single-node OpenShift cluster that you use to create a seed image and is deployed with the target OpenShift Container Platform version. Lifecycle Agent Generates the seed image. Image Based Install (IBI) Operator When you deploy managed clusters, the IBI Operator creates a configuration ISO from the site-specific resources you define in the hub cluster, and attaches the configuration ISO to the preinstalled host by using a bare-metal provisioning service. openshift-install program Creates the installation and configuration ISO, and embeds the seed image URL in the live installation ISO. If the IBI Operator is not used, you must manually attach the configuration ISO to a preinstalled host to complete the deployment. Additional resources Deploying a single-node OpenShift cluster using the openshift-install program 16.1.3. Cluster guidelines for image-based installation and deployment For a successful image-based installation and deployment, see the following guidelines. 16.1.3.1. Cluster guidelines If you are using Red Hat Advanced Cluster Management (RHACM), to avoid including any RHACM resources in your seed image, you need to disable all optional RHACM add-ons before generating the seed image. 16.1.3.2. Seed cluster guidelines If your cluster deployment at the edge of the network requires a proxy configuration, you must create a seed image from a seed cluster featuring a proxy configuration. The proxy configurations do not have to match. If you set a maximum transmission unit (MTU) in the seed cluster, you must set the same MTU value in the static network configuration for the image-based configuration ISO. Your single-node OpenShift seed cluster must have a shared /var/lib/containers directory for precaching images during an image-based installation. For more information see "Configuring a shared container partition between ostree stateroots". Create a seed image from a single-node OpenShift cluster that uses the same hardware as your target bare-metal host. The seed cluster must reflect your target cluster configuration for the following items: CPU topology CPU architecture Number of CPU cores Tuned performance configuration, such as number of reserved CPUs IP version Note Dual-stack networking is not supported in this release. Disconnected registry Note If the target cluster uses a disconnected registry, your seed cluster must use a disconnected registry. The registries do not have to be the same. FIPS configuration Additional resources Configuring a shared container partition between ostree stateroots 16.1.4. Software prerequisites for an image-based installation and deployment An image-based installation and deployment requires the following minimum software versions for these required components. Table 16.1. Minimum software requirements Component Software version Managed cluster version 4.17 Hub cluster version 4.16 Red Hat Advanced Cluster Management (RHACM) 2.12 Lifecycle Agent 4.16 or later Image Based Install Operator 4.17 openshift-install program 4.17 Additional resources Multicluster architecture Understanding the image-based upgrade for single-node OpenShift clusters 16.2. Preparing for image-based installation for single-node OpenShift clusters To prepare for an image-based installation for single-node OpenShift clusters, you must complete the following tasks: Create a seed image by using the Lifecycle Agent. Verify that all software components meet the required versions. 
For further information, see "Software prerequisites for an image-based installation and deployment". Additional resources Software prerequisites for an image-based installation and deployment 16.2.1. Installing the Lifecycle Agent Use the Lifecycle Agent to generate a seed image from a seed cluster. You can install the Lifecycle Agent using the OpenShift CLI ( oc ) or the web console. 16.2.1.1. Installing the Lifecycle Agent by using the CLI You can use the OpenShift CLI ( oc ) to install the Lifecycle Agent. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a Namespace object YAML file for the Lifecycle Agent: apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management Create the Namespace CR by running the following command: USD oc create -f <namespace_filename>.yaml Create an OperatorGroup object YAML file for the Lifecycle Agent: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-lifecycle-agent namespace: openshift-lifecycle-agent spec: targetNamespaces: - openshift-lifecycle-agent Create the OperatorGroup CR by running the following command: USD oc create -f <operatorgroup_filename>.yaml Create a Subscription CR for the Lifecycle Agent: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-lifecycle-agent-subscription namespace: openshift-lifecycle-agent spec: channel: "stable" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f <subscription_filename>.yaml Verification To verify that the installation succeeded, inspect the CSV resource by running the following command: USD oc get csv -n openshift-lifecycle-agent Example output NAME DISPLAY VERSION REPLACES PHASE lifecycle-agent.v4.17.0 Openshift Lifecycle Agent 4.17.0 Succeeded Verify that the Lifecycle Agent is up and running by running the following command: USD oc get deploy -n openshift-lifecycle-agent Example output NAME READY UP-TO-DATE AVAILABLE AGE lifecycle-agent-controller-manager 1/1 1 1 14s 16.2.1.2. Installing the Lifecycle Agent by using the web console You can use the OpenShift Container Platform web console to install the Lifecycle Agent. Prerequisites You have logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Lifecycle Agent from the list of available Operators, and then click Install . On the Install Operator page, under A specific namespace on the cluster select openshift-lifecycle-agent . Click Install . Verification To confirm that the installation is successful: Click Operators Installed Operators . Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator is not installed successfully: Click Operators Installed Operators , and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Click Workloads Pods , and check the logs for pods in the openshift-lifecycle-agent project. 16.2.2. 
Configuring a shared container partition between ostree stateroots Important You must complete this procedure at installation time. Apply a MachineConfig to the seed cluster to create a separate partition and share the /var/lib/containers partition between the two ostree stateroots that will be used during the preinstall process. Procedure Apply a MachineConfig to create a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-containers-partitioned spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 partitions: - label: var-lib-containers startMiB: <start_of_partition> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var-lib-containers format: xfs mountOptions: - defaults - prjquota path: /var/lib/containers wipeFilesystem: true systemd: units: - contents: |- # Generated by Butane [Unit] Before=local-fs.target Requires=systemd-fsck@dev-disk-by\x2dpartlabel-var\x2dlib\x2dcontainers.service After=systemd-fsck@dev-disk-by\x2dpartlabel-var\x2dlib\x2dcontainers.service [Mount] Where=/var/lib/containers What=/dev/disk/by-partlabel/var-lib-containers Type=xfs Options=defaults,prjquota [Install] RequiredBy=local-fs.target enabled: true name: var-lib-containers.mount 1 Specify the root disk. 2 Specify the start of the partition in MiB. If the value is too small, the installation will fail. 3 Specify a minimum size for the partition of 500 GB to ensure adequate disk space for precached images. If the value is too small, the deployments after installation will fail. 16.2.3. Seed image configuration You can create a seed image from a single-node OpenShift cluster with the same hardware as your bare-metal host, and with a similar target cluster configuration. However, the seed image generated from the seed cluster cannot contain any cluster-specific configuration. The following table lists the components, resources, and configurations that you must and must not include in your seed image: Table 16.2. Seed image configuration Cluster configuration Include in seed image Performance profile Yes MachineConfig resources for the target cluster Yes IP version [1] Yes Set of Day 2 Operators, including the Lifecycle Agent and the OADP Operator Yes Disconnected registry configuration [2] Yes Valid proxy configuration [3] Yes FIPS configuration Yes Dedicated partition on the primary disk for container storage that matches the size of the target clusters Yes Local volumes StorageClass used in LocalVolume for LSO LocalVolume for LSO LVMCluster CR for LVMS No Dual-stack networking is not supported in this release. If the seed cluster is installed in a disconnected environment, the target clusters must also be installed in a disconnected environment. The proxy configuration on the seed and target clusters does not have to match. 16.2.3.1. Seed image configuration using the RAN DU profile The following table lists the components, resources, and configurations that you must and must not include in the seed image when using the RAN DU profile: Table 16.3. 
Seed image configuration with RAN DU profile Resource Include in seed image All extra manifests that are applied as part of Day 0 installation Yes All Day 2 Operator subscriptions Yes DisableOLMPprof.yaml Yes TunedPerformancePatch.yaml Yes PerformanceProfile.yaml Yes SriovOperatorConfig.yaml Yes DisableSnoNetworkDiag.yaml Yes StorageClass.yaml No, if it is used in StorageLV.yaml StorageLV.yaml No StorageLVMCluster.yaml No The following list of resources and configurations can be applied as extra manifests or by using RHACM policies: ClusterLogForwarder.yaml ReduceMonitoringFootprint.yaml SriovFecClusterConfig.yaml PtpOperatorConfigForEvent.yaml DefaultCatsrc.yaml PtpConfig.yaml SriovNetwork.yaml Important If you are using GitOps ZTP, enable these resources by using RHACM policies to ensure configuration changes can be applied throughout the cluster lifecycle. 16.2.4. Generating a seed image with the Lifecycle Agent Use the Lifecycle Agent to generate a seed image from a managed cluster. The Operator checks for required system configurations, performs any necessary system cleanup before generating the seed image, and launches the image generation. The seed image generation includes the following tasks: Stopping cluster Operators Preparing the seed image configuration Generating and pushing the seed image to the image repository specified in the SeedGenerator CR Restoring cluster Operators Expiring seed cluster certificates Generating new certificates for the seed cluster Restoring and updating the SeedGenerator CR on the seed cluster Prerequisites RHACM and multicluster engine for Kubernetes Operator are not installed on the seed cluster. You have configured a shared container directory on the seed cluster. You have installed the minimum version of the OADP Operator and the Lifecycle Agent on the seed cluster. Ensure that persistent volumes are not configured on the seed cluster. Ensure that the LocalVolume CR does not exist on the seed cluster if the Local Storage Operator is used. Ensure that the LVMCluster CR does not exist on the seed cluster if LVM Storage is used. Ensure that the DataProtectionApplication CR does not exist on the seed cluster if OADP is used. Procedure Detach the managed cluster from the hub to delete any RHACM-specific resources from the seed cluster that must not be in the seed image: Manually detach the seed cluster by running the following command: USD oc delete managedcluster sno-worker-example Wait until the managed cluster is removed. After the cluster is removed, create the proper SeedGenerator CR. The Lifecycle Agent cleans up the RHACM artifacts. If you are using GitOps ZTP, detach your cluster by removing the seed cluster's SiteConfig CR from the kustomization.yaml . If you have a kustomization.yaml file that references multiple SiteConfig CRs, remove your seed cluster's SiteConfig CR from the kustomization.yaml : apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: #- example-seed-sno1.yaml - example-target-sno2.yaml - example-target-sno3.yaml If you have a kustomization.yaml that references one SiteConfig CR, remove your seed cluster's SiteConfig CR from the kustomization.yaml and add the generators: {} line: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: {} Commit the kustomization.yaml changes in your Git repository and push the changes to your repository. The ArgoCD pipeline detects the changes and removes the managed cluster. 
Create the Secret object so that you can push the seed image to your registry. Create the authentication file by running the following commands: USD MY_USER=myuserid USD AUTHFILE=/tmp/my-auth.json USD podman login --authfile USD{AUTHFILE} -u USD{MY_USER} quay.io/USD{MY_USER} USD base64 -w 0 USD{AUTHFILE} ; echo Copy the output into the seedAuth field in the Secret YAML file named seedgen in the openshift-lifecycle-agent namespace: apiVersion: v1 kind: Secret metadata: name: seedgen 1 namespace: openshift-lifecycle-agent type: Opaque data: seedAuth: <encoded_AUTHFILE> 2 1 The Secret resource must have the name: seedgen and namespace: openshift-lifecycle-agent fields. 2 Specifies a base64-encoded authfile for write-access to the registry for pushing the generated seed images. Apply the Secret by running the following command: USD oc apply -f secretseedgenerator.yaml Create the SeedGenerator CR: apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage 1 spec: seedImage: <seed_container_image> 2 1 The SeedGenerator CR must be named seedimage . 2 Specify the container image URL, for example, quay.io/example/seed-container-image:<tag> . It is recommended to use the <seed_cluster_name>:<ocp_version> format. Generate the seed image by running the following command: USD oc apply -f seedgenerator.yaml Important The cluster reboots and loses API capabilities while the Lifecycle Agent generates the seed image. Applying the SeedGenerator CR stops the kubelet and the CRI-O operations, then it starts the image generation. If you want to generate more seed images, you must provision a new seed cluster with the version that you want to generate a seed image from. Verification After the cluster recovers and it is available, you can check the status of the SeedGenerator CR by running the following command: USD oc get seedgenerator -o yaml Example output status: conditions: - lastTransitionTime: "2024-02-13T21:24:26Z" message: Seed Generation completed observedGeneration: 1 reason: Completed status: "False" type: SeedGenInProgress - lastTransitionTime: "2024-02-13T21:24:26Z" message: Seed Generation completed observedGeneration: 1 reason: Completed status: "True" type: SeedGenCompleted 1 observedGeneration: 1 1 The seed image generation is complete. 16.3. Preinstalling single-node OpenShift using an image-based installation Use the openshift-install program to create a live installation ISO for preinstalling single-node OpenShift on bare-metal hosts. For more information about downloading the installation program, see "Installation process" in the "Additional resources" section. The installation program takes a seed image URL and other inputs, such as the release version of the seed image and the disk to use for the installation process, and creates a live installation ISO. You can then start the host using the live installation ISO to begin preinstallation. When preinstallation is complete, the host is ready to ship to a remote site for the final site-specific configuration and deployment. The following are the high-level steps to preinstall a single-node OpenShift cluster using an image-based installation: Generate a seed image. Create a live installation ISO using the openshift-install installation program. Boot the host using the live installation ISO to preinstall the host. Additional resources Installation process 16.3.1. 
Creating a live installation ISO for a single-node OpenShift image-based installation You can embed your single-node OpenShift seed image URL, and other installation artifacts, in a live installation ISO by using the openshift-install program. Note For more information about the specification for the image-based-installation-config.yaml manifest, see the section "Reference specifications for the image-based-installation-config.yaml manifest". Prerequisites You generated a seed image from a single-node OpenShift seed cluster. You downloaded the openshift-install program. The version of the openshift-install program must match the OpenShift Container Platform version in your seed image. The target host has network access to the seed image URL and all other installation artifacts. If you require static networking, you must install the nmstatectl library on the host that creates the live installation ISO. Procedure Create a live installation ISO and embed your single-node OpenShift seed image URL and other installation artifacts: Create a working directory by running the following: USD mkdir ibi-iso-workdir 1 1 Replace ibi-iso-workdir with the name of your working directory. Optional. Create an installation configuration template to use as a reference when configuring the ImageBasedInstallationConfig resource: USD openshift-install image-based create image-config-template --dir ibi-iso-workdir 1 1 If you do not specify a working directory, the command uses the current directory. Example output INFO Image-Config-Template created in: ibi-iso-workdir The command creates the image-based-installation-config.yaml installation configuration template in your target directory: # # Note: This is a sample ImageBasedInstallationConfig file showing # which fields are available to aid you in creating your # own image-based-installation-config.yaml file. # apiVersion: v1beta1 kind: ImageBasedInstallationConfig metadata: name: example-image-based-installation-config # The following fields are required seedImage: quay.io/openshift-kni/seed-image:4.17.0 seedVersion: 4.17.0 installationDisk: /dev/vda pullSecret: '<your_pull_secret>' # networkConfig is optional and contains the network configuration for the host in NMState format. # See https://nmstate.io/examples.html for examples. # networkConfig: # interfaces: # - name: eth0 # type: ethernet # state: up # mac-address: 00:00:00:00:00:00 # ipv4: # enabled: true # address: # - ip: 192.168.122.2 # prefix-length: 23 # dhcp: false Edit your installation configuration file: Example image-based-installation-config.yaml file apiVersion: v1beta1 kind: ImageBasedInstallationConfig metadata: name: example-image-based-installation-config seedImage: quay.io/repo-id/seed:latest seedVersion: "4.17.0" extraPartitionStart: "-240G" installationDisk: /dev/disk/by-id/wwn-0x62c... sshKey: 'ssh-ed25519 AAAA...' 
pullSecret: '{"auths": ...}' networkConfig: interfaces: - name: ens1f0 type: ethernet state: up ipv4: enabled: true dhcp: false auto-dns: false address: - ip: 192.168.200.25 prefix-length: 24 ipv6: enabled: false dns-resolver: config: server: - 192.168.15.47 - 192.168.15.48 routes: config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.200.254 next-hop-interface: ens1f0 Create the live installation ISO by running the following command: USD openshift-install image-based create image --dir ibi-iso-workdir Example output INFO Consuming Image-based Installation ISO Config from target directory INFO Creating Image-based Installation ISO with embedded ignition Verification View the output in the working directory: ibi-iso-workdir/ └── rhcos-ibi.iso Additional resources Reference specifications for the image-based-installation-config.yaml manifest 16.3.1.1. Configuring additional partitions on the target host The installation ISO creates a partition for the /var/lib/containers directory as part of the image-based installation process. You can create additional partitions by using the coreosInstallerArgs specification. For example, on hard disks with adequate storage, you might need an additional partition for storage options, such as Logical Volume Manager (LVM) Storage. Note The /var/lib/containers partition requires at least 500 GB to ensure adequate disk space for precached images. You must create additional partitions with a starting position larger than the partition for /var/lib/containers . Procedure Edit the image-based-installation-config.yaml file to configure additional partitions: Example image-based-installation-config.yaml file apiVersion: v1beta1 kind: ImageBasedInstallationConfig metadata: name: example-extra-partition seedImage: quay.io/repo-id/seed:latest seedVersion: "4.17.0" installationDisk: /dev/sda pullSecret: '{"auths": ...}' # ... skipDiskCleanup: true 1 coreosInstallerArgs: - "--save-partindex" 2 - "6" 3 ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/sda", 4 "partitions": [ { "label": "storage", 5 "number": 6, 6 "sizeMiB": 380000, 7 "startMiB": 500000 8 } ] } ] } } 1 Specify true to skip disk formatting during the installation process. 2 Specify this argument to preserve a partition. 3 The live installation ISO requires five partitions. Specify a number greater than five to identify the additional partition to preserve. 4 Specify the installation disk on the target host. 5 Specify the label for the partition. 6 Specify the number for the partition. 7 Specify the size of the partition in MiB. 8 Specify the starting position on the disk in MiB for the additional partition. You must specify a starting point larger than the partition for /var/lib/containers . Verification When you complete the preinstallation of the host with the live installation ISO, log in to the target host and run the following command to view the partitions: USD lsblk Example output sda 8:0 0 140G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /var/mnt/boot ├─sda4 8:4 0 120G 0 part /var/mnt ├─sda5 8:5 0 500G 0 part /var/lib/containers └─sda6 8:6 0 380G 0 part 16.3.2. Provisioning the live installation ISO to a host Using your preferred method, boot the target bare-metal host from the rhcos-ibi.iso live installation ISO to preinstall single-node OpenShift. Verification Log in to the target host.
View the system logs by running the following command: USD journalctl -b Example output Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="All the precaching threads have finished." Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Total Images: 125" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Images Pulled Successfully: 125" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Images Failed to Pull: 0" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Completed executing pre-caching" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13T17:01:44Z" level=info msg="Pre-cached images successfully." Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13 17:01:44" level=info msg="Skipping shutdown" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time="2024-08-13 17:01:44" level=info msg="IBI preparation process finished successfully!" Aug 13 17:01:44 10.46.26.129 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Aug 13 17:01:44 10.46.26.129 systemd[1]: Finished SNO Image-based Installation. Aug 13 17:01:44 10.46.26.129 systemd[1]: Reached target Multi-User System. Aug 13 17:01:44 10.46.26.129 systemd[1]: Reached target Graphical Interface. 16.3.3. Reference specifications for the image-based-installation-config.yaml manifest The following content describes the specifications for the image-based-installation-config.yaml manifest. The openshift-install program uses the image-based-installation-config.yaml manifest to create a live installation ISO for image-based installations of single-node OpenShift. Table 16.4. Required specifications Specification Type Description seedImage string Specifies the seed image to use in the ISO generation process. seedVersion string Specifies the OpenShift Container Platform release version of the seed image. The release version in the seed image must match the release version that you specify in the seedVersion field. installationDisk string Specifies the disk that will be used for the installation process. Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use a persistent disk attribute, such as the disk World Wide Name (WWN), for example: /dev/disk/by-id/wwn-<disk-id> . pullSecret string Specifies the pull secret to use during the precache process. The pull secret contains authentication credentials for pulling the release payload images from the container registry. If the seed image requires a separate private registry authentication, add the authentication details to the pull secret. Table 16.5. Optional specifications Specification Type Description shutdown boolean Specifies if the host shuts down after the installation process completes. The default value is false . extraPartitionStart string Specifies the start of the extra partition used for /var/lib/containers . The default value is -40G , which means that the partition will be exactly 40GiB in size and uses the space 40GiB from the end of the disk. 
If you specify a positive value, the partition will start at that position of the disk and extend to the end of the disk. extraPartitionLabel string The label of the extra partition you use for /var/lib/containers . The default partition label is var-lib-containers . Note You must ensure that the partition label in the installation ISO matches the partition label set in the machine configuration for the seed image. If the partition labels are different, the partition mount fails during installation on the host. For more information, see "Configuring a shared container partition between ostree stateroots". extraPartitionNumber unsigned integer The number of the extra partition you use for /var/lib/containers . The default number is 5 . skipDiskCleanup boolean The installation process formats the disk on the host. Set this specification to true to skip this step. The default is false . networkConfig string Specifies networking configurations for the host, for example: networkConfig: interfaces: - name: ens1f0 type: ethernet state: up ... If you require static networking, you must install the nmstatectl library on the host that creates the live installation ISO. For further information about defining network configurations by using nmstate , see nmstate.io . Important The name of the interface must match the actual NIC name as shown in the operating system. proxy string Specifies proxy settings to use during the installation ISO generation, for example: proxy: httpProxy: "http://proxy.example.com:8080" httpsProxy: "http://proxy.example.com:8080" noProxy: "no_proxy.example.com" imageDigestSources string Specifies the sources or repositories for the release-image content, for example: imageDigestSources: - mirrors: - "registry.example.com:5000/ocp4/openshift4" source: "quay.io/openshift-release-dev/ocp-release" additionalTrustBundle string Specifies the PEM-encoded X.509 certificate bundle. The installation program adds this to the /etc/pki/ca-trust/source/anchors/ directory in the installation ISO. additionalTrustBundle: | -----BEGIN CERTIFICATE----- MTICLDCCAdKgAwfBAgIBAGAKBggqhkjOPQRDAjB9MQswCQYRVEQGE ... l2wOuDwKQa+upc4GftXE7C//4mKBNBC6Ty01gUaTIpo= -----END CERTIFICATE----- sshKey string Specifies the SSH key to authenticate access to the host. ignitionConfigOverride string Specifies a JSON string containing the user overrides for the Ignition config. The configuration merges with the Ignition config file generated by the installation program. This feature requires Ignition version 3.2 or later. coreosInstallerArgs string Specifies custom arguments for the coreos-installer command that you can use to configure kernel arguments and disk partitioning options. Additional resources Configuring a shared container partition between ostree stateroots 16.4. Deploying single-node OpenShift clusters 16.4.1. About image-based deployments for managed single-node OpenShift When a host preinstalled with single-node OpenShift using an image-based installation arrives at a remote site, a technician can easily reconfigure and deploy the host in a matter of minutes. For clusters with a hub-and-spoke architecture, to complete the deployment of a preinstalled host, you must first define site-specific configuration resources on the hub cluster for each host. These resources contain configuration information such as the properties of the bare-metal host, authentication details, and other deployment and networking information. 
The Image Based Install (IBI) Operator creates a configuration ISO from these resources, and then boots the host with the configuration ISO attached. The host mounts the configuration ISO and runs the reconfiguration process. When the reconfiguration completes, the single-node OpenShift cluster is ready. Note You must create distinct configuration resources for each bare-metal host. See the following high-level steps to deploy a preinstalled host in a cluster with a hub-and-spoke architecture: Install the IBI Operator on the hub cluster. Create site-specific configuration resources in the hub cluster for each host. The IBI Operator creates a configuration ISO from these resources and boots the target host with the configuration ISO attached. The host mounts the configuration ISO and runs the reconfiguration process. When the reconfiguration completes, the single-node OpenShift cluster is ready. Note Alternatively, you can manually deploy a preinstalled host for a cluster without using a hub cluster. You must define an ImageBasedConfig resource and an installation manifest, and provide these as inputs to the openshift-install installation program. For more information, see "Deploying a single-node OpenShift cluster using the openshift-install program". Additional resources Deploying a single-node OpenShift cluster using the openshift-install program 16.4.1.1. Installing the Image Based Install Operator The Image Based Install (IBI) Operator is part of the image-based deployment workflow for preinstalled single-node OpenShift on bare-metal hosts. Note The IBI Operator is part of the multicluster engine for Kubernetes Operator from MCE version 2.7. Prerequisites You logged in as a user with cluster-admin privileges. You deployed a Red Hat Advanced Cluster Management (RHACM) hub cluster or you deployed the multicluster engine for Kubernetes Operator. You reviewed the required versions of software components in the section "Software prerequisites for an image-based installation". Procedure Set the enabled specification to true for the image-based-install-operator component in the MultiClusterEngine resource by running the following command: USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type json \ --patch '[{"op": "add", "path":"/spec/overrides/components/-", "value": {"name":"image-based-install-operator","enabled": true}}]' Verification Check that the Image Based Install Operator pod is running by running the following command: USD oc get pods -A | grep image-based Example output multicluster-engine image-based-install-operator-57fb8sc423-bxdj8 2/2 Running 0 5m 16.4.1.2. Deploying a managed single-node OpenShift cluster using the IBI Operator Create the site-specific configuration resources in the hub cluster to initiate the image-based deployment of a preinstalled host. When you create these configuration resources in the hub cluster, the Image Based Install (IBI) Operator generates a configuration ISO and attaches it to the target host to begin the site-specific configuration process. When the configuration process completes, the single-node OpenShift cluster is ready. Note For more information about the configuration resources that you must configure in the hub cluster, see "Cluster configuration resources for deploying a preinstalled host". Prerequisites You preinstalled a host with single-node OpenShift using an image-based installation. You logged in as a user with cluster-admin privileges. 
You deployed a Red Hat Advanced Cluster Management (RHACM) hub cluster or you deployed the multicluster engine for Kubernetes operator (MCE). You installed the IBI Operator on the hub cluster. You created a pull secret to authenticate pull requests. For more information, see "Using image pull secrets". Procedure Create the ibi-ns namespace by running the following command: USD oc create namespace ibi-ns Create the Secret resource for your image registry: Create a YAML file that defines the Secret resource for your image registry: Example secret-image-registry.yaml file apiVersion: v1 kind: Secret metadata: name: ibi-image-pull-secret namespace: ibi-ns stringData: .dockerconfigjson: <base64-docker-auth-code> 1 type: kubernetes.io/dockerconfigjson 1 You must provide base64-encoded credential details. See the "Additional resources" section for more information about using image pull secrets. Create the Secret resource for your image registry by running the following command: USD oc create -f secret-image-registry.yaml Optional: Configure static networking for the host: Create a Secret resource containing the static network configuration in nmstate format: Example host-network-config-secret.yaml file apiVersion: v1 kind: Secret metadata: name: host-network-config-secret 1 namespace: ibi-ns type: Opaque stringData: nmstate: | 2 interfaces: - name: ens1f0 3 type: ethernet state: up ipv4: enabled: true address: - ip: 192.168.200.25 prefix-length: 24 dhcp: false 4 ipv6: enabled: false dns-resolver: config: server: - 192.168.15.47 5 - 192.168.15.48 routes: config: 6 - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.200.254 next-hop-interface: ens1f0 table-id: 254 1 Specify the name for the Secret resource. 2 Define the static network configuration in nmstate format. 3 Specify the name of the interface on the host. The name of the interface must match the actual NIC name as shown in the operating system. To use your MAC address for NIC matching, set the identifier field to mac-address . 4 You must specify dhcp: false to ensure nmstate assigns the static IP address to the interface. 5 Specify one or more DNS servers that the system will use to resolve domain names. 6 In this example, the default route is configured through the ens1f0 interface to the next hop IP address 192.168.200.254 . Create the BareMetalHost and Secret resources: Create a YAML file that defines the BareMetalHost and Secret resources: Example ibi-bmh.yaml file apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: ibi-bmh 1 namespace: ibi-ns spec: online: false 2 bootMACAddress: 00:a5:12:55:62:64 3 bmc: address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/8a5babac-94d0-4c20-b282-50dc3a0a32b5 4 credentialsName: ibi-bmh-bmc-secret 5 preprovisioningNetworkDataName: host-network-config-secret 6 automatedCleaningMode: disabled 7 externallyProvisioned: true 8 --- apiVersion: v1 kind: Secret metadata: name: ibi-bmh-secret 9 namespace: ibi-ns type: Opaque data: username: <user_name> 10 password: <password> 11 1 Specify the name for the BareMetalHost resource. 2 Specify if the host should be online. 3 Specify the host boot MAC address. 4 Specify the BMC address. You can only use bare-metal host drivers that support virtual media networking booting, for example redfish-virtualmedia and idrac-virtualmedia. 5 Specify the name of the bare-metal host Secret resource. 6 Optional: If you require static network configuration for the host, specify the name of the Secret resource containing the configuration. 
7 You must specify automatedCleaningMode:disabled to prevent the provisioning service from deleting all preinstallation artifacts, such as the seed image, during disk inspection. 8 You must specify externallyProvisioned: true to enable the host to boot from the preinstalled disk, instead of the configuration ISO. 9 Specify the name for the Secret resource. 10 Specify the username. 11 Specify the password. Create the BareMetalHost and Secret resources by running the following command: USD oc create -f ibi-bmh.yaml Create the ClusterImageSet resource: Create a YAML file that defines the ClusterImageSet resource: Example ibi-cluster-image-set.yaml file apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: ibi-img-version-arch 1 spec: releaseImage: ibi.example.com:path/to/release/images:version-arch 2 1 Specify the name for the ClusterImageSet resource. 2 Specify the address for the release image to use for the deployment. If you use a different image registry compared to the image registry used during seed image generation, ensure that the OpenShift Container Platform version for the release image remains the same. Create the ClusterImageSet resource by running the following command: USD oc apply -f ibi-cluster-image-set.yaml Create the ImageClusterInstall resource: Create a YAML file that defines the ImageClusterInstall resource: Example ibi-image-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1alpha1 kind: ImageClusterInstall metadata: name: ibi-image-install 1 namespace: ibi-ns spec: bareMetalHostRef: name: ibi-bmh 2 namespace: ibi-ns clusterDeploymentRef: name: ibi-cluster-deployment 3 hostname: ibi-host 4 imageSetRef: name: ibi-img-version-arch 5 machineNetwork: 10.0.0.0/24 6 proxy: 7 httpProxy: "http://proxy.example.com:8080" #httpsProxy: "http://proxy.example.com:8080" #noProxy: "no_proxy.example.com" 1 Specify the name for the ImageClusterInstall resource. 2 Specify the BareMetalHost resource that you want to target for the image-based installation. 3 Specify the name of the ClusterDeployment resource that you want to use for the image-based installation of the target host. 4 Specify the hostname for the cluster. 5 Specify the name of the ClusterImageSet resource you used to define the container release images to use for deployment. 6 Specify the public CIDR (Classless Inter-Domain Routing) of the external network. 7 Optional: Specify a proxy to use for the cluster deployment. Important If your cluster deployment requires a proxy configuration, you must do the following: Create a seed image from a seed cluster featuring a proxy configuration. The proxy configurations do not have to match. Configure the machineNetwork field in your installation manifest. Create the ImageClusterInstall resource by running the following command: USD oc create -f ibi-image-cluster-install.yaml Create the ClusterDeployment resource: Create a YAML file that defines the ClusterDeployment resource: Example ibi-cluster-deployment.yaml file apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ibi-cluster-deployment 1 namespace: ibi-ns 2 spec: baseDomain: example.com 3 clusterInstallRef: group: extensions.hive.openshift.io kind: ImageClusterInstall name: ibi-image-install 4 version: v1alpha1 clusterName: ibi-cluster 5 platform: none: {} pullSecretRef: name: ibi-image-pull-secret 6 1 Specify the name for the ClusterDeployment resource. 2 Specify the namespace for the ClusterDeployment resource. 3 Specify the base domain that the cluster should belong to. 
4 Specify the name of the ImageClusterInstall in which you defined the container images to use for the image-based installation of the target host. 5 Specify a name for the cluster. 6 Specify the secret to use for pulling images from your image registry. Create the ClusterDeployment resource by running the following command: USD oc apply -f ibi-cluster-deployment.yaml Create the ManagedCluster resource: Create a YAML file that defines the ManagedCluster resource: Example ibi-managed.yaml file apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: sno-ibi 1 spec: hubAcceptsClient: true 2 1 Specify the name for the ManagedCluster resource. 2 Specify true to enable RHACM to manage the cluster. Create the ManagedCluster resource by running the following command: USD oc apply -f ibi-managed.yaml Verification Check the status of the ImageClusterInstall in the hub cluster to monitor the progress of the target host installation by running the following command: USD oc get imageclusterinstall Example output NAME REQUIREMENTSMET COMPLETED BAREMETALHOSTREF target-0 HostValidationSucceeded ClusterInstallationSucceeded ibi-bmh Warning If the ImageClusterInstall resource is deleted, the IBI Operator reattaches the BareMetalHost resource and reboots the machine. When the installation completes, you can retrieve the kubeconfig secret to log in to the managed cluster by running the following command: USD oc extract secret/<cluster_name>-admin-kubeconfig -n <cluster_namespace> --to - > <directory>/<cluster_name>-kubeconfig <cluster_name> is the name of the cluster. <cluster_namespace> is the namespace of the cluster. <directory> is the directory in which to create the file. Additional resources Using image pull secrets Cluster configuration resources for deploying a preinstalled host 16.4.1.2.1. Cluster configuration resources for deploying a preinstalled host To complete a deployment for a preinstalled host at a remote site, you must configure the following site-specific cluster configuration resources in the hub cluster for each bare-metal host. Table 16.6. Cluster configuration resources reference Resource Description Namespace Namespace for the managed single-node OpenShift cluster. BareMetalHost Describes the physical host and its properties, such as the provisioning and hardware configuration. Secret for the bare-metal host Credentials for the host BMC. Secret for the bare-metal host static network configuration Optional: Describes static network configuration for the target host. Secret for the image registry Credentials for the image registry. The secret for the image registry must be of type kubernetes.io/dockerconfigjson . ImageClusterInstall References the bare-metal host, deployment, and image set resources. ClusterImageSet Describes the release images to use for the cluster. ClusterDeployment Describes networking, authentication, and platform-specific settings. ManagedCluster Describes cluster details to enable Red Hat Advanced Cluster Management (RHACM) to register and manage the cluster. ConfigMap Optional: Describes additional configurations for the cluster deployment, such as adding a bundle of trusted certificates for the host to ensure trusted communications for cluster services. 16.4.1.2.2. ImageClusterInstall resource API specifications The following content describes the API specifications for the ImageClusterInstall resource. This resource is the endpoint for the Image Based Install Operator. Table 16.7. 
Required specifications Specification Type Description imageSetRef string Specify the name of the ClusterImageSet resource that defines the release images for the deployment. hostname string Specify the hostname for the cluster. sshKey string Specify your SSH key to provide SSH access to the target host. Table 16.8. Optional specifications Specification Type Description clusterDeploymentRef string Specify the name of the ClusterDeployment resource that you want to use for the image-based installation of the target host. clusterMetadata string After the deployment completes, this specification is automatically populated with metadata information about the cluster, including the cluster-admin kubeconfig credentials for logging in to the cluster. imageDigestSources string Specifies the sources or repositories for the release-image content, for example: imageDigestSources: - mirrors: - "registry.example.com:5000/ocp4/openshift4" source: "quay.io/openshift-release-dev/ocp-release" extraManifestsRefs string Specify a ConfigMap resource containing additional manifests to be applied to the target cluster. bareMetalHostRef string Specify the BareMetalHost resource to use for the cluster deployment. machineNetwork string Specify the public CIDR (Classless Inter-Domain Routing) of the external network. proxy string Specifies proxy settings for the cluster, for example: proxy: httpProxy: "http://proxy.example.com:8080" httpsProxy: "http://proxy.example.com:8080" noProxy: "no_proxy.example.com" caBundleRef string Specify a ConfigMap resource containing the new bundle of trusted certificates for the host. 16.4.1.3. ConfigMap resources for extra manifests You can optionally create a ConfigMap resource to define additional manifests in an image-based deployment for managed single-node OpenShift clusters. After you create the ConfigMap resource, reference it in the ImageClusterInstall resource. During deployment, the IBI Operator includes the extra manifests in the deployment. 16.4.1.3.1. Creating a ConfigMap resource to add extra manifests in an image-based deployment You can use a ConfigMap resource to add extra manifests to the image-based deployment for single-node OpenShift clusters. The following example adds a single-root I/O virtualization (SR-IOV) network to the deployment. Note Filenames for extra manifests must not exceed 30 characters. Longer filenames might cause deployment failures. Prerequisites You preinstalled a host with single-node OpenShift using an image-based installation. You logged in as a user with cluster-admin privileges. 
Procedure Create the SriovNetworkNodePolicy and SriovNetwork resources: Create a YAML file that defines the resources: Example sriov-extra-manifest.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: "example-sriov-node-policy" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: "" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "example-sriov-network" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: "on" trust: "off" Create the ConfigMap resource by running the following command: USD oc create configmap sr-iov-extra-manifest --from-file=sriov-extra-manifest.yaml -n ibi-ns 1 1 Specify the namespace that has the ImageClusterInstall resource. Example output configmap/sr-iov-extra-manifest created Note If you add more than one extra manifest, and the manifests must be applied in a specific order, you must prefix the filenames of the manifests with numbers that represent the required order. For example, 00-namespace.yaml , 01-sriov-extra-manifest.yaml , and so on. Reference the ConfigMap resource in the spec.extraManifestsRefs field of the ImageClusterInstall resource: #... spec: extraManifestsRefs: - name: sr-iov-extra-manifest #... 16.4.1.3.2. Creating a ConfigMap resource to add a CA bundle in an image-based deployment You can use a ConfigMap resource to add a certificate authority (CA) bundle to the host to ensure trusted communications for cluster services. After you create the ConfigMap resource, reference it in the spec.caBundleRef field of the ImageClusterInstall resource. Prerequisites You preinstalled a host with single-node OpenShift using an image-based installation. You logged in as a user with cluster-admin privileges. Procedure Create a CA bundle file called tls-ca-bundle.pem : Example tls-ca-bundle.pem file -----BEGIN CERTIFICATE----- MIIDXTCCAkWgAwIBAgIJAKmjYKJbIyz3MA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV ...Custom CA certificate bundle... 4WPl0Qb27Sb1xZyAsy1ww6MYb98EovazUSfjYr2EVF6ThcAPu4/sMxUV7He2J6Jd cA8SMRwpUbz3LXY= -----END CERTIFICATE----- Create the ConfigMap object by running the following command: USD oc create configmap custom-ca --from-file=tls-ca-bundle.pem -n ibi-ns custom-ca specifies the name for the ConfigMap resource. tls-ca-bundle.pem defines the key for the data entry in the ConfigMap resource. You must include a data entry with the tls-ca-bundle.pem key. ibi-ns specifies the namespace that has the ImageClusterInstall resource. Example output configmap/custom-ca created Reference the ConfigMap resource in the spec.caBundleRef field of the ImageClusterInstall resource: #... spec: caBundleRef: name: custom-ca #... Additional resources About the BareMetalHost resource Using image pull secrets Reference specifications for the image-based-config.yaml manifest 16.4.2. About image-based deployments for single-node OpenShift You can manually generate a configuration ISO by using the openshift-install program. Attach the configuration ISO to your preinstalled target host to complete the deployment. 16.4.2.1. 
Deploying a single-node OpenShift cluster using the openshift-install program You can use the openshift-install program to configure and deploy a host that you preinstalled with an image-based installation. To configure the target host with site-specific details, you must create the following resources: The install-config.yaml installation manifest The image-based-config.yaml manifest The openshift-install program uses these resources to generate a configuration ISO that you attach to the preinstalled target host to complete the deployment. Note For more information about the specifications for the image-based-config.yaml manifest, see "Reference specifications for the image-based-config.yaml manifest". Prerequisites You preinstalled a host with single-node OpenShift using an image-based installation. You downloaded the latest version of the openshift-install program. You created a pull secret to authenticate pull requests. For more information, see "Using image pull secrets". Procedure Create a working directory by running the following: USD mkdir ibi-config-iso-workdir 1 1 Replace ibi-config-iso-workdir with the name of your working directory. Create the installation manifest: Create a YAML file that defines the install-config manifest: Example install-config.yaml file apiVersion: v1 metadata: name: sno-cluster-name baseDomain: host.example.com compute: - architecture: amd64 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.200.0/24 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} fips: false cpuPartitioningMode: "AllNodes" pullSecret: '{"auths":{"<your_pull_secret>"}}}' sshKey: 'ssh-rsa <your_ssh_pub_key>' Important If your cluster deployment requires a proxy configuration, you must do the following: Create a seed image from a seed cluster featuring a proxy configuration. The proxy configurations do not have to match. Configure the machineNetwork field in your installation manifest. Save the file in your working directory. Optional. Create a configuration template in your working directory by running the following command: USD openshift-install image-based create config-template --dir ibi-config-iso-workdir/ Example output INFO Config-Template created in: ibi-config-iso-workdir The command creates the image-based-config.yaml configuration template in your working directory: # # Note: This is a sample ImageBasedConfig file showing # which fields are available to aid you in creating your # own image-based-config.yaml file. # apiVersion: v1beta1 kind: ImageBasedConfig metadata: name: example-image-based-config additionalNTPSources: - 0.rhel.pool.ntp.org - 1.rhel.pool.ntp.org hostname: change-to-hostname releaseRegistry: quay.io # networkConfig contains the network configuration for the host in NMState format. # See https://nmstate.io/examples.html for examples. networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false Edit your configuration file: Example image-based-config.yaml file # # Note: This is a sample ImageBasedConfig file showing # which fields are available to aid you in creating your # own image-based-config.yaml file. 
# apiVersion: v1beta1 kind: ImageBasedConfig metadata: name: sno-cluster-name additionalNTPSources: - 0.rhel.pool.ntp.org - 1.rhel.pool.ntp.org hostname: host.example.com releaseRegistry: quay.io # networkConfig contains the network configuration for the host in NMState format. # See https://nmstate.io/examples.html for examples. networkConfig: interfaces: - name: ens1f0 type: ethernet state: up ipv4: enabled: true dhcp: false auto-dns: false address: - ip: 192.168.200.25 prefix-length: 24 ipv6: enabled: false dns-resolver: config: server: - 192.168.15.47 - 192.168.15.48 routes: config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.200.254 next-hop-interface: ens1f0 Create the configuration ISO in your working directory by running the following command: USD openshift-install image-based create config-image --dir ibi-config-iso-workdir/ Example output INFO Adding NMConnection file <ens1f0.nmconnection> INFO Consuming Install Config from target directory INFO Consuming Image-based Config ISO configuration from target directory INFO Config-Image created in: ibi-config-iso-workdir/auth View the output in the working directory: Example output ibi-config-iso-workdir/ ├── auth │ ├── kubeadmin-password │ └── kubeconfig └── imagebasedconfig.iso Attach the imagebasedconfig.iso to the preinstalled host using your preferred method and restart the host to complete the configuration process and deploy the cluster. Verification When the configuration process completes on the host, access the cluster to verify its status. Export the KUBECONFIG environment variable to point to your kubeconfig file by running the following command: USD export KUBECONFIG=ibi-config-iso-workdir/auth/kubeconfig Verify that the cluster is responding by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node/sno-cluster-name.host.example.com Ready control-plane,master 5h15m v1.30.3 Additional resources Using image pull secrets Reference specifications for the image-based-installation-config.yaml manifest 16.4.2.1.1. Reference specifications for the image-based-config.yaml manifest The following content describes the specifications for the image-based-config.yaml manifest. The openshift-install program uses the image-based-config.yaml manifest to create a site-specific configuration ISO for image-based deployments of single-node OpenShift. Table 16.9. Required specifications Specification Type Description hostname string Define the name of the node for the single-node OpenShift cluster. Table 16.10. Optional specifications Specification Type Description networkConfig string Specifies networking configurations for the host, for example: networkConfig: interfaces: - name: ens1f0 type: ethernet state: up ... If you require static networking, you must install the nmstatectl library on the host that creates the live installation ISO. For further information about defining network configurations by using nmstate , see nmstate.io . Important The name of the interface must match the actual NIC name as shown in the operating system. additionalNTPSources string Specifies a list of NTP sources for all cluster hosts. These NTP sources are added to any existing NTP sources in the cluster. You can use the hostname or IP address for the NTP source. releaseRegistry string Specifies the container image registry that you used for the release image of the seed cluster. 
nodeLabels map[string]string Specifies custom node labels for the single-node OpenShift node, for example: nodeLabels: node-role.kubernetes.io/edge: true environment: production 16.4.2.2. Configuring resources for extra manifests You can optionally define additional resources in an image-based deployment for single-node OpenShift clusters. Create the additional resources in an extra-manifests folder in the same working directory that has the install-config.yaml and image-based-config.yaml manifests. Note Filenames for additional resources in the extra-manifests directory must not exceed 30 characters. Longer filenames might cause deployment failures. 16.4.2.2.1. Creating a resource in the extra-manifests folder You can create a resource in the extra-manifests folder of your working directory to add extra manifests to the image-based deployment for single-node OpenShift clusters. The following example adds a single-root I/O virtualization (SR-IOV) network to the deployment. Note If you add more than one extra manifest, and the manifests must be applied in a specific order, you must prefix the filenames of the manifests with numbers that represent the required order. For example, 00-namespace.yaml , 01-sriov-extra-manifest.yaml , and so on. Prerequisites You created a working directory with the install-config.yaml and image-based-config.yaml manifests. Procedure Go to your working directory and create the extra-manifests folder by running the following command: USD mkdir extra-manifests Create the SriovNetworkNodePolicy and SriovNetwork resources in the extra-manifests folder: Create a YAML file that defines the resources: Example sriov-extra-manifest.yaml file apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: "example-sriov-node-policy" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: "" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "example-sriov-network" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: "on" trust: "off" Verification When you create the configuration ISO, you can view the reference to the extra manifests in the .openshift_install_state.json file in your working directory: "*configimage.ExtraManifests": { "FileList": [ { "Filename": "extra-manifests/sriov-extra-manifest.yaml", "Data": "YXBFDFFD..." } ] }
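As a quick end-to-end check on the workstation, you can list the working directory, regenerate the configuration ISO, and confirm that the extra manifests were consumed. The following is a minimal sketch, assuming the ibi-config-iso-workdir working directory and the sriov-extra-manifest.yaml filename used in the examples above; adjust the names for your environment: USD ls ibi-config-iso-workdir/ ibi-config-iso-workdir/extra-manifests/ USD openshift-install image-based create config-image --dir ibi-config-iso-workdir/ USD grep -o '"Filename": "[^"]*"' ibi-config-iso-workdir/.openshift_install_state.json Each extra manifest that the installation program consumed appears as a Filename entry in the grep output.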
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management", "oc create -f <namespace_filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-lifecycle-agent namespace: openshift-lifecycle-agent spec: targetNamespaces: - openshift-lifecycle-agent", "oc create -f <operatorgroup_filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-lifecycle-agent-subscription namespace: openshift-lifecycle-agent spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <subscription_filename>.yaml", "oc get csv -n openshift-lifecycle-agent", "NAME DISPLAY VERSION REPLACES PHASE lifecycle-agent.v4.17.0 Openshift Lifecycle Agent 4.17.0 Succeeded", "oc get deploy -n openshift-lifecycle-agent", "NAME READY UP-TO-DATE AVAILABLE AGE lifecycle-agent-controller-manager 1/1 1 1 14s", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-containers-partitioned spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 partitions: - label: var-lib-containers startMiB: <start_of_partition> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var-lib-containers format: xfs mountOptions: - defaults - prjquota path: /var/lib/containers wipeFilesystem: true systemd: units: - contents: |- # Generated by Butane [Unit] Before=local-fs.target Requires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service After=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service [Mount] Where=/var/lib/containers What=/dev/disk/by-partlabel/var-lib-containers Type=xfs Options=defaults,prjquota [Install] RequiredBy=local-fs.target enabled: true name: var-lib-containers.mount", "oc delete managedcluster sno-worker-example", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: #- example-seed-sno1.yaml - example-target-sno2.yaml - example-target-sno3.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: {}", "MY_USER=myuserid AUTHFILE=/tmp/my-auth.json podman login --authfile USD{AUTHFILE} -u USD{MY_USER} quay.io/USD{MY_USER}", "base64 -w 0 USD{AUTHFILE} ; echo", "apiVersion: v1 kind: Secret metadata: name: seedgen 1 namespace: openshift-lifecycle-agent type: Opaque data: seedAuth: <encoded_AUTHFILE> 2", "oc apply -f secretseedgenerator.yaml", "apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage 1 spec: seedImage: <seed_container_image> 2", "oc apply -f seedgenerator.yaml", "oc get seedgenerator -o yaml", "status: conditions: - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"False\" type: SeedGenInProgress - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"True\" type: SeedGenCompleted 1 observedGeneration: 1", "mkdir ibi-iso-workdir 1", "openshift-install image-based create image-config-template --dir ibi-iso-workdir 1", "INFO Image-Config-Template created in: ibi-iso-workdir", "# Note: This is a sample ImageBasedInstallationConfig file showing which fields are available to aid you in creating your own image-based-installation-config.yaml file. 
# apiVersion: v1beta1 kind: ImageBasedInstallationConfig metadata: name: example-image-based-installation-config The following fields are required seedImage: quay.io/openshift-kni/seed-image:4.17.0 seedVersion: 4.17.0 installationDisk: /dev/vda pullSecret: '<your_pull_secret>' networkConfig is optional and contains the network configuration for the host in NMState format. See https://nmstate.io/examples.html for examples. networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false", "apiVersion: v1beta1 kind: ImageBasedInstallationConfig metadata: name: example-image-based-installation-config seedImage: quay.io/repo-id/seed:latest seedVersion: \"4.17.0\" extraPartitionStart: \"-240G\" installationDisk: /dev/disk/by-id/wwn-0x62c sshKey: 'ssh-ed25519 AAAA...' pullSecret: '{\"auths\": ...}' networkConfig: interfaces: - name: ens1f0 type: ethernet state: up ipv4: enabled: true dhcp: false auto-dns: false address: - ip: 192.168.200.25 prefix-length: 24 ipv6: enabled: false dns-resolver: config: server: - 192.168.15.47 - 192.168.15.48 routes: config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.200.254 next-hop-interface: ens1f0", "openshift-install image-based create image --dir ibi-iso-workdir", "INFO Consuming Image-based Installation ISO Config from target directory INFO Creating Image-based Installation ISO with embedded ignition", "ibi-iso-workdir/ └── rhcos-ibi.iso", "apiVersion: v1beta1 kind: ImageBasedInstallationConfig metadata: name: example-extra-partition seedImage: quay.io/repo-id/seed:latest seedVersion: \"4.17.0\" installationDisk: /dev/sda pullSecret: '{\"auths\": ...}' skipDiskCleanup: true 1 coreosInstallerArgs: - \"--save-partindex\" 2 - \"6\" 3 ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/sda\", 4 \"partitions\": [ { \"label\": \"storage\", 5 \"number\": 6, 6 \"sizeMiB\": 380000, 7 \"startMiB\": 500000 8 } ] } ] } }", "lsblk", "sda 8:0 0 140G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /var/mnt/boot ├─sda4 8:4 0 120G 0 part /var/mnt ├─sda5 8:5 0 500G 0 part /var/lib/containers └─sda6 8:6 0 380G 0 part", "journalctl -b", "Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13T17:01:44Z\" level=info msg=\"All the precaching threads have finished.\" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13T17:01:44Z\" level=info msg=\"Total Images: 125\" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13T17:01:44Z\" level=info msg=\"Images Pulled Successfully: 125\" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13T17:01:44Z\" level=info msg=\"Images Failed to Pull: 0\" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13T17:01:44Z\" level=info msg=\"Completed executing pre-caching\" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13T17:01:44Z\" level=info msg=\"Pre-cached images successfully.\" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13 17:01:44\" level=info msg=\"Skipping shutdown\" Aug 13 17:01:44 10.46.26.129 install-rhcos-and-restore-seed.sh[2876]: time=\"2024-08-13 17:01:44\" level=info msg=\"IBI preparation process finished successfully!\" Aug 13 17:01:44 10.46.26.129 systemd[1]: 
var-lib-containers-storage-overlay.mount: Deactivated successfully. Aug 13 17:01:44 10.46.26.129 systemd[1]: Finished SNO Image-based Installation. Aug 13 17:01:44 10.46.26.129 systemd[1]: Reached target Multi-User System. Aug 13 17:01:44 10.46.26.129 systemd[1]: Reached target Graphical Interface.", "networkConfig: interfaces: - name: ens1f0 type: ethernet state: up", "proxy: httpProxy: \"http://proxy.example.com:8080\" httpsProxy: \"http://proxy.example.com:8080\" noProxy: \"no_proxy.example.com\"", "imageDigestSources: - mirrors: - \"registry.example.com:5000/ocp4/openshift4\" source: \"quay.io/openshift-release-dev/ocp-release\"", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- MTICLDCCAdKgAwfBAgIBAGAKBggqhkjOPQRDAjB9MQswCQYRVEQGE l2wOuDwKQa+upc4GftXE7C//4mKBNBC6Ty01gUaTIpo= -----END CERTIFICATE-----", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type json --patch '[{\"op\": \"add\", \"path\":\"/spec/overrides/components/-\", \"value\": {\"name\":\"image-based-install-operator\",\"enabled\": true}}]'", "oc get pods -A | grep image-based", "multicluster-engine image-based-install-operator-57fb8sc423-bxdj8 2/2 Running 0 5m", "oc create namespace ibi-ns", "apiVersion: v1 kind: Secret metadata: name: ibi-image-pull-secret namespace: ibi-ns stringData: .dockerconfigjson: <base64-docker-auth-code> 1 type: kubernetes.io/dockerconfigjson", "oc create -f secret-image-registry.yaml", "apiVersion: v1 kind: Secret metadata: name: host-network-config-secret 1 namespace: ibi-ns type: Opaque stringData: nmstate: | 2 interfaces: - name: ens1f0 3 type: ethernet state: up ipv4: enabled: true address: - ip: 192.168.200.25 prefix-length: 24 dhcp: false 4 ipv6: enabled: false dns-resolver: config: server: - 192.168.15.47 5 - 192.168.15.48 routes: config: 6 - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.200.254 next-hop-interface: ens1f0 table-id: 254", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: ibi-bmh 1 namespace: ibi-ns spec: online: false 2 bootMACAddress: 00:a5:12:55:62:64 3 bmc: address: redfish-virtualmedia+http://192.168.111.1:8000/redfish/v1/Systems/8a5babac-94d0-4c20-b282-50dc3a0a32b5 4 credentialsName: ibi-bmh-bmc-secret 5 preprovisioningNetworkDataName: host-network-config-secret 6 automatedCleaningMode: disabled 7 externallyProvisioned: true 8 --- apiVersion: v1 kind: Secret metadata: name: ibi-bmh-secret 9 namespace: ibi-ns type: Opaque data: username: <user_name> 10 password: <password> 11", "oc create -f ibi-bmh.yaml", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: ibi-img-version-arch 1 spec: releaseImage: ibi.example.com:path/to/release/images:version-arch 2", "oc apply -f ibi-cluster-image-set.yaml", "apiVersion: extensions.hive.openshift.io/v1alpha1 kind: ImageClusterInstall metadata: name: ibi-image-install 1 namespace: ibi-ns spec: bareMetalHostRef: name: ibi-bmh 2 namespace: ibi-ns clusterDeploymentRef: name: ibi-cluster-deployment 3 hostname: ibi-host 4 imageSetRef: name: ibi-img-version-arch 5 machineNetwork: 10.0.0.0/24 6 proxy: 7 httpProxy: \"http://proxy.example.com:8080\" #httpsProxy: \"http://proxy.example.com:8080\" #noProxy: \"no_proxy.example.com\"", "oc create -f ibi-image-cluster-install.yaml", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ibi-cluster-deployment 1 namespace: ibi-ns 2 spec: baseDomain: example.com 3 clusterInstallRef: group: extensions.hive.openshift.io kind: ImageClusterInstall name: ibi-image-install 4 version: v1alpha1 
clusterName: ibi-cluster 5 platform: none: {} pullSecretRef: name: ibi-image-pull-secret 6", "oc apply -f ibi-cluster-deployment.yaml", "apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: sno-ibi 1 spec: hubAcceptsClient: true 2", "oc apply -f ibi-managed.yaml", "oc get imageclusterinstall", "NAME REQUIREMENTSMET COMPLETED BAREMETALHOSTREF target-0 HostValidationSucceeded ClusterInstallationSucceeded ibi-bmh", "oc extract secret/<cluster_name>-admin-kubeconfig -n <cluster_namespace> --to - > <directory>/<cluster_name>-kubeconfig", "imageDigestSources: - mirrors: - \"registry.example.com:5000/ocp4/openshift4\" source: \"quay.io/openshift-release-dev/ocp-release\"", "proxy: httpProxy: \"http://proxy.example.com:8080\" httpsProxy: \"http://proxy.example.com:8080\" noProxy: \"no_proxy.example.com\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: \"example-sriov-node-policy\" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: \"\" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"example-sriov-network\" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: \"on\" trust: \"off\"", "oc create configmap sr-iov-extra-manifest --from-file=sriov-extra-manifest.yaml -n ibi-ns 1", "configmap/sr-iov-extra-manifest created", "# spec: extraManifestsRefs: - name: sr-iov-extra-manifest #", "-----BEGIN CERTIFICATE----- MIIDXTCCAkWgAwIBAgIJAKmjYKJbIyz3MA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV ...Custom CA certificate bundle 4WPl0Qb27Sb1xZyAsy1ww6MYb98EovazUSfjYr2EVF6ThcAPu4/sMxUV7He2J6Jd cA8SMRwpUbz3LXY= -----END CERTIFICATE-----", "oc create configmap custom-ca --from-file=tls-ca-bundle.pem -n ibi-ns", "configmap/custom-ca created", "# spec: caBundleRef: name: custom-ca #", "mkdir ibi-config-iso-workdir 1", "apiVersion: v1 metadata: name: sno-cluster-name baseDomain: host.example.com compute: - architecture: amd64 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.200.0/24 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} fips: false cpuPartitioningMode: \"AllNodes\" pullSecret: '{\"auths\":{\"<your_pull_secret>\"}}}' sshKey: 'ssh-rsa <your_ssh_pub_key>'", "openshift-install image-based create config-template --dir ibi-config-iso-workdir/", "INFO Config-Template created in: ibi-config-iso-workdir", "# Note: This is a sample ImageBasedConfig file showing which fields are available to aid you in creating your own image-based-config.yaml file. # apiVersion: v1beta1 kind: ImageBasedConfig metadata: name: example-image-based-config additionalNTPSources: - 0.rhel.pool.ntp.org - 1.rhel.pool.ntp.org hostname: change-to-hostname releaseRegistry: quay.io networkConfig contains the network configuration for the host in NMState format. See https://nmstate.io/examples.html for examples. 
networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false", "# Note: This is a sample ImageBasedConfig file showing which fields are available to aid you in creating your own image-based-config.yaml file. # apiVersion: v1beta1 kind: ImageBasedConfig metadata: name: sno-cluster-name additionalNTPSources: - 0.rhel.pool.ntp.org - 1.rhel.pool.ntp.org hostname: host.example.com releaseRegistry: quay.io networkConfig contains the network configuration for the host in NMState format. See https://nmstate.io/examples.html for examples. networkConfig: interfaces: - name: ens1f0 type: ethernet state: up ipv4: enabled: true dhcp: false auto-dns: false address: - ip: 192.168.200.25 prefix-length: 24 ipv6: enabled: false dns-resolver: config: server: - 192.168.15.47 - 192.168.15.48 routes: config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.200.254 next-hop-interface: ens1f0", "openshift-install image-based create config-image --dir ibi-config-iso-workdir/", "INFO Adding NMConnection file <ens1f0.nmconnection> INFO Consuming Install Config from target directory INFO Consuming Image-based Config ISO configuration from target directory INFO Config-Image created in: ibi-config-iso-workdir/auth", "ibi-config-iso-workdir/ ├── auth │ ├── kubeadmin-password │ └── kubeconfig └── imagebasedconfig.iso", "export KUBECONFIG=ibi-config-iso-workdir/auth/kubeconfig", "oc get nodes", "NAME STATUS ROLES AGE VERSION node/sno-cluster-name.host.example.com Ready control-plane,master 5h15m v1.30.3", "networkConfig: interfaces: - name: ens1f0 type: ethernet state: up", "nodeLabels: node-role.kubernetes.io/edge: true environment: production", "mkdir extra-manifests", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: \"example-sriov-node-policy\" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: \"\" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"example-sriov-network\" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: \"on\" trust: \"off\"", "\"*configimage.ExtraManifests\": { \"FileList\": [ { \"Filename\": \"extra-manifests/sriov-extra-manifest.yaml\", \"Data\": \"YXBFDFFD...\" } ] }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/edge_computing/image-based-installation-for-single-node-openshift
16.3. PAM Configuration File Format
16.3. PAM Configuration File Format Each PAM configuration file contains a group of directives formatted as follows: Each of these elements is explained in the subsequent sections. 16.3.1. Module Interface There are four types of PAM module interfaces which correlate to different aspects of the authorization process: auth - This module interface authenticates users. For example, it asks for and verifies the validity of a password. Modules with this interface can also set credentials, such as group memberships or Kerberos tickets. account - This module interface verifies that access is allowed. For example, it may check if a user account is expired or is allowed to log in at a particular time of day. password - This module interface sets and verifies passwords. session - This module interface configures and manages user sessions. Modules with this interface can also perform additional tasks that are needed to allow access, like mounting a user's home directory and making the user's mailbox available. Note An individual module can provide any or all module interfaces. For instance, pam_unix.so provides all four module interfaces. In a PAM configuration file, the module interface is the first field defined. For example, a typical line in a configuration may look like this: This instructs PAM to use the pam_unix.so module's auth interface. 16.3.1.1. Stacking Module Interfaces Module interface directives can be stacked , or placed upon one another, so that multiple modules are used together for one purpose. For this reason, the order in which the modules are listed is very important to the authentication process. Stacking makes it very easy for an administrator to require specific conditions to exist before allowing the user to authenticate. For example, rlogin normally uses five stacked auth modules, as seen in its PAM configuration file: Before someone is allowed to use rlogin , PAM verifies that the /etc/nologin file does not exist, that they are not trying to log in remotely as a root user over a network connection, and that any environmental variables can be loaded. Then, if a successful rhosts authentication is performed, the connection is allowed. If the rhosts authentication fails, then standard password authentication is performed.
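To illustrate how the four interfaces appear together in practice, the following is a hypothetical /etc/pam.d/example-service file, not taken from any shipped package; the modules shown are standard Linux-PAM modules, but the exact stack on your system may differ: auth required pam_unix.so account required pam_unix.so password required pam_cracklib.so retry=3 password required pam_unix.so shadow nullok use_authtok session required pam_unix.so In this sketch, pam_unix.so handles authentication, account checks, and session setup for example-service, while password changes are first checked for strength by pam_cracklib.so and then committed by pam_unix.so.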
[ "<module interface> <control flag> <module name> <module arguments>", "auth required pam_unix.so", "auth required pam_nologin.so auth required pam_securetty.so auth required pam_env.so auth sufficient pam_rhosts_auth.so auth required pam_stack.so service=system-auth" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-pam-format
Chapter 14. Using qemu-img
Chapter 14. Using qemu-img The qemu-img command-line tool is used for formatting, modifying, and verifying various file systems used by KVM. qemu-img options and usages are highlighted in the sections that follow. Warning Never use qemu-img to modify images in use by a running virtual machine or any other process. This may destroy the image. Also, be aware that querying an image that is being modified by another process may encounter inconsistent state. 14.1. Checking the Disk Image To perform a consistency check on a disk image with the file name imgname , run the following command: Note Only a selected group of formats support consistency checks. These include qcow2 , vdi , vhdx , vmdk , and qed .
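For example, to check an existing qcow2 image, you can run a command similar to the following; the image path is only an illustration: qemu-img check -f qcow2 /var/lib/libvirt/images/guest1.qcow2 If the image is consistent, the command reports that no errors were found, together with a summary of the checked clusters. The -f qcow2 option can often be omitted because qemu-img detects the format automatically, but stating it explicitly avoids probing the image.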
[ "qemu-img check [-f format ] imgname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-using_qemu_img
Chapter 6. Bug fixes
Chapter 6. Bug fixes This section describes bugs fixed in Red Hat Satellite 6.16 that have a significant impact on users. 6.1. Web UI Web UI displays short names of hosts consistently Previously, some pages in the Satellite web UI displayed names of hosts as FQDNs even when Display FQDN for hosts was disabled. With this release, the Display FQDN for hosts setting is applied consistently. Jira:SAT-21815 [1] Suggest New link for IPv4 addresses no longer exhausts the Infoblox DHCP pool Satellite suggests new IPv4 addresses in subnets with DHCP when creating or editing hosts in the UI via the Suggest New link or via the /api/subnets/:id/freeip API endpoint. Previously, when using the Infoblox DHCP provider, it would stop suggesting addresses after some time. This was because the suggestions did not expire after a period of time, causing the pool of available IP addresses to be used up. With this release, the issue is now fixed. Jira:SAT-17785 Libvirt VM details with any volume type can be accessed in the web UI Previously, when you manually created a virtual machine with a volume type other than file in Libvirt and then attempted to access the details of the virtual machine in the web UI, Satellite produced an ambiguous error. The loading of volumes in the Libvirt compute resource has been fixed. As a result, you can access details of your virtual machines with volumes of any type in the web UI. Jira:SAT-23211 Host details display all overridden Ansible variables Previously, when you accessed the list of Ansible variables in Host details, Satellite displayed only the first page of Ansible variables and pagination controls were missing. The Satellite web UI has been fixed to provide pagination controls and follow the Entries per page setting. As a result, you can browse all Ansible variables that you have overridden for Ansible roles assigned to your host. Jira:SAT-21887 Ansible jobs use passwords from Advanced fields correctly Previously, the Ansible provider for remote execution ignored the following Advanced fields inputs from the job wizard: SSH password Effective user password With this fix, the Ansible provider uses these values correctly. Jira:SAT-18270 Ansible jobs use the SSH User field from Advanced fields Previously, the Ansible provider for remote execution ignored the SSH User field of the Advanced fields inputs from the job wizard. The foreman_ansible component has been fixed to use the value of SSH User as the Ansible user. As a result, the Ansible provider uses this value correctly. Jira:SAT-23947 6.2. Security and authentication Uploading an OpenSCAP report no longer fails on hosts that run RHEL 9.4 or later with FIPS mode enabled On Satellite hosts that run RHEL 9 in FIPS mode, uploading an OpenSCAP report previously failed with the following error: The problem has been addressed on the RHEL side and no longer affects Satellite hosts that run RHEL 9.4 or later. Note that uploading an OpenSCAP report in the described situation still fails on hosts that run RHEL 9.3 or earlier. See also Chapter 5, Known issues . Jira:SAT-22421 [1] 6.3. Content management Checksum set value has changed The supported --checksum-type set of sha1 and sha256 for repositories has changed to sha256 , sha384 , and sha512 . The checksum sha1 is no longer secure and modern systems such as Red Hat Enterprise Linux 6 and up support file checksums stronger than sha1 . Synchronizing sha1 content is still possible, but content published in repositories by Satellite will now use sha256 , sha384 , or sha512 . 
The default checksum is now sha256 and not sha1 . Jira:SAT-25511 [1] 6.4. Host provisioning and management Host owner set correctly during registration Previously, when you registered a host, Satellite set the host owner to Anonymous Admin . With this release, Satellite sets the host owner to the user who generated the registration command. Jira:SAT-21682 Satellite uses the Default location subscribed hosts setting as the default location for registered hosts Previously, if you did not specify the location to which you were registering the host, Satellite ignored the Default location subscribed hosts setting. With this fix, Satellite uses the value of that setting as the default location. Jira:SAT-23047 CloudInit default no longer generates invalid YAML output Previously, when you generated the CloudInit default provisioning template, the YAML output was incorrectly indented. The incorrect indentation caused the output to be invalid. The subscription_manager_setup snippet has been fixed to produce correct indentation. As a result, the generated YAML output is valid. Jira:SAT-25042 [1] Path to Red Hat Image Builder image renders correctly Previously, when you provisioned a host by using a Red Hat Image Builder image and provided a relative path to the image in the kickstart_liveimg parameter, the Kickstart default provisioning template failed to render. The Katello component has been fixed to render the absolute path of the image correctly. As a result, the Kickstart default template renders successfully and you can provision hosts by using the image. Jira:SAT-23943 Discovery image updated with the latest RHEL 8 Previously, the Foreman Discovery image was based on Red Hat Enterprise Linux 8.6, which was limiting discovery of systems that require a later RHEL version. The Foreman Discovery image has been updated with the Red Hat Enterprise Linux 8.10.0 base. As a result, you can discover newer systems. Jira:SAT-24197 6.5. Backup and restore The certificate .tar file is now collected when using satellite-maintain backup on Capsule Server Previously, the satellite-maintain backup command did not collect the certificate .tar file of the Capsule Server when creating a backup. As a consequence, restoring the archive failed. With this update, restoring the archive completes successfully in this situation. Jira:SAT-23093
[ "Unable to load certs Neither PUB key nor PRIV key" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/release_notes/bug-fixes
12.4.2. Configuring /etc/rndc.conf
12.4.2. Configuring /etc/rndc.conf The key is the most important statement in /etc/rndc.conf . The <key-name> and <key-value> should be exactly the same as their settings in /etc/named.conf . To match the keys specified in the target server's /etc/named.conf , add the following lines to /etc/rndc.conf . This directive sets a global default key. However, the rndc configuration file can also specify different keys for different servers, as in the following example: Warning Make sure that only the root user can read or write to the /etc/rndc.conf file. For more information about the /etc/rndc.conf file, refer to the rndc.conf man page.
[ "key \" <key-name> \" { algorithm hmac-md5; secret \" <key-value> \"; };", "options { default-server localhost; default-key \" <key-name> \"; };", "server localhost { key \" <key-name> \"; };" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-bind-rndc-configuration-rndcconf
Virtual Machine Management Guide
Virtual Machine Management Guide Red Hat Virtualization 4.3 Managing virtual machines in Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document describes the installation, configuration, and administration of virtual machines in Red Hat Virtualization.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/index
Chapter 8. HostFirmwareSettings [metal3.io/v1alpha1]
Chapter 8. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings. status object HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings. 8.1.1. .spec Description HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings. Type object Required settings Property Type Description settings integer-or-string Settings are the desired firmware settings stored as name/value pairs. 8.1.2. .status Description HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings. Type object Required settings Property Type Description conditions array Track whether settings stored in the spec are valid based on the schema conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string Time that the status was last updated schema object FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec settings object (string) Settings are the firmware settings stored as name/value pairs 8.1.3. .status.conditions Description Track whether settings stored in the spec are valid based on the schema Type array 8.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 8.1.5. .status.schema Description FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec Type object Required name namespace Property Type Description name string name is the reference to the schema. namespace string namespace is the namespace of the where the schema is stored. 8.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hostfirmwaresettings GET : list objects of kind HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings DELETE : delete collection of HostFirmwareSettings GET : list objects of kind HostFirmwareSettings POST : create HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} DELETE : delete HostFirmwareSettings GET : read the specified HostFirmwareSettings PATCH : partially update the specified HostFirmwareSettings PUT : replace the specified HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status GET : read status of the specified HostFirmwareSettings PATCH : partially update status of the specified HostFirmwareSettings PUT : replace status of the specified HostFirmwareSettings 8.2.1. /apis/metal3.io/v1alpha1/hostfirmwaresettings HTTP method GET Description list objects of kind HostFirmwareSettings Table 8.1. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty 8.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings HTTP method DELETE Description delete collection of HostFirmwareSettings Table 8.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HostFirmwareSettings Table 8.3. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty HTTP method POST Description create HostFirmwareSettings Table 8.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.5. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 8.6. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 202 - Accepted HostFirmwareSettings schema 401 - Unauthorized Empty 8.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} Table 8.7. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings HTTP method DELETE Description delete HostFirmwareSettings Table 8.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HostFirmwareSettings Table 8.10. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HostFirmwareSettings Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.12. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HostFirmwareSettings Table 8.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.14. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 8.15. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty 8.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status Table 8.16. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings HTTP method GET Description read status of the specified HostFirmwareSettings Table 8.17. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HostFirmwareSettings Table 8.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.19. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HostFirmwareSettings Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.21. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty
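Based on the schema above, a HostFirmwareSettings object is namespaced and carries the desired firmware settings as name/value pairs under spec.settings. The following manifest is an illustrative sketch only; the namespace, resource name, and setting names/values are hypothetical and depend on your deployment and on the vendor firmware schema reported for the host:

    apiVersion: metal3.io/v1alpha1
    kind: HostFirmwareSettings
    metadata:
      name: worker-0                  # typically matches the corresponding BareMetalHost
      namespace: openshift-machine-api
    spec:
      settings:
        BootMode: Uefi                # example vendor-specific setting
        ProcTurboMode: Disabled       # example vendor-specific setting

In a typical bare-metal deployment such an object already exists for each host, so it is usually inspected and edited rather than created, for example with oc get hostfirmwaresettings -n openshift-machine-api and oc edit hostfirmwaresettings worker-0 -n openshift-machine-api.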
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/provisioning_apis/hostfirmwaresettings-metal3-io-v1alpha1
Chapter 4. Important Changes to External Kernel Parameters
Chapter 4. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel distributed with Red Hat Enterprise Linux 7.9. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. 4.1. New kernel parameters bert_disable [ACPI] This parameter disables Boot Error Record Table (BERT) on defective BIOSes. BERT is one of four ACPI Platform Error Interface tables and is used for obtaining logs of hardware errors that occurred during boot when the firmware did not notify the kernel about the error at runtime, for example through a non-maskable interrupt (NMI) or a machine-check exception (MCE). bert_enable [ACPI] RHEL7 only. This parameter enables Boot Error Record Table (BERT). The default state is disabled. page_owner = [KNL] Storage of the information about who allocated each page is disabled by default. This parameter enables storing such information by using the following option: on - enable the feature srbds = [X86,INTEL] This parameter controls the Special Register Buffer Data Sampling (SRBDS) mitigation. Certain CPUs are vulnerable to MDS-like (Microarchitectural Data Sampling) exploits which can leak bits from the random number generator. By default, this issue is mitigated by microcode. However, the microcode fix can cause the RDRAND (read random) and RDSEED instructions to become much slower. Among other effects, this will result in reduced throughput from the /dev/urandom file. The microcode mitigation can be disabled with the following option: off - Disable mitigation and remove the performance impact on RDRAND and RDSEED . 4.2. New /proc/sys/kernel/ parameters hyperv_record_panic_msg This parameter controls whether the kernel panic message (kmsg) data is reported to Hyper-V. The values are: 0 - Do not report the panic kmsg data. 1 - Report the panic kmsg data. This is the default behavior.
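As a brief illustration of how these parameters are used (the commands are standard RHEL 7 tooling and the values are examples): boot-time parameters such as srbds= are appended to the kernel command line, while entries under /proc/sys/kernel/ can be read or set with sysctl.

    # disable the SRBDS microcode mitigation on all installed kernels; takes effect after a reboot
    grubby --update-kernel=ALL --args="srbds=off"

    # check whether panic kmsg data is reported to Hyper-V (1 = report, the default)
    sysctl kernel.hyperv_record_panic_msg
    # equivalent to: cat /proc/sys/kernel/hyperv_record_panic_msg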
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.9_release_notes/kernel_parameters_changes
7.3. Permissions
7.3. Permissions 7.3.1. User Query Permissions CREATE, READ, UPDATE, DELETE (CRUD) permissions can be set for any resource path in a VDB. A resource path can be as specific as the fully qualified name of a column or as general as a top-level model (schema) name. Permissions granted to a particular path apply to it and any resource paths that share the same partial name. For example, granting read to "model" will also grant read to "model.table", "model.table.column", etc. Allowing or denying a particular action is determined by searching for permissions from the most to least specific resource paths. The first permission found with a specific allow or deny will be used. Thus it is possible to set very general permissions at high-level resource path names and to override only as necessary at more specific resource paths. Permission grants are only needed for resources that a role needs access to. Permissions are also only applied to the columns/tables/procedures in the user query - not to every resource accessed transitively through view and procedure definitions. It is therefore important to ensure that permission grants are applied consistently across models that access the same resources. Warning Non-visible models are accessible by user queries. To restrict user access at a model level, at least one data role should be created to enable data role checking. In turn that role can be mapped to any authenticated user and should not grant permissions to models that should be inaccessible. Permissions are not applicable to the SYS and pg_catalog schemas. These metadata reporting schemas are always accessible regardless of the user. The SYSADMIN schema, however, may need permissions as applicable. 7.3.2. Assigning Permissions To process a SELECT statement or a stored procedure execution, the user account requires the following access rights: READ - on the Table(s) being accessed or the procedure being called. READ - on every column referenced. To process an INSERT statement, the user account requires the following access rights: CREATE - on the Table being inserted into. CREATE - on every column being inserted on that Table. To process an UPDATE statement, the user account requires the following access rights: UPDATE - on the Table being updated. UPDATE - on every column being updated on that Table. READ - on every column referenced in the criteria. To process a DELETE statement, the user account requires the following access rights: DELETE - on the Table being deleted. READ - on every column referenced in the criteria. To process an EXEC/CALL statement, the user account requires the following access rights: EXECUTE (or READ) - on the Procedure being executed. To process any function, the user account requires the following access rights: EXECUTE (or READ) - on the Function being called. To process any ALTER or CREATE TRIGGER statement, the user account requires the following access rights: ALTER - on the view or procedure that is affected. INSTEAD OF Triggers (update procedures) are not yet treated as full schema objects and are instead treated as attributes of the view. To process any OBJECTTABLE function, the user account requires the following access rights: LANGUAGE - specifying the language name that is allowed. To process any statement against a JBoss Data Virtualization temporary table, the user account requires the following access rights: allow-create-temporary-tables attribute on any applicable role CREATE - against the target source/schema if defining a FOREIGN temporary table. 7.3.3.
Row and Column-Based Security Conditions Although specified in a similar way to user query CRUD permissions, row-based and column-based permissions may be used together or separately to control at a more granular and consistent level the data returned to users. 7.3.4. Row-Based Security Conditions A permission against a fully qualified table/view/procedure may also specify a condition. Unlike the allow actions defined above, a condition is always applied - not only at the user query level. The condition can be any valid SQL referencing the columns of the table/view/procedure. The condition will act as a row-based filter and as a checked constraint for insert/update operations. 7.3.5. Applying Row-Based Security Conditions A condition is applied conjunctively to UPDATE/DELETE/SELECT WHERE clauses against the affected resource. Those queries will therefore only ever be effective against the subset of rows that pass the condition, i.e. "SELECT * FROM TBL WHERE something AND condition ". The condition will be present regardless of how the table/view is used in the query, whether via a union, join, etc. Inserts and updates against physical tables affected by a condition are further validated so that the insert/change values must pass the condition (evaluate to true) for the insert/update to succeed - this is effectively the same as an SQL constraint. This will happen for all styles of insert/update - insert with query expression, bulk insert/update, etc. Inserts/updates against views are not checked with regards to the constraint. You can disable the insert/update constraint check by setting the condition constraint flag to false. This is typically only needed in circumstances when the condition cannot always be evaluated. However disabling the condition as a constraint drops the condition from consideration when logically evaluating the constraint. Any other condition constraints will still be evaluated. Across multiple applicable roles, if more than one condition applies to the same resource, the conditions will be accumulated disjunctively via OR, i.e. "(condition1) OR (condition2) ...". Therefore granting a permission with the condition "true" will allow users in that role to see all rows of the given resource. 7.3.6. Considerations When Using Conditions Be aware that non-pushdown conditions may adversely impact performance. Avoid using multiple conditions against the same resource as any non-pushdown condition will cause the entire OR statement to not be pushed down. If you need to insert permission conditions, be careful when adding an inline view as this can cause performance problems if your sources do not support them. Note that pushdown of multi-row insert/update operations is inhibited since conditions must be checked for each row. You can manage permission conditions on a per-role basis, but another approach is to add condition permissions to any authenticated role. By doing it this way, the conditions are generalized for anyone using hasRole , user , or other security functions. The advantage of this latter approach is that it provides you with a static row-based policy. As a result, all of your query plans can be shared between your users. How you handle null values is up to you. You can implement ISNULL checks to ensure that null values are allowed when a column is "nullable". 7.3.7. Limitations to Using Conditions Conditions on source tables that act as check constraints must currently not contain correlated subqueries. Conditions may not contain aggregate or windowed functions. 
Tables and procedures referenced via subqueries will still have row-based filters and column masking applied to them. Note Row-based filter conditions are enforced even for materialized view loads. You should ensure that tables consumed to produce materialized views do not have row-based filter conditions on them that could affect the materialized view results. 7.3.8. Column Masking A permission against a fully qualified table/view/procedure column may also specify a mask and optionally a condition. When the query is submitted, the roles are consulted and the relevant mask/condition information is combined to form a searched case expression to mask the values that would have been returned by the access. Unlike the CRUD allow actions defined above, the resulting masking effect is always applied - not only at the user query level. The condition and expression can be any valid SQL referencing the columns of the table/view/procedure. Procedure result set columns may be referenced as proc.col. 7.3.9. Applying Column Masking Column masking is applied only against SELECTs. Column masking is applied logically after the effect of row-based security. However, since both views and source tables may have row-based and column-based security, the actual view level masking may take place on top of source level masking. If the condition is specified along with the mask, then the effective mask expression affects only a subset of the rows: "CASE WHEN condition THEN mask ELSE column". Otherwise the condition is assumed to be TRUE, meaning that the mask applies to all rows. If multiple roles specify a mask against a column, the mask order argument will determine their precedence from highest to lowest as part of a larger searched case expression. For example, a mask with the default order of 0 and a mask with an order of 1 would be combined as "CASE WHEN condition1 THEN mask1 WHEN condition0 THEN mask0 ELSE column". 7.3.10. Column Masking Considerations Non-pushdown masking conditions/expressions may adversely impact performance, since their evaluation may inhibit pushdown of query constructs on top of the affected resource. In some circumstances the insertion of masking may require that the plan be altered with the addition of an inline view, which can result in adverse performance against sources that do not support inline views. In addition to managing masking on a per-role basis with the use of the order value, another approach is to specify masking in a single any authenticated role such that the conditions/expressions are generalized for all users/roles using the hasRole , user , and other such security functions. The advantage of the latter approach is that there is effectively a static masking policy in effect such that all query plans can still be shared between users. 7.3.11. Column Masking Limitations In the event that two masks have the same order value, the order in which they are applied is not well defined. Masks or their conditions may not contain aggregate or windowed functions. Tables and procedures referenced via subqueries will still have row-based filters and column masking applied to them. Note Masking is enforced even for materialized view loads. You should ensure that tables consumed to produce materialized views do not have masking on them that could affect the materialized view results.
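To make the combination of row-based conditions and column masks more concrete, the following SQL is an illustrative sketch; the table, column, and role names are hypothetical. Assume a permission on accounts.customers with the condition owner = user() and a mask on the ssn column. A user query such as:

    SELECT name, ssn FROM accounts.customers WHERE region = 'EU'

is then logically evaluated as if it had been written:

    SELECT name,
           CASE WHEN hasRole('hr') = true THEN ssn ELSE NULL END AS ssn
    FROM accounts.customers
    WHERE region = 'EU' AND (owner = user())

as described above: the condition is added conjunctively to the WHERE clause, and the mask becomes a searched CASE expression over the column.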
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-permissions
2.3. Changing the Data Warehouse Sampling Scale
2.3. Changing the Data Warehouse Sampling Scale Data Warehouse is required in Red Hat Virtualization. It can be installed and configured on the same machine as the Manager, or on a separate machine with access to the Manager. The default data retention settings may not be required for all setups, so engine-setup offers two data sampling scales: Basic and Full . Full uses the default values for the data retention settings listed in Application Settings for the Data Warehouse service (recommended when Data Warehouse is installed on a remote host). Basic reduces the values of DWH_TABLES_KEEP_HOURLY to 720 and DWH_TABLES_KEEP_DAILY to 0 , easing the load on the Manager machine. Use Basic when the Manager and Data Warehouse are installed on the same machine. The sampling scale is configured by engine-setup during installation: --== MISC CONFIGURATION ==-- Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]: You can change the sampling scale later by running engine-setup again with the --reconfigure-dwh-scale option. Changing the Data Warehouse Sampling Scale You can also adjust individual data retention settings if necessary, as documented in Application Settings for the Data Warehouse service .
[ "--== MISC CONFIGURATION ==-- Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]:", "engine-setup --reconfigure-dwh-scale [...] Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]: [...] Perform full vacuum on the oVirt engine history database ovirt_engine_history@localhost? This operation may take a while depending on this setup health and the configuration of the db vacuum process. See https://www.postgresql.org/docs/12/static/sql-vacuum.html (Yes, No) [No]: [...] Setup can backup the existing database. The time and space required for the database backup depend on its size. This process takes time, and in some cases (for instance, when the size is few GBs) may take several hours to complete. If you choose to not back up the database, and Setup later fails for some reason, it will not be able to restore the database and all DWH data will be lost. Would you like to backup the existing database before upgrading it? (Yes, No) [Yes]: [...] Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]: 2 [...] During execution engine service will be stopped (OK, Cancel) [OK]: [...] Please confirm installation settings (OK, Cancel) [OK]:" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/Changing_the_Data_Warehouse_Sampling_Scale
Chapter 6. I/O Scheduling
Chapter 6. I/O Scheduling You can use an input/output (I/O) scheduler to improve disk performance both when Red Hat Enterprise Linux 7 is a virtualization host and when it is a virtualization guest . 6.1. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Host When using Red Hat Enterprise Linux 7 as a host for virtualized guests, the default deadline scheduler is usually ideal. This scheduler performs well on nearly all workloads. However, if maximizing I/O throughput is more important than minimizing I/O latency on the guest workloads, it may be beneficial to use the cfq scheduler instead.
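As a quick sketch of how the active scheduler is inspected and changed on Red Hat Enterprise Linux 7 (the device name vda is an example):

    # show the available schedulers; the active one is shown in brackets
    cat /sys/block/vda/queue/scheduler

    # switch this device to cfq until the next reboot
    echo cfq > /sys/block/vda/queue/scheduler

    # make the change persistent for all devices via the kernel command line
    grubby --update-kernel=ALL --args="elevator=cfq"

Writing to /sys/block/<device>/queue/scheduler affects only that device, whereas the elevator= kernel parameter sets the default scheduler system-wide.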
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-Virtualization_Tuning_Optimization_Guide-Disk_IO_Scheduler
Chapter 32. event
Chapter 32. event This chapter describes the commands under the event command. 32.1. event trigger create Create new trigger. Usage: Table 32.1. Positional Arguments Value Summary name Event trigger name workflow_id Workflow id exchange Event trigger exchange topic Event trigger topic event Event trigger event name workflow_input Workflow input Table 32.2. Optional Arguments Value Summary -h, --help Show this help message and exit --params PARAMS Workflow params Table 32.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 32.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 32.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 32.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 32.2. event trigger delete Delete trigger. Usage: Table 32.7. Positional Arguments Value Summary event_trigger_id Id of event trigger(s). Table 32.8. Optional Arguments Value Summary -h, --help Show this help message and exit 32.3. event trigger list List all event triggers. Usage: Table 32.9. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 32.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 32.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 32.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 32.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 32.4. event trigger show Show specific event trigger. Usage: Table 32.14. Positional Arguments Value Summary event_trigger Event trigger id Table 32.15. Optional Arguments Value Summary -h, --help Show this help message and exit Table 32.16. 
Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 32.17. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 32.18. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 32.19. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack event trigger create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--params PARAMS] name workflow_id exchange topic event [workflow_input]", "openstack event trigger delete [-h] event_trigger_id [event_trigger_id ...]", "openstack event trigger list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack event trigger show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] event_trigger" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/event
Chapter 2. About Kafka
Chapter 2. About Kafka Apache Kafka is an open-source distributed publish-subscribe messaging system for fault-tolerant real-time data feeds. Additional resources For more information about Apache Kafka, see the Apache Kafka website . 2.1. Kafka concepts Knowledge of the key concepts of Kafka is important in understanding how AMQ Streams works. A Kafka cluster comprises multiple brokers. Topics are used to receive and store data in a Kafka cluster. Topics are split by partitions, where the data is written. Partitions are replicated across brokers for fault tolerance. Kafka brokers and topics Broker A broker, sometimes referred to as a server or node, orchestrates the storage and passing of messages. Topic A topic provides a destination for the storage of data. Each topic is split into one or more partitions. Cluster A group of broker instances. Partition The number of topic partitions is defined by a topic partition count . Partition leader A partition leader handles all producer requests for a topic. Partition follower A partition follower replicates the partition data of a partition leader, optionally handling consumer requests. Topics use a replication factor to configure the number of replicas of each partition within the cluster. A topic comprises at least one partition. An in-sync replica has the same number of messages as the leader. Configuration defines how many replicas must be in-sync to be able to produce messages, ensuring that a message is committed only after it has been successfully copied to the replica partition. In this way, if the leader fails, the message is not lost. In the Kafka brokers and topics diagram, we can see each numbered partition has a leader and two followers in replicated topics. 2.2. Producers and consumers Producers and consumers send and receive messages (publish and subscribe) through brokers. Messages comprise an optional key and a value that contains the message data, plus headers and related metadata. The key is used to identify the subject of the message, or a property of the message. Messages are delivered in batches, and batches and records contain headers and metadata that provide details that are useful for filtering and routing by clients, such as the timestamp and offset position for the record. Producers and consumers Producer A producer sends messages to a broker topic to be written to the end offset of a partition. Messages are written to partitions by a producer on a round-robin basis, or to a specific partition based on the message key. Consumer A consumer subscribes to a topic and reads messages according to topic, partition and offset. Consumer group Consumer groups are used to share a typically large data stream generated by multiple producers from a given topic. Consumers are grouped using a group.id , allowing messages to be spread across the members. Consumers within a group do not read data from the same partition, but can receive data from one or more partitions. Offsets Offsets describe the position of messages within a partition. Each message in a given partition has a unique offset, which helps identify the position of a consumer within the partition to track the number of records that have been consumed. Committed offsets are written to an offset commit log. A __consumer_offsets topic stores information on committed offsets, the position of the last and next offset, according to consumer group. Producing and consuming data
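The relationship between partition count and replication factor described above is easiest to see when creating a topic. The following is a generic Apache Kafka command-line sketch with placeholder values; on OpenShift, AMQ Streams users would typically manage topics through KafkaTopic resources and the Topic Operator instead:

    # create a topic with 3 partitions, each replicated to 3 brokers
    bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
        --topic my-topic --partitions 3 --replication-factor 3

    # describe the topic to see the leader and in-sync replicas (ISR) of each partition
    bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic my-topic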
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_streams_on_openshift_overview/kafka-concepts_str
Chapter 2. The Integrated ActiveMQ Artemis Messaging Broker
Chapter 2. The Integrated ActiveMQ Artemis Messaging Broker 2.1. ActiveMQ Artemis Apache ActiveMQ Artemis is an open source project for an asynchronous messaging system. It is high performance, embeddable, clustered and supports multiple protocols. JBoss EAP 7 uses Apache ActiveMQ Artemis as its Jakarta Messaging broker and is configured using the messaging-activemq subsystem. This fully replaces the HornetQ broker but retains protocol compatibility with JBoss EAP 6. The core ActiveMQ Artemis is Jakarta Messaging-agnostic and provides a non-Jakarta Messaging API, which is referred to as the core API . ActiveMQ Artemis also provides a Jakarta Messaging client API which uses a facade layer to implement the Jakarta Messaging semantics on top of the core API. Essentially, Jakarta Messaging interactions are translated into core API operations on the client side using the Jakarta Messaging client API. From there, all operations are sent using the core client API and Apache ActiveMQ Artemis wire format. The server itself only uses the core API. For more details on the core API and its concepts, refer to the ActiveMQ Artemis documentation . 2.2. Apache ActiveMQ Artemis Core API and Jakarta Messaging Destinations Let's quickly discuss how Jakarta Messaging destinations are mapped to Apache ActiveMQ Artemis addresses. Apache ActiveMQ Artemis core is Jakarta Messaging-agnostic. It does not have any concept of a Jakarta Messaging topic. A Jakarta Messaging topic is implemented in core as an address (the topic name) with zero or more queues bound to it. Each queue bound to that address represents a topic subscription. Likewise, a Jakarta Messaging queue is implemented as an address (the Jakarta Messaging queue name) with one single queue bound to it which represents the Jakarta Messaging queue. By convention, all Jakarta Messaging queues map to core queues where the core queue name has the string jms.queue. prepended to it. For example, the Jakarta Messaging queue with the name orders.europe would map to the core queue with the name jms.queue.orders.europe . The address at which the core queue is bound is also given by the core queue name. For Jakarta Messaging topics the address at which the queues that represent the subscriptions are bound is given by prepending the string jms.topic. to the name of the Jakarta Messaging topic. For example, the Jakarta Messaging topic with name news.europe would map to the core address jms.topic.news.europe . In other words if you send a Jakarta Messaging message to a Jakarta Messaging queue with name orders.europe , it will get routed on the server to any core queues bound to the address jms.queue.orders.europe . If you send a Jakarta Messaging message to a Jakarta Messaging topic with name news.europe , it will get routed on the server to any core queues bound to the address jms.topic.news.europe . If you want to configure settings for a Jakarta Messaging queue with the name orders.europe , you need to configure the corresponding core queue jms.queue.orders.europe : <!-- expired messages in JMS Queue "orders.europe" will be sent to the JMS Queue "expiry.europe" --> <address-setting match="jms.queue.orders.europe"> <expiry-address>jms.queue.expiry.europe</expiry-address> ... </address-setting>
[ "<!-- expired messages in JMS Queue \"orders.europe\" will be sent to the JMS Queue \"expiry.europe\" --> <address-setting match=\"jms.queue.orders.europe\"> <expiry-address>jms.queue.expiry.europe</expiry-address> </address-setting>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/the_integrated_activemq_artemis_messaging_broker
Chapter 3. Using Samba as a server
Chapter 3. Using Samba as a server Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise Linux. The SMB protocol is used to access resources on a server, such as file shares and shared printers. Additionally, Samba implements the Distributed Computing Environment Remote Procedure Call (DCE RPC) protocol used by Microsoft Windows. You can run Samba as: An Active Directory (AD) or NT4 domain member A standalone server An NT4 Primary Domain Controller (PDC) or Backup Domain Controller (BDC) Note Red Hat supports the PDC and BDC modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains. Red Hat does not support running Samba as an AD domain controller (DC). Independently of the installation mode, you can optionally share directories and printers. This enables Samba to act as a file and print server. 3.1. Understanding the different Samba services and modes The samba package provides multiple services. Depending on your environment and the scenario you want to configure, you require one or more of these services and configure Samba in different modes. 3.1.1. The Samba services Samba provides the following services: smbd This service provides file sharing and printing services using the SMB protocol. Additionally, the service is responsible for resource locking and for authenticating connecting users. For authenticating domain members, smbd requires winbindd . The smb systemd service starts and stops the smbd daemon. To use the smbd service, install the samba package. nmbd This service provides host name and IP resolution using the NetBIOS over IPv4 protocol. Additionally to the name resolution, the nmbd service enables browsing the SMB network to locate domains, work groups, hosts, file shares, and printers. For this, the service either reports this information directly to the broadcasting client or forwards it to a local or master browser. The nmb systemd service starts and stops the nmbd daemon. Note that modern SMB networks use DNS to resolve clients and IP addresses. For Kerberos a working DNS setup is required. To use the nmbd service, install the samba package. winbindd This service provides an interface for the Name Service Switch (NSS) to use AD or NT4 domain users and groups on the local system. This enables, for example, domain users to authenticate to services hosted on a Samba server or to other local services. The winbind systemd service starts and stops the winbindd daemon. If you set up Samba as a domain member, winbindd must be started before the smbd service. Otherwise, domain users and groups are not available to the local system.. To use the winbindd service, install the samba-winbind package. Important Red Hat only supports running Samba as a server with the winbindd service to provide domain users and groups to the local system. Due to certain limitations, such as missing Windows access control list (ACL) support and NT LAN Manager (NTLM) fallback, SSSD is not supported. 3.1.2. The Samba security services The security parameter in the [global] section in the /etc/samba/smb.conf file manages how Samba authenticates users that are connecting to the service. Depending on the mode you install Samba in, the parameter must be set to different values: On an AD domain member, set security = ads In this mode, Samba uses Kerberos to authenticate AD users. 
For details about setting up Samba as a domain member, see Setting up Samba as an AD domain member server . On a standalone server, set security = user In this mode, Samba uses a local database to authenticate connecting users. For details about setting up Samba as a standalone server, see Setting up Samba as a standalone server . On an NT4 PDC or BDC, set security = user In this mode, Samba authenticates users to a local or LDAP database. On an NT4 domain member, set security = domain In this mode, Samba authenticates connecting users to an NT4 PDC or BDC. You cannot use this mode on AD domain members. For details about setting up Samba as a domain member, see Setting up Samba as an AD domain member server . Additional resources security parameter in the smb.conf(5) man page on your system 3.1.3. Scenarios when Samba services and Samba client utilities load and reload their configuration The following describes when Samba services and utilities load and reload their configuration: Samba services reload their configuration: Automatically every 3 minutes On manual request, for example, when you run the smbcontrol all reload-config command. Samba client utilities read their configuration only when you start them. Note that certain parameters, such as security , require a restart of the smb service to take effect and a reload is not sufficient. Additional resources The How configuration changes are applied section in the smb.conf(5) man page on your system smbd(8) , nmbd(8) , and winbindd(8) man pages on your system 3.1.4. Editing the Samba configuration in a safe way Samba services automatically reload their configuration every 3 minutes. To prevent the services from reloading changes before you have verified the configuration by using the testparm utility, you can edit the Samba configuration in a safe way. Prerequisites Samba is installed. Procedure Create a copy of the /etc/samba/smb.conf file: Edit the copied file and make the required changes. Verify the configuration in the /etc/samba/samba.conf.copy file: If testparm reports errors, fix them and run the command again. Override the /etc/samba/smb.conf file with the new configuration: Wait until the Samba services automatically reload their configuration or manually reload the configuration: Additional resources Scenarios when Samba services and Samba client utilities load and reload their configuration 3.2. Verifying the smb.conf file by using the testparm utility The testparm utility verifies that the Samba configuration in the /etc/samba/smb.conf file is correct. The utility detects invalid parameters and values, but also incorrect settings, such as for ID mapping. If testparm reports no problem, the Samba services will successfully load the /etc/samba/smb.conf file. Note that testparm cannot verify that the configured services will be available or work as expected. Important Red Hat recommends that you verify the /etc/samba/smb.conf file by using testparm after each modification of this file. Prerequisites You installed Samba. The /etc/samba/smb.conf file exists. Procedure Run the testparm utility as the root user: The example output reports a non-existent parameter and an incorrect ID mapping configuration. If testparm reports incorrect parameters, values, or other errors in the configuration, fix the problem and run the utility again. 3.3. Setting up Samba as a standalone server You can set up Samba as a server that is not a member of a domain.
In this installation mode, Samba authenticates users to a local database instead of to a central DC. Additionally, you can enable guest access to allow users to connect to one or multiple services without authentication. 3.3.1. Setting up the server configuration for the standalone server You can set up the server configuration for a Samba standalone server. Procedure Install the samba package: Edit the /etc/samba/smb.conf file and set the following parameters: This configuration defines a standalone server named Server within the Example-WG work group. Additionally, this configuration enables logging on a minimal level ( 1 ) and log files will be stored in the /var/log/samba/ directory. Samba will expand the %m macro in the log file parameter to the NetBIOS name of connecting clients. This enables individual log files for each client. Optional: Configure file or printer sharing. See: Setting up a share that uses POSIX ACLs Setting up a share that uses Windows ACLs Setting up Samba as a Print Server Verify the /etc/samba/smb.conf file: If you set up shares that require authentication, create the user accounts. For details, see Creating and enabling local user accounts . Open the required ports and reload the firewall configuration by using the firewall-cmd utility: Enable and start the smb service: Additional resources smb.conf(5) man page on your system 3.3.2. Creating and enabling local user accounts To enable users to authenticate when they connect to a share, you must create the accounts on the Samba host both in the operating system and in the Samba database. Samba requires the operating system account to validate the Access Control Lists (ACL) on file system objects and the Samba account to authenticate connecting users. If you use the passdb backend = tdbsam default setting, Samba stores user accounts in the /var/lib/samba/private/passdb.tdb database. You can create a local Samba user named example . Prerequisites Samba is installed and configured as a standalone server. Procedure Create the operating system account: This command adds the example account without creating a home directory. If the account is only used to authenticate to Samba, assign the /sbin/nologin command as shell to prevent the account from logging in locally. Set a password to the operating system account to enable it: Samba does not use the password set on the operating system account to authenticate. However, you need to set a password to enable the account. If an account is disabled, Samba denies access if this user connects. Add the user to the Samba database and set a password to the account: Use this password to authenticate when using this account to connect to a Samba share. Enable the Samba account: 3.4. Understanding and configuring Samba ID mapping Windows domains distinguish users and groups by unique Security Identifiers (SID). However, Linux requires unique UIDs and GIDs for each user and group. If you run Samba as a domain member, the winbindd service is responsible for providing information about domain users and groups to the operating system. To enable the winbindd service to provide unique IDs for users and groups to Linux, you must configure ID mapping in the /etc/samba/smb.conf file for: The local database (default domain) The AD or NT4 domain the Samba server is a member of Each trusted domain from which users must be able to access resources on this Samba server Samba provides different ID mapping back ends for specific configurations. 
The most frequently used back ends are: Back end Use case tdb The * default domain only ad AD domains only rid AD and NT4 domains autorid AD, NT4, and the * default domain 3.4.1. Planning Samba ID ranges Regardless of whether you store the Linux UIDs and GIDs in AD or if you configure Samba to generate them, each domain configuration requires a unique ID range that must not overlap with any of the other domains. Warning If you set overlapping ID ranges, Samba fails to work correctly. Example 3.1. Unique ID Ranges The following shows non-overlapping ID mapping ranges for the default ( * ), AD-DOM , and the TRUST-DOM domains. Important You can only assign one range per domain. Therefore, leave enough space between the domains ranges. This enables you to extend the range later if your domain grows. If you later assign a different range to a domain, the ownership of files and directories previously created by these users and groups will be lost. 3.4.2. The * default domain In a domain environment, you add one ID mapping configuration for each of the following: The domain the Samba server is a member of Each trusted domain that should be able to access the Samba server However, for all other objects, Samba assigns IDs from the default domain. This includes: Local Samba users and groups Samba built-in accounts and groups, such as BUILTIN\Administrators Important You must configure the default domain as described to enable Samba to operate correctly. The default domain back end must be writable to permanently store the assigned IDs. For the default domain, you can use one of the following back ends: tdb When you configure the default domain to use the tdb back end, set an ID range that is big enough to include objects that will be created in the future and that are not part of a defined domain ID mapping configuration. For example, set the following in the [global] section in the /etc/samba/smb.conf file: For further details, see Using the TDB ID mapping back end . autorid When you configure the default domain to use the autorid back end, adding additional ID mapping configurations for domains is optional. For example, set the following in the [global] section in the /etc/samba/smb.conf file: For further details, see Using the autorid ID mapping back end . 3.4.3. Using the tdb ID mapping back end The winbindd service uses the writable tdb ID mapping back end by default to store Security Identifier (SID), UID, and GID mapping tables. This includes local users, groups, and built-in principals. Use this back end only for the * default domain. For example: Additional resources The * default domain . 3.4.4. Using the ad ID mapping back end You can configure a Samba AD member to use the ad ID mapping back end. The ad ID mapping back end implements a read-only API to read account and group information from AD. This provides the following benefits: All user and group settings are stored centrally in AD. User and group IDs are consistent on all Samba servers that use this back end. The IDs are not stored in a local database which can corrupt, and therefore file ownerships cannot be lost. Note The ad ID mapping back end does not support Active Directory domains with one-way trusts. If you configure a domain member in an Active Directory with one-way trusts, use instead one of the following ID mapping back ends: tdb , rid , or autorid . 
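As a rough, illustrative sketch only (the domain name and ranges are assumptions, not values from this guide), a domain member that uses tdb for the * default domain and ad for an AD-DOM domain could combine the settings in the [global] section like this:

# Writable back end for the * default domain
idmap config * : backend = tdb
idmap config * : range = 10000-999999
# Read-only ad back end for the AD-DOM domain, using RFC 2307 attributes
idmap config AD-DOM : backend = ad
idmap config AD-DOM : schema_mode = rfc2307
idmap config AD-DOM : range = 2000000-2999999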
The ad back end reads the following attributes from AD: AD attribute name Object type Mapped to sAMAccountName User and group User or group name, depending on the object uidNumber User User ID (UID) gidNumber Group Group ID (GID) loginShell [a] User Path to the shell of the user unixHomeDirectory [a] User Path to the home directory of the user primaryGroupID [b] User Primary group ID [a] Samba only reads this attribute if you set idmap config DOMAIN :unix_nss_info = yes . [b] Samba only reads this attribute if you set idmap config DOMAIN :unix_primary_group = yes . Prerequisites Both users and groups must have unique IDs set in AD, and the IDs must be within the range configured in the /etc/samba/smb.conf file. Objects whose IDs are outside of the range will not be available on the Samba server. Users and groups must have all required attributes set in AD. If required attributes are missing, the user or group will not be available on the Samba server. The required attributes depend on your configuration. You installed Samba. The Samba configuration, except ID mapping, exists in the /etc/samba/smb.conf file. Procedure Edit the [global] section in the /etc/samba/smb.conf file: Add an ID mapping configuration for the default domain ( * ) if it does not exist. For example: Enable the ad ID mapping back end for the AD domain: Set the range of IDs that is assigned to users and groups in the AD domain. For example: Important The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Planning Samba ID ranges . Set that Samba uses the RFC 2307 schema when reading attributes from AD: To enable Samba to read the login shell and the path to the user's home directory from the corresponding AD attributes, set: Alternatively, you can set a uniform domain-wide home directory path and login shell that is applied to all users. For example: By default, Samba uses the primaryGroupID attribute of a user object as the user's primary group on Linux. Alternatively, you can configure Samba to use the value set in the gidNumber attribute instead: Verify the /etc/samba/smb.conf file: Reload the Samba configuration: Additional resources The * default domain smb.conf(5) and idmap_ad(8) man pages on your system VARIABLE SUBSTITUTIONS section in the smb.conf(5) man page on your system 3.4.5. Using the rid ID mapping back end You can configure a Samba domain member to use the rid ID mapping back end. Samba can use the relative identifier (RID) of a Windows SID to generate an ID on Red Hat Enterprise Linux. Note The RID is the last part of a SID. For example, if the SID of a user is S-1-5-21-5421822485-1151247151-421485315-30014 , then 30014 is the corresponding RID. The rid ID mapping back end implements a read-only API to calculate account and group information based on an algorithmic mapping scheme for AD and NT4 domains. When you configure the back end, you must set the lowest and highest RID in the idmap config DOMAIN : range parameter. Samba will not map users or groups with a lower or higher RID than set in this parameter. Important As a read-only back end, rid cannot assign new IDs, such as for BUILTIN groups. Therefore, do not use this back end for the * default domain. Benefits of using the rid back end All domain users and groups that have an RID within the configured range are automatically available on the domain member.
You do not need to manually assign IDs, home directories, and login shells. Drawbacks of using the rid back end All domain users get the same login shell and home directory assigned. However, you can use variables. User and group IDs are only the same across Samba domain members if all use the rid back end with the same ID range settings. You cannot exclude individual users or groups from being available on the domain member. Only users and groups outside of the configured range are excluded. Based on the formulas the winbindd service uses to calculate the IDs, duplicate IDs can occur in multi-domain environments if objects in different domains have the same RID. Prerequisites You installed Samba. The Samba configuration, except ID mapping, exists in the /etc/samba/smb.conf file. Procedure Edit the [global] section in the /etc/samba/smb.conf file: Add an ID mapping configuration for the default domain ( * ) if it does not exist. For example: Enable the rid ID mapping back end for the domain: Set a range that is big enough to include all RIDs that will be assigned in the future. For example: Samba ignores users and groups whose RIDs in this domain are not within the range. Important The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Planning Samba ID ranges . Set a shell and home directory path that will be assigned to all mapped users. For example: Verify the /etc/samba/smb.conf file: Reload the Samba configuration: Additional resources The * default domain VARIABLE SUBSTITUTIONS section in the smb.conf(5) man page on your system Calculation of the local ID from a RID, see the idmap_rid(8) man page on your system 3.4.6. Using the autorid ID mapping back end You can configure a Samba domain member to use the autorid ID mapping back end. The autorid back end works similar to the rid ID mapping back end, but can automatically assign IDs for different domains. This enables you to use the autorid back end in the following situations: Only for the * default domain For the * default domain and additional domains, without the need to create ID mapping configurations for each of the additional domains Only for specific domains Note If you use autorid for the default domain, adding additional ID mapping configuration for domains is optional. Parts of this section were adopted from the idmap config autorid documentation published in the Samba Wiki. License: CC BY 4.0 . Authors and contributors: See the history tab on the Wiki page. Benefits of using the autorid back end All domain users and groups whose calculated UID and GID is within the configured range are automatically available on the domain member. You do not need to manually assign IDs, home directories, and login shells. No duplicate IDs, even if multiple objects in a multi-domain environment have the same RID. Drawbacks User and group IDs are not the same across Samba domain members. All domain users get the same login shell and home directory assigned. However, you can use variables. You cannot exclude individual users or groups from being available on the domain member. Only users and groups whose calculated UID or GID is outside of the configured range are excluded. Prerequisites You installed Samba. The Samba configuration, except ID mapping, exists in the /etc/samba/smb.conf file. 
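For orientation before the step-by-step procedure, the following sketch shows what a complete autorid configuration in the [global] section might look like; all values are examples and must be adapted to your environment:

# autorid serves the * default domain and, implicitly, any further domains
idmap config * : backend = autorid
# the range size must divide the range evenly
idmap config * : range = 10000-2009999
idmap config * : rangesize = 200000
# shell and home directory assigned to all mapped users
template shell = /bin/bash
template homedir = /home/%U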
Procedure Edit the [global] section in the /etc/samba/smb.conf file: Enable the autorid ID mapping back end for the * default domain: Set a range that is big enough to assign IDs for all existing and future objects. For example: Samba ignores users and groups whose calculated IDs in this domain are not within the range. Warning After you set the range and Samba starts using it, you can only increase the upper limit of the range. Any other change to the range can result in new ID assignments, and thus in losing file ownerships. Optional: Set a range size. For example: Samba assigns this number of continuous IDs for each domain's object until all IDs from the range set in the idmap config * : range parameter are taken. Note If you set a rangesize, you need to adapt the range accordingly. The range needs to be a multiple of the rangesize. Set a shell and home directory path that will be assigned to all mapped users. For example: Optional: Add additional ID mapping configuration for domains. If no configuration for an individual domain is available, Samba calculates the ID using the autorid back end settings in the previously configured * default domain. Important The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Planning Samba ID ranges . Verify the /etc/samba/smb.conf file: Reload the Samba configuration: Additional resources THE MAPPING FORMULAS section in the idmap_autorid(8) man page on your system rangesize parameter description in the idmap_autorid(8) man page on your system VARIABLE SUBSTITUTIONS section in the smb.conf(5) man page on your system 3.5. Setting up Samba as an AD domain member server If you are running an AD or NT4 domain, use Samba to add your Red Hat Enterprise Linux server as a member to the domain to gain the following: Access domain resources on other domain members Authenticate domain users to local services, such as sshd Share directories and printers hosted on the server to act as a file and print server 3.5.1. Joining a RHEL system to an AD domain Samba Winbind is an alternative to the System Security Services Daemon (SSSD) for connecting a Red Hat Enterprise Linux (RHEL) system with Active Directory (AD). You can join a RHEL system to an AD domain by using realmd to configure Samba Winbind. Procedure If your AD requires the deprecated RC4 encryption type for Kerberos authentication, enable support for these ciphers in RHEL: Install the following packages: To share directories or printers on the domain member, install the samba package: Backup the existing /etc/samba/smb.conf Samba configuration file: Join the domain. For example, to join a domain named ad.example.com : Using the command, the realm utility automatically: Creates a /etc/samba/smb.conf file for a membership in the ad.example.com domain Adds the winbind module for user and group lookups to the /etc/nsswitch.conf file Updates the Pluggable Authentication Module (PAM) configuration files in the /etc/pam.d/ directory Starts the winbind service and enables the service to start when the system boots Optional: Set an alternative ID mapping back end or customized ID mapping settings in the /etc/samba/smb.conf file. For details, see Understanding and configuring Samba ID mapping . Verify that the winbind service is running: Important To enable Samba to query domain user and group information, the winbind service must be running before you start smb . 
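One way to perform the check mentioned above, assuming the default unit name of the winbind service on RHEL, is:

systemctl status winbind

The service must report active (running) before you start smb.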
If you installed the samba package to share directories and printers, enable and start the smb service: If you are authenticating local logins to Active Directory, enable the winbind_krb5_localauth plug-in. See Using the local authorization plug-in for MIT Kerberos . Verification Display an AD user's details, such as the AD administrator account in the AD domain: Query the members of the domain users group in the AD domain: Optional: Verify that you can use domain users and groups when you set permissions on files and directories. For example, to set the owner of the /srv/samba/example.txt file to AD\administrator and the group to AD\Domain Users : Verify that Kerberos authentication works as expected: On the AD domain member, obtain a ticket for the [email protected] principal: Display the cached Kerberos ticket: Display the available domains: Additional resources If you do not want to use the deprecated RC4 ciphers, you can enable the AES encryption type in AD. See Enabling the AES encryption type in Active Directory using a GPO realm(8) man page on your system 3.5.2. Using the local authorization plug-in for MIT Kerberos The winbind service provides Active Directory users to the domain member. In certain situations, administrators want to enable domain users to authenticate to local services, such as an SSH server, which are running on the domain member. When using Kerberos to authenticate the domain users, enable the winbind_krb5_localauth plug-in to correctly map Kerberos principals to Active Directory accounts through the winbind service. For example, if the sAMAccountName attribute of an Active Directory user is set to EXAMPLE and the user tries to log in with the user name in lowercase, Kerberos returns the user name in upper case. As a consequence, the entries do not match and authentication fails. Using the winbind_krb5_localauth plug-in, the account names are mapped correctly. Note that this only applies to GSSAPI authentication and not for getting the initial ticket granting ticket (TGT). Prerequisites Samba is configured as a member of an Active Directory. Red Hat Enterprise Linux authenticates login attempts against Active Directory. The winbind service is running. Procedure Edit the /etc/krb5.conf file and add the following section: Additional resources winbind_krb5_localauth(8) man page on your system 3.6. Setting up Samba on an IdM domain member You can set up Samba on a host that is joined to a Red Hat Identity Management (IdM) domain. Users from IdM and also, if available, from trusted Active Directory (AD) domains, can access shares and printer services provided by Samba. Important Using Samba on an IdM domain member is an unsupported Technology Preview feature and contains certain limitations. For example, IdM trust controllers do not support the Active Directory Global Catalog service, and they do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access Samba shares and printers hosted on IdM clients when logged in to other IdM clients; AD users logged into a Windows machine cannot access Samba shares hosted on an IdM domain member. Customers deploying Samba on IdM domain members are encouraged to provide feedback to Red Hat. If users from AD domains need to access shares and printer services provided by Samba, ensure the AES encryption type is enabled in AD. For more information, see Enabling the AES encryption type in Active Directory using a GPO .
Prerequisites The host is joined as a client to the IdM domain. Both the IdM servers and the client must run on RHEL 8.1 or later. 3.6.1. Preparing the IdM domain for installing Samba on domain members Before you can set up Samba on an IdM client, you must prepare the IdM domain using the ipa-adtrust-install utility on an IdM server. Note Any system where you run the ipa-adtrust-install command automatically becomes an AD trust controller. However, you must run ipa-adtrust-install only once on an IdM server. Prerequisites IdM server is installed. You have root privileges to install packages and restart IdM services. Procedure Install the required packages: Authenticate as the IdM administrative user: Run the ipa-adtrust-install utility: The DNS service records are created automatically if IdM was installed with an integrated DNS server. If you installed IdM without an integrated DNS server, ipa-adtrust-install prints a list of service records that you must manually add to DNS before you can continue. The script prompts you that the /etc/samba/smb.conf already exists and will be rewritten: The script prompts you to configure the slapi-nis plug-in, a compatibility plug-in that allows older Linux clients to work with trusted users: When prompted, enter the NetBIOS name for the IdM domain or press Enter to accept the name suggested: You are prompted to run the SID generation task to create a SID for any existing users: This is a resource-intensive task, so if you have a high number of users, you can run this at another time. Optional: By default, the Dynamic RPC port range is defined as 49152-65535 for Windows Server 2008 and later. If you need to define a different Dynamic RPC port range for your environment, configure Samba to use different ports and open those ports in your firewall settings. The following example sets the port range to 55000-65000 . Restart the ipa service: Use the smbclient utility to verify that Samba responds to Kerberos authentication from the IdM side: 3.6.2. Installing and configuring a Samba server on an IdM client You can install and configure Samba on a client enrolled in an IdM domain. Prerequisites Both the IdM servers and the client must run on RHEL 8.1 or later. The IdM domain is prepared as described in Preparing the IdM domain for installing Samba on domain members . If IdM has a trust configured with AD, enable the AES encryption type for Kerberos. For example, use a group policy object (GPO) to enable the AES encryption type. For details, see Enabling AES encryption in Active Directory using a GPO . Procedure Install the ipa-client-samba package: Use the ipa-client-samba utility to prepare the client and create an initial Samba configuration: By default, ipa-client-samba automatically adds the [homes] section to the /etc/samba/smb.conf file that dynamically shares a user's home directory when the user connects. If users do not have home directories on this server, or if you do not want to share them, remove the following lines from /etc/samba/smb.conf : Share directories and printers. 
For details, see: Setting up a Samba file share that uses POSIX ACLs Setting up a share that uses Windows ACLs Setting up Samba as a print server Open the ports required for a Samba client in the local firewall: Enable and start the smb and winbind services: Verification Run the following verification step on a different IdM domain member that has the samba-client package installed: List the shares on the Samba server using Kerberos authentication: Additional resources ipa-client-samba(1) man page on your system 3.6.3. Manually adding an ID mapping configuration if IdM trusts a new domain Samba requires an ID mapping configuration for each domain from which users access resources. On an existing Samba server running on an IdM client, you must manually add an ID mapping configuration after the administrator added a new trust to an Active Directory (AD) domain. Prerequisites You configured Samba on an IdM client. Afterward, a new trust was added to IdM. The DES and RC4 encryption types for Kerberos must be disabled in the trusted AD domain. For security reasons, RHEL 8 does not support these weak encryption types. Procedure Authenticate using the host's keytab: Use the ipa idrange-find command to display both the base ID and the ID range size of the new domain. For example, the following command displays the values for the ad.example.com domain: You need the values from the ipabaseid and ipaidrangesize attributes in the steps. To calculate the highest usable ID, use the following formula: With the values from the step, the highest usable ID for the ad.example.com domain is 1918599999 (1918400000 + 200000 - 1). Edit the /etc/samba/smb.conf file, and add the ID mapping configuration for the domain to the [global] section: Specify the value from ipabaseid attribute as the lowest and the computed value from the step as the highest value of the range. Restart the smb and winbind services: Verification List the shares on the Samba server using Kerberos authentication: 3.6.4. Additional resources Installing an Identity Management client 3.7. Setting up a Samba file share that uses POSIX ACLs As a Linux service, Samba supports shares with POSIX ACLs. They enable you to manage permissions locally on the Samba server using utilities, such as chmod . If the share is stored on a file system that supports extended attributes, you can define ACLs with multiple users and groups. Note If you need to use fine-granular Windows ACLs instead, see Setting up a share that uses Windows ACLs . Parts of this section were adopted from the Setting up a Share Using POSIX ACLs documentation published in the Samba Wiki. License: CC BY 4.0 . Authors and contributors: See the history tab on the Wiki page. 3.7.1. Adding a share that uses POSIX ACLs You can create a share named example that provides the content of the /srv/samba/example/ directory and uses POSIX ACLs. Prerequisites Samba has been set up in one of the following modes: Standalone server Domain member Procedure Create the folder if it does not exist. For example: If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Set file system ACLs on the directory. For details, see: Setting standard ACLs on a Samba Share that uses POSIX ACLs Setting extended ACLs on a share that uses POSIX ACLs . Add the example share to the /etc/samba/smb.conf file. For example, to add the share write-enabled: Note Regardless of the file system ACLs; if you do not set read only = no , Samba shares the directory in read-only mode. 
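A share definition matching this step could look as follows; the share name and path are the ones used throughout this example, so adapt them as needed:

[example]
        path = /srv/samba/example/
        read only = no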
Verify the /etc/samba/smb.conf file: Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: 3.7.2. Setting standard Linux ACLs on a Samba share that uses POSIX ACLs The standard ACLs on Linux support setting permissions for one owner, one group, and for all other undefined users. You can use the chown , chgrp , and chmod utility to update the ACLs. If you require precise control, then you use the more complex POSIX ACLs, see Setting extended ACLs on a Samba share that uses POSIX ACLs . The following procedure sets the owner of the /srv/samba/example/ directory to the root user, grants read and write permissions to the Domain Users group, and denies access to all other users. Prerequisites The Samba share on which you want to set the ACLs exists. Procedure Note Enabling the set-group-ID (SGID) bit on a directory automatically sets the default group for all new files and subdirectories to that of the directory group, instead of the usual behavior of setting it to the primary group of the user who created the new directory entry. Additional resources chown(1) and chmod(1) man pages on your system 3.7.3. Setting extended ACLs on a Samba share that uses POSIX ACLs If the file system the shared directory is stored on supports extended ACLs, you can use them to set complex permissions. Extended ACLs can contain permissions for multiple users and groups. Extended POSIX ACLs enable you to configure complex ACLs with multiple users and groups. However, you can only set the following permissions: No access Read access Write access Full control If you require the fine-granular Windows permissions, such as Create folder / append data , configure the share to use Windows ACLs. See Setting up a share that uses Windows ACLs . The following procedure shows how to enable extended ACLs on a share. Additionally, it contains an example about setting extended ACLs. Prerequisites The Samba share on which you want to set the ACLs exists. Procedure Enable the following parameter in the share's section in the /etc/samba/smb.conf file to enable ACL inheritance of extended ACLs: For details, see the parameter description in the smb.conf(5 ) man page. Restart the smb service: Set the ACLs on the directory. For example: Example 3.2. Setting Extended ACLs The following procedure sets read, write, and execute permissions for the Domain Admins group, read, and execute permissions for the Domain Users group, and deny access to everyone else on the /srv/samba/example/ directory: Disable auto-granting permissions to the primary group of user accounts: The primary group of the directory is additionally mapped to the dynamic CREATOR GROUP principal. When you use extended POSIX ACLs on a Samba share, this principal is automatically added and you cannot remove it. Set the permissions on the directory: Grant read, write, and execute permissions to the Domain Admins group: Grant read and execute permissions to the Domain Users group: Set permissions for the other ACL entry to deny access to users that do not match the other ACL entries: These settings apply only to this directory. In Windows, these ACLs are mapped to the This folder only mode. To enable the permissions set in the step to be inherited by new file system objects created in this directory: With these settings, the This folder only mode for the principals is now set to This folder, subfolders, and files . 
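The commands behind such a configuration could look like the following sketch; DOMAIN is a placeholder for your NetBIOS domain name, and the group names assume that winbind resolves them on this host:

# Remove the automatic grant for the primary group of user accounts
setfacl -m group::--- /srv/samba/example/
# Grant access to the domain groups
setfacl -m group:"DOMAIN\Domain Admins":rwx /srv/samba/example/
setfacl -m group:"DOMAIN\Domain Users":r-x /srv/samba/example/
# Deny access to users that do not match any other entry
setfacl -m other::--- /srv/samba/example/
# Add matching default entries so new files and directories inherit the ACLs
setfacl -m default:group::--- /srv/samba/example/
setfacl -m default:group:"DOMAIN\Domain Admins":rwx /srv/samba/example/
setfacl -m default:group:"DOMAIN\Domain Users":r-x /srv/samba/example/
setfacl -m default:other::--- /srv/samba/example/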
Samba maps the permissions set in the procedure to the following Windows ACLs: Principal Access Applies to Domain \Domain Admins Full control This folder, subfolders, and files Domain \Domain Users Read & execute This folder, subfolders, and files Everyone [a] None This folder, subfolders, and files owner ( Unix User\owner ) [b] Full control This folder only primary_group ( Unix User\primary_group ) [c] None This folder only CREATOR OWNER [d] [e] Full control Subfolders and files only CREATOR GROUP [e] [f] None Subfolders and files only [a] Samba maps the permissions for this principal from the other ACL entry. [b] Samba maps the owner of the directory to this entry. [c] Samba maps the primary group of the directory to this entry. [d] On new file system objects, the creator inherits automatically the permissions of this principal. [e] Configuring or removing these principals from the ACLs not supported on shares that use POSIX ACLs. [f] On new file system objects, the creator's primary group inherits automatically the permissions of this principal. 3.8. Setting permissions on a share that uses POSIX ACLs Optionally, to limit or grant access to a Samba share, you can set certain parameters in the share's section in the /etc/samba/smb.conf file. Note Share-based permissions manage if a user, group, or host is able to access a share. These settings do not affect file system ACLs. Use share-based settings to restrict access to shares, for example, to deny access from specific hosts. Prerequisites A share with POSIX ACLs has been set up. 3.8.1. Configuring user and group-based share access User and group-based access control enables you to grant or deny access to a share for certain users and groups. Prerequisites The Samba share on which you want to set user or group-based access exists. Procedure For example, to enable all members of the Domain Users group to access a share while access is denied for the user account, add the following parameters to the share's configuration: The invalid users parameter has a higher priority than the valid users parameter. For example, if the user account is a member of the Domain Users group, access is denied to this account when you use the example. Reload the Samba configuration: Additional resources smb.conf(5) man page on your system 3.8.2. Configuring host-based share access Host-based access control enables you to grant or deny access to a share based on client's host names, IP addresses, or IP range. The following procedure explains how to enable the 127.0.0.1 IP address, the 192.0.2.0/24 IP range, and the client1.example.com host to access a share, and additionally deny access for the client2.example.com host: Prerequisites The Samba share on which you want to set host-based access exists. Procedure Add the following parameters to the configuration of the share in the /etc/samba/smb.conf file: The hosts deny parameter has a higher priority than hosts allow . For example, if client1.example.com resolves to an IP address that is listed in the hosts allow parameter, access for this host is denied. Reload the Samba configuration: Additional resources smb.conf(5) man page on your system 3.9. Setting up a share that uses Windows ACLs Samba supports setting Windows ACLs on shares and file system object. This enables you to: Use the fine-granular Windows ACLs Manage share permissions and file system ACLs using Windows Alternatively, you can configure a share to use POSIX ACLs. For details, see Setting up a Samba file share that uses POSIX ACLs . 
Parts of this section were adopted from the Setting up a Share Using Windows ACLs documentation published in the Samba Wiki. License: CC BY 4.0 . Authors and contributors: See the history tab on the Wiki page. 3.9.1. Granting the SeDiskOperatorPrivilege privilege Only users and groups having the SeDiskOperatorPrivilege privilege granted can configure permissions on shares that use Windows ACLs. Procedure For example, to grant the SeDiskOperatorPrivilege privilege to the DOMAIN \Domain Admins group: Note In a domain environment, grant SeDiskOperatorPrivilege to a domain group. This enables you to centrally manage the privilege by updating a user's group membership. To list all users and groups having SeDiskOperatorPrivilege granted: 3.9.2. Enabling Windows ACL support To configure shares that support Windows ACLs, you must enable this feature in Samba. Prerequisites A user share is configured on the Samba server. Procedure To enable it globally for all shares, add the following settings to the [global] section of the /etc/samba/smb.conf file: Alternatively, you can enable Windows ACL support for individual shares by adding the same parameters to a share's section instead. Restart the smb service: 3.9.3. Adding a share that uses Windows ACLs You can create a share named example that shares the content of the /srv/samba/example/ directory and uses Windows ACLs. Procedure Create the folder if it does not exist. For example: If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Add the example share to the /etc/samba/smb.conf file. For example, to add the share write-enabled: Note Regardless of the file system ACLs, if you do not set read only = no , Samba shares the directory in read-only mode. If you have not enabled Windows ACL support in the [global] section for all shares, add the following parameters to the [example] section to enable this feature for this share: Verify the /etc/samba/smb.conf file: Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: 3.9.4. Managing share permissions and file system ACLs of a share that uses Windows ACLs To manage share permissions and file system ACLs on a Samba share that uses Windows ACLs, use a Windows application, such as Computer Management . For details, see the Windows documentation. Alternatively, use the smbcacls utility to manage ACLs. Note To modify the file system permissions from Windows, you must use an account that has the SeDiskOperatorPrivilege privilege granted. Additional resources Section 3.10, "Managing ACLs on an SMB share using smbcacls" Section 3.9.1, "Granting the SeDiskOperatorPrivilege privilege" 3.10. Managing ACLs on an SMB share using smbcacls The smbcacls utility can list, set, and delete ACLs of files and directories stored on an SMB share. You can use smbcacls to manage file system ACLs: On a local or remote Samba server that uses advanced Windows ACLs or POSIX ACLs On Red Hat Enterprise Linux to remotely manage ACLs on a share hosted on Windows 3.10.1. Access control entries Each ACL of a file system object contains Access Control Entries (ACEs) in the following format: Example 3.3. Access control entries If the AD\Domain Users group has Modify permissions that apply to This folder, subfolders, and files on Windows, the ACL contains the following ACE: An ACE contains the following parts: Security principal The security principal is the user, group, or SID the permissions in the ACL are applied to.
Access right Defines if access to an object is granted or denied. The value can be ALLOWED or DENIED . Inheritance information The following values exist: Table 3.1. Inheritance settings Value Description Maps to OI Object Inherit This folder and files CI Container Inherit This folder and subfolders IO Inherit Only The ACE does not apply to the current file or directory ID Inherited The ACE was inherited from the parent directory Additionally, the values can be combined as follows: Table 3.2. Inheritance settings combinations Value combinations Maps to the Windows Applies to setting OI|CI This folder, subfolders, and files OI|CI|IO Subfolders and files only CI|IO Subfolders only OI|IO Files only Permissions This value can be either a hex value that represents one or more Windows permissions or an smbcacls alias: A hex value that represents one or more Windows permissions. The following table displays the advanced Windows permissions and their corresponding value in hex format: Table 3.3. Windows permissions and their corresponding smbcacls value in hex format Windows permissions Hex values Full control 0x001F01FF Traverse folder / execute file 0x00100020 List folder / read data 0x00100001 Read attributes 0x00100080 Read extended attributes 0x00100008 Create files / write data 0x00100002 Create folders / append data 0x00100004 Write attributes 0x00100100 Write extended attributes 0x00100010 Delete subfolders and files 0x00100040 Delete 0x00110000 Read permissions 0x00120000 Change permissions 0x00140000 Take ownership 0x00180000 Multiple permissions can be combined as a single hex value using the bit-wise OR operation. For details, see ACE mask calculation . An smbcacls alias. The following table displays the available aliases: Table 3.4. Existing smbcacls aliases and their corresponding Windows permission smbcacls alias Maps to Windows permission R Read READ Read & execute W Special: Create files / write data Create folders / append data Write attributes Write extended attributes Read permissions D Delete P Change permissions O Take ownership X Traverse / execute CHANGE Modify FULL Full control Note You can combine single-letter aliases when you set permissions. For example, you can set RD to apply the Windows permission Read and Delete . However, you can neither combine multiple non-single-letter aliases nor combine aliases and hex values. 3.10.2. Displaying ACLs using smbcacls To display ACLs on an SMB share, use the smbcacls utility. If you run smbcacls without any operation parameter, such as --add , the utility displays the ACLs of a file system object. Procedure For example, to list the ACLs of the root directory of the //server/example share: The output of the command displays: REVISION : The internal Windows NT ACL revision of the security descriptor CONTROL : Security descriptor control OWNER : Name or SID of the security descriptor's owner GROUP : Name or SID of the security descriptor's group ACL entries. For details, see Access control entries . 3.10.3. ACE mask calculation In most situations, when you add or update an ACE, you use the smbcacls aliases listed in Existing smbcacls aliases and their corresponding Windows permission . However, if you want to set advanced Windows permissions as listed in Windows permissions and their corresponding smbcacls value in hex format , you must use the bit-wise OR operation to calculate the correct value. You can use the following shell command to calculate the value: Example 3.4. 
Calculating an ACE Mask You want to set the following permissions: Traverse folder / execute file (0x00100020) List folder / read data (0x00100001) Read attributes (0x00100080) To calculate the hex value for the permissions, enter: Use the returned value when you set or update an ACE. 3.10.4. Adding, updating, and removing an ACL using smbcacls Depending on the parameter you pass to the smbcacls utility, you can add, update, and remove ACLs from a file or directory. Adding an ACL To add an ACL to the root of the //server/example share that grants CHANGE permissions for This folder, subfolders, and files to the AD\Domain Users group: Updating an ACL Updating an ACL is similar to adding a new ACL. You update an ACL by overriding the ACL using the --modify parameter with an existing security principal. If smbcacls finds the security principal in the ACL list, the utility updates the permissions. Otherwise the command fails with an error: For example, to update the permissions of the AD\Domain Users group and set them to READ for This folder, subfolders, and files : Deleting an ACL To delete an ACL, pass the --delete parameter with the exact ACL to the smbcacls utility. For example: 3.11. Enabling users to share directories on a Samba server On a Samba server, you can configure that users can share directories without root permissions. 3.11.1. Enabling the user shares feature Before users can share directories, the administrator must enable user shares in Samba. For example, to enable only members of the local example group to create user shares. Procedure Create the local example group, if it does not exist: Prepare the directory for Samba to store the user share definitions and set its permissions properly. For example: Create the directory: Set write permissions for the example group: Set the sticky bit to prevent users to rename or delete files stored by other users in this directory. Edit the /etc/samba/smb.conf file and add the following to the [global] section: Set the path to the directory you configured to store the user share definitions. For example: Set how many user shares Samba allows to be created on this server. For example: If you use the default of 0 for the usershare max shares parameter, user shares are disabled. Optional: Set a list of absolute directory paths. For example, to configure that Samba only allows to share subdirectories of the /data and /srv directory to be shared, set: For a list of further user share-related parameters you can set, see the USERSHARES section in the smb.conf(5) man page on your system. Verify the /etc/samba/smb.conf file: Reload the Samba configuration: Users are now able to create user shares. 3.11.2. Adding a user share After you enabled the user share feature in Samba, users can share directories on the Samba server without root permissions by running the net usershare add command. Synopsis of the net usershare add command: net usershare add share_name path [[ comment ] | [ ACLs ]] [ guest_ok=y|n ] Important If you set ACLs when you create a user share, you must specify the comment parameter prior to the ACLs. To set an empty comment, use an empty string in double quotes. Note that users can only enable guest access on a user share, if the administrator set usershare allow guests = yes in the [global] section in the /etc/samba/smb.conf file. Example 3.5. Adding a user share A user wants to share the /srv/samba/ directory on a Samba server. The share should be named example , have no comment set, and should be accessible by guest users. 
Additionally, the share permissions should be set to full access for the AD\Domain Users group and read permissions for other users. To add this share, run as the user: 3.11.3. Updating settings of a user share To update settings of a user share, override the share by using the net usershare add command with the same share name and the new settings. See Adding a user share . 3.11.4. Displaying information about existing user shares Users can enter the net usershare info command on a Samba server to display user shares and their settings. Prerequisites A user share is configured on the Samba server. Procedure To display all user shares created by any user: To list only shares created by the user who runs the command, omit the -l parameter. To display only the information about specific shares, pass the share name or wildcards to the command. For example, to display the information about shares whose name starts with share_ : 3.11.5. Listing user shares If you want to list only the available user shares without their settings on a Samba server, use the net usershare list command. Prerequisites A user share is configured on the Samba server. Procedure To list the shares created by any user: To list only shares created by the user who runs the command, omit the -l parameter. To list only specific shares, pass the share name or wildcards to the command. For example, to list only shares whose name starts with share_ : 3.11.6. Deleting a user share To delete a user share, use the net usershare delete command as the user who created the share or as the root user. Prerequisites A user share is configured on the Samba server. Procedure 3.12. Configuring a share to allow access without authentication In certain situations, you want to share a directory to which users can connect without authentication. To configure this, enable guest access on a share. Warning Shares that do not require authentication can be a security risk. 3.12.1. Enabling guest access to a share If guest access is enabled on a share, Samba maps guest connections to the operating system account set in the guest account parameter. Guest users can access files on this share if at least one of the following conditions is satisfied: The account is listed in file system ACLs The POSIX permissions for other users allow it Example 3.6. Guest share permissions If you configured Samba to map the guest account to nobody , which is the default, the ACLs in the following example: Allow guest users to read file1.txt Allow guest users to read and modify file2.txt Prevent guest users from reading or modifying file3.txt Procedure Edit the /etc/samba/smb.conf file: If this is the first guest share you set up on this server: Set map to guest = Bad User in the [global] section: With this setting, Samba rejects login attempts that use an incorrect password unless the user name does not exist. If the specified user name does not exist and guest access is enabled on a share, Samba treats the connection as a guest login. By default, Samba maps the guest account to the nobody account on Red Hat Enterprise Linux. Alternatively, you can set a different account. For example: The account set in this parameter must exist locally on the Samba server. For security reasons, Red Hat recommends using an account that does not have a valid shell assigned. Add the guest ok = yes setting to the [example] share section: Verify the /etc/samba/smb.conf file: Reload the Samba configuration: 3.13.
Configuring Samba for macOS clients The fruit virtual file system (VFS) Samba module provides enhanced compatibility with Apple server message block (SMB) clients. 3.13.1. Optimizing the Samba configuration for providing file shares for macOS clients The fruit module provides enhanced compatibility of Samba with macOS clients. You can configure the module for all shares hosted on a Samba server to optimize the file shares for macOS clients. Note Enable the fruit module globally. Clients using macOS negotiate the server message block version 2 (SMB2) Apple (AAPL) protocol extensions when the client establishes the first connection to the server. If the client first connects to a share without AAPL extensions enabled, the client does not use the extensions for any share of the server. Prerequisites Samba is configured as a file server. Procedure Edit the /etc/samba/smb.conf file, and enable the fruit and streams_xattr VFS modules in the [global] section: Important You must enable the fruit module before enabling streams_xattr . The fruit module uses alternate data streams (ADS). For this reason, you must also enable the streams_xattr module. Optional: To provide macOS Time Machine support on a share, add the following setting to the share configuration in the /etc/samba/smb.conf file: Verify the /etc/samba/smb.conf file: Reload the Samba configuration: Additional resources vfs_fruit(8) man page on your system Configuring file shares: Setting up a Samba file share that uses POSIX ACLs Setting up a share that uses Windows ACLs . 3.14. Using the smbclient utility to access an SMB share The smbclient utility enables you to access file shares on an SMB server, similarly to a command-line FTP client. You can use it, for example, to upload and download files to and from a share. Prerequisites The samba-client package is installed. 3.14.1. How the smbclient interactive mode works For example, to authenticate to the example share hosted on server using the DOMAIN\user account: After smbclient connected successfully to the share, the utility enters the interactive mode and shows the following prompt: To display all available commands in the interactive shell, enter: To display the help for a specific command, enter: Additional resources smbclient(1) man page on your system 3.14.2. Using smbclient in interactive mode If you use smbclient without the -c parameter, the utility enters the interactive mode. The following procedure shows how to connect to an SMB share and download a file from a subdirectory. Procedure Connect to the share: Change into the /example/ directory: List the files in the directory: Download the example.txt file: Disconnect from the share: 3.14.3. Using smbclient in scripting mode If you pass the -c parameter to smbclient , you can automatically execute the commands on the remote SMB share. This enables you to use smbclient in scripts. The following procedure shows how to connect to an SMB share and download a file from a subdirectory. Procedure Use the following command to connect to the share, change into the example directory, download the example.txt file: 3.15. Setting up Samba as a print server If you set up Samba as a print server, clients in your network can use Samba to print. Additionally, Windows clients can, if configured, download the driver from the Samba server. Parts of this section were adopted from the Setting up Samba as a Print Server documentation published in the Samba Wiki. License: CC BY 4.0 . Authors and contributors: See the history tab on the Wiki page. 
Prerequisites Samba has been set up in one of the following modes: Standalone server Domain member 3.15.1. Enabling print server support in Samba By default, print server support is not enabled in Samba. To use Samba as a print server, you must configure Samba accordingly. Note Print jobs and printer operations require remote procedure calls (RPCs). By default, Samba starts the rpcd_spoolss service on demand to manage RPCs. During the first RPC call, or when you update the printer list in CUPS, Samba retrieves the printer information from CUPS. This can require approximately 1 second per printer. Therefore, if you have more than 50 printers, tune the rpcd_spoolss settings. Prerequisites The printers are configured in a CUPS server. For details about configuring printers in CUPS, see the documentation provided in the CUPS web console (https:// printserver :631/help) on the print server. Procedure Edit the /etc/samba/smb.conf file: Add the [printers] section to enable the printing backend in Samba: Important The [printers] share name is hard-coded and cannot be changed. If the CUPS server runs on a different host or port, specify the setting in the [printers] section: If you have many printers, set the number of idle seconds to a higher value than the numbers of printers connected to CUPS. For example, if you have 100 printers, set in the [global] section: If this setting does not scale in your environment, also increase the number of rpcd_spoolss workers in the [global] section: By default, rpcd_spoolss starts 5 workers. Verify the /etc/samba/smb.conf file: Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: After restarting the service, Samba automatically shares all printers that are configured in the CUPS back end. If you want to manually share only specific printers, see Manually sharing specific printers . Verification Submit a print job. For example, to print a PDF file, enter: 3.15.2. Manually sharing specific printers If you configured Samba as a print server, by default, Samba shares all printers that are configured in the CUPS back end. The following procedure explains how to share only specific printers. Prerequisites Samba is set up as a print server Procedure Edit the /etc/samba/smb.conf file: In the [global] section, disable automatic printer sharing by setting: Add a section for each printer you want to share. For example, to share the printer named example in the CUPS back end as Example-Printer in Samba, add the following section: You do not need individual spool directories for each printer. You can set the same spool directory in the path parameter for the printer as you set in the [printers] section. Verify the /etc/samba/smb.conf file: Reload the Samba configuration: 3.16. Setting up automatic printer driver downloads for Windows clients on Samba print servers If you are running a Samba print server for Windows clients, you can upload drivers and preconfigure printers. If a user connects to a printer, Windows automatically downloads and installs the driver locally on the client. The user does not require local administrator permissions for the installation. Additionally, Windows applies preconfigured driver settings, such as the number of trays. Parts of this section were adopted from the Setting up Automatic Printer Driver Downloads for Windows Clients documentation published in the Samba Wiki. License: CC BY 4.0 . Authors and contributors: See the history tab on the Wiki page. 
Prerequisites Samba is set up as a print server 3.16.1. Basic information about printer drivers This section provides general information about printer drivers. Supported driver model version Samba only supports the printer driver model version 3, which is supported in Windows 2000 and later and Windows Server 2000 and later. Samba does not support the driver model version 4, introduced in Windows 8 and Windows Server 2012. However, these and later Windows versions also support version 3 drivers. Package-aware drivers Samba does not support package-aware drivers. Preparing a printer driver for uploading Before you can upload a driver to a Samba print server: Unpack the driver if it is provided in a compressed format. Some drivers require you to start a setup application that installs the driver locally on a Windows host. In certain situations, the installer extracts the individual files into the operating system's temporary folder while the setup runs. To use the driver files for uploading: Start the installer. Copy the files from the temporary folder to a new location. Cancel the installation. Ask your printer manufacturer for drivers that support uploading to a print server. Providing 32-bit and 64-bit drivers for a printer to a client To provide the driver for a printer for both 32-bit and 64-bit Windows clients, you must upload a driver with exactly the same name for both architectures. For example, if you are uploading the 32-bit driver named Example PostScript and the 64-bit driver named Example PostScript (v1.0) , the names do not match. Consequently, you can only assign one of the drivers to a printer and the driver will not be available for both architectures. 3.16.2. Enabling users to upload and preconfigure drivers To be able to upload and preconfigure printer drivers, a user or a group needs to have the SePrintOperatorPrivilege privilege granted. A user must be added to the printadmin group. Red Hat Enterprise Linux automatically creates this group when you install the samba package. The printadmin group gets assigned the lowest available dynamic system GID that is lower than 1000. Procedure For example, to grant the SePrintOperatorPrivilege privilege to the printadmin group: Note In a domain environment, grant SePrintOperatorPrivilege to a domain group. This enables you to centrally manage the privilege by updating a user's group membership. To list all users and groups having SePrintOperatorPrivilege granted: 3.16.3. Setting up the print$ share Windows operating systems download printer drivers from a share named print$ from a print server. This share name is hard-coded in Windows and cannot be changed. The following procedure explains how to share the /var/lib/samba/drivers/ directory as print$ , and enable members of the local printadmin group to upload printer drivers. Procedure Add the [print$] section to the /etc/samba/smb.conf file: Using these settings: Only members of the printadmin group can upload printer drivers to the share. The group of newly created files and directories will be set to printadmin . The permissions of new files will be set to 664 . The permissions of new directories will be set to 2775 . To upload only 64-bit drivers for all printers, include this setting in the [global] section in the /etc/samba/smb.conf file: Without this setting, Windows only displays drivers for which you have uploaded at least the 32-bit version.
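Combining the settings described above, one possible starting point is the following sketch; the paths, masks, and group follow the defaults mentioned in this section, but verify them for your environment:

[print$]
        path = /var/lib/samba/drivers/
        comment = Printer drivers
        write list = @printadmin, root
        force group = @printadmin
        create mask = 0664
        directory mask = 2775

# In the [global] section, to provide only 64-bit drivers:
spoolss: architecture = Windows x64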
Verify the /etc/samba/smb.conf file: Reload the Samba configuration Create the printadmin group if it does not exist: Grant the SePrintOperatorPrivilege privilege to the printadmin group. If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Set the permissions on the /var/lib/samba/drivers/ directory: If you use POSIX ACLs, set: If you use Windows ACLs, set: Principal Access Apply to CREATOR OWNER Full control Subfolders and files only Authenticated Users Read & execute, List folder contents, Read This folder, subfolders, and files printadmin Full control This folder, subfolders, and files For details about setting ACLs on Windows, see the Windows documentation. Additional resources Enabling users to upload and preconfigure drivers . 3.16.4. Creating a GPO to enable clients to trust the Samba print server For security reasons, recent Windows operating systems prevent clients from downloading non-package-aware printer drivers from an untrusted server. If your print server is a member in an AD, you can create a Group Policy Object (GPO) in your domain to trust the Samba server. Prerequisites The Samba print server is a member of an AD domain. The Windows computer you are using to create the GPO must have the Windows Remote Server Administration Tools (RSAT) installed. For details, see the Windows documentation. Procedure Log into a Windows computer using an account that is allowed to edit group policies, such as the AD domain Administrator user. Open the Group Policy Management Console . Right-click to your AD domain and select Create a GPO in this domain, and Link it here . Enter a name for the GPO, such as Legacy Printer Driver Policy and click OK . The new GPO will be displayed under the domain entry. Right-click to the newly-created GPO and select Edit to open the Group Policy Management Editor . Navigate to Computer Configuration Policies Administrative Templates Printers . On the right side of the window, double-click Point and Print Restriction to edit the policy: Enable the policy and set the following options: Select Users can only point and print to these servers and enter the fully-qualified domain name (FQDN) of the Samba print server to the field to this option. In both check boxes under Security Prompts , select Do not show warning or elevation prompt . Click OK. Double-click Package Point and Print - Approved servers to edit the policy: Enable the policy and click the Show button. Enter the FQDN of the Samba print server. Close both the Show Contents and the policy's properties window by clicking OK . Close the Group Policy Management Editor . Close the Group Policy Management Console . After the Windows domain members applied the group policy, printer drivers are automatically downloaded from the Samba server when a user connects to a printer. Additional resources For using group policies, see the Windows documentation. 3.16.5. Uploading drivers and preconfiguring printers Use the Print Management application on a Windows client to upload drivers and preconfigure printers hosted on the Samba print server. For further details, see the Windows documentation. 3.17. Running Samba on a server with FIPS mode enabled This section provides an overview of the limitations of running Samba with FIPS mode enabled. It also provides the procedure for enabling FIPS mode on a Red Hat Enterprise Linux host running Samba. 3.17.1. 
Limitations of using Samba in FIPS mode The following Samba modes and features work in FIPS mode under the indicated conditions: Samba as a domain member only in Active Directory (AD) or Red Hat Enterprise Linux Identity Management (IdM) environments with Kerberos authentication that uses AES ciphers. Samba as a file server on an Active Directory domain member. However, this requires that clients use Kerberos to authenticate to the server. Due to the increased security of FIPS, the following Samba features and modes do not work if FIPS mode is enabled: NT LAN Manager (NTLM) authentication because RC4 ciphers are blocked The server message block version 1 (SMB1) protocol The stand-alone file server mode because it uses NTLM authentication NT4-style domain controllers NT4-style domain members. Note that Red Hat continues supporting the primary domain controller (PDC) functionality IdM uses in the background. Password changes against the Samba server. You can only perform password changes using Kerberos against an Active Directory domain controller. The following feature is not tested in FIPS mode and, therefore, is not supported by Red Hat: Running Samba as a print server 3.17.2. Using Samba in FIPS mode You can enable FIPS mode on a RHEL host that runs Samba. Prerequisites Samba is configured on the Red Hat Enterprise Linux host. Samba runs in a mode that is supported in FIPS mode. Procedure Enable FIPS mode on RHEL: Reboot the server: Use the testparm utility to verify the configuration: If the command displays any errors or incompatibilities, fix them to ensure that Samba works correctly. Additional resources Section 3.17.1, "Limitations of using Samba in FIPS mode" 3.18. Tuning the performance of a Samba server Learn which settings can improve the performance of Samba in certain situations, and which settings can have a negative performance impact. Parts of this section were adapted from the Performance Tuning documentation published in the Samba Wiki. License: CC BY 4.0. Authors and contributors: See the history tab on the Wiki page. Prerequisites Samba is set up as a file or print server 3.18.1. Setting the SMB protocol version Each new SMB version adds features and improves the performance of the protocol. Recent Windows and Windows Server operating systems always support the latest protocol version. If Samba also uses the latest protocol version, Windows clients connecting to Samba benefit from the performance improvements. In Samba, the default value of the server max protocol parameter is set to the latest supported stable SMB protocol version. Note To always have the latest stable SMB protocol version enabled, do not set the server max protocol parameter. If you set the parameter manually, you must modify the setting with each new version of the SMB protocol to keep the latest protocol version enabled. The following procedure explains how to use the default value of the server max protocol parameter. Procedure Remove the server max protocol parameter from the [global] section in the /etc/samba/smb.conf file. Reload the Samba configuration 3.18.2. Tuning shares with directories that contain a large number of files Linux supports case-sensitive file names. For this reason, Samba needs to scan directories for uppercase and lowercase file names when searching for or accessing a file. You can configure a share to create new files only in lowercase or uppercase, which improves performance.
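For example, the share parameters that the following procedure sets might look like this minimal sketch for a lowercase-only share; the procedure and the smb.conf(5) man page describe each parameter:

case sensitive = true
default case = lower
preserve case = no
short preserve case = no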
Prerequisites Samba is configured as a file server Procedure Rename all files on the share to lowercase. Note Using the settings in this procedure, files with names that are not lowercase will no longer be displayed. Set the following parameters in the share's section: For details about the parameters, see their descriptions in the smb.conf(5) man page on your system. Verify the /etc/samba/smb.conf file: Reload the Samba configuration: After you apply these settings, the names of all newly created files on this share use lowercase. Because of these settings, Samba no longer needs to scan the directory for uppercase and lowercase file names, which improves performance. 3.18.3. Settings that can have a negative performance impact By default, the kernel in Red Hat Enterprise Linux is tuned for high network performance. For example, the kernel uses an auto-tuning mechanism for buffer sizes. Setting the socket options parameter in the /etc/samba/smb.conf file overrides these kernel settings. As a result, setting this parameter decreases the Samba network performance in most cases. To use the optimized settings from the kernel, remove the socket options parameter from the [global] section in the /etc/samba/smb.conf file. 3.19. Configuring Samba to be compatible with clients that require an SMB version lower than the default Samba uses a reasonable and secure default value for the minimum server message block (SMB) version it supports. However, if you have clients that require an older SMB version, you can configure Samba to support it. 3.19.1. Setting the minimum SMB protocol version supported by a Samba server In Samba, the server min protocol parameter in the /etc/samba/smb.conf file defines the minimum server message block (SMB) protocol version the Samba server supports. You can change the minimum SMB protocol version. Note By default, Samba on RHEL 8.2 and later supports only SMB2 and newer protocol versions. Red Hat recommends not using the deprecated SMB1 protocol. However, if your environment requires SMB1, you can manually set the server min protocol parameter to NT1 to re-enable SMB1. Prerequisites Samba is installed and configured. Procedure Edit the /etc/samba/smb.conf file, add the server min protocol parameter, and set the parameter to the minimum SMB protocol version the server should support. For example, to set the minimum SMB protocol version to SMB3, add: Restart the smb service: Additional resources smb.conf(5) man page on your system 3.20. Frequently used Samba command-line utilities This section describes frequently used commands when working with a Samba server. 3.20.1. Using the net ads join and net rpc join commands Using the join subcommand of the net utility, you can join Samba to an AD or NT4 domain. To join the domain, you must create the /etc/samba/smb.conf file manually, and optionally update additional configurations, such as PAM. Important Red Hat recommends using the realm utility to join a domain. The realm utility automatically updates all involved configuration files. Procedure Manually create the /etc/samba/smb.conf file with the following settings: For an AD domain member: For an NT4 domain member: Add an ID mapping configuration for the * default domain and for the domain you want to join to the [global] section in the /etc/samba/smb.conf file.
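For example, a minimal sketch of such an ID mapping configuration, assuming a domain with the hypothetical NetBIOS name AD-DOM and the rid back end; the ranges are illustrative and must not overlap with other configured ranges:

idmap config * : backend = tdb
idmap config * : range = 10000-999999
idmap config AD-DOM : backend = rid
idmap config AD-DOM : range = 2000000-2999999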
Verify the /etc/samba/smb.conf file: Join the domain as the domain administrator: To join an AD domain: To join an NT4 domain: Append the winbind source to the passwd and group database entry in the /etc/nsswitch.conf file: Enable and start the winbind service: Optional: Configure PAM using the authselect utility. For details, see the authselect(8) man page on your system. Optional: For AD environments, configure the Kerberos client. For details, see the documentation of your Kerberos client. Additional resources Joining Samba to a domain . Understanding and configuring Samba ID mapping . 3.20.2. Using the net rpc rights command In Windows, you can assign privileges to accounts and groups to perform special operations, such as setting ACLs on a share or upload printer drivers. On a Samba server, you can use the net rpc rights command to manage privileges. Listing privileges you can set To list all available privileges and their owners, use the net rpc rights list command. For example: Granting privileges To grant a privilege to an account or group, use the net rpc rights grant command. For example, grant the SePrintOperatorPrivilege privilege to the DOMAIN \printadmin group: Revoking privileges To revoke a privilege from an account or group, use the net rpc rights revoke command. For example, to revoke the SePrintOperatorPrivilege privilege from the DOMAIN \printadmin group: 3.20.3. Using the net rpc share command The net rpc share command provides the capability to list, add, and remove shares on a local or remote Samba or Windows server. Listing shares To list the shares on an SMB server, use the net rpc share list command. Optionally, pass the -S server_name parameter to the command to list the shares of a remote server. For example: Note Shares hosted on a Samba server that have browseable = no set in their section in the /etc/samba/smb.conf file are not displayed in the output. Adding a share The net rpc share add command enables you to add a share to an SMB server. For example, to add a share named example on a remote Windows server that shares the C:\example\ directory: Note You must omit the trailing backslash in the path when specifying a Windows directory name. To use the command to add a share to a Samba server: The user specified in the -U parameter must have the SeDiskOperatorPrivilege privilege granted on the destination server. You must write a script that adds a share section to the /etc/samba/smb.conf file and reloads Samba. The script must be set in the add share command parameter in the [global] section in /etc/samba/smb.conf . For further details, see the add share command description in the smb.conf(5) man page on your system. Removing a share The net rpc share delete command enables you to remove a share from an SMB server. For example, to remove the share named example from a remote Windows server: To use the command to remove a share from a Samba server: The user specified in the -U parameter must have the SeDiskOperatorPrivilege privilege granted. You must write a script that removes the share's section from the /etc/samba/smb.conf file and reloads Samba. The script must be set in the delete share command parameter in the [global] section in /etc/samba/smb.conf . For further details, see the delete share command description in the smb.conf(5) man page on your system. 3.20.4. 
Using the net user command The net user command enables you to perform the following actions on an AD DC or NT4 PDC: List all user accounts Add users Remove users Note Specifying a connection method, such as ads for AD domains or rpc for NT4 domains, is only required when you list domain user accounts. Other user-related subcommands can auto-detect the connection method. Pass the -U user_name parameter to the command to specify a user that is allowed to perform the requested action. Listing domain user accounts To list all users in an AD domain: To list all users in an NT4 domain: Adding a user account to the domain On a Samba domain member, you can use the net user add command to add a user account to the domain. For example, to add a user account to the domain: Add the account: Optional: Use the remote procedure call (RPC) shell to enable the account on the AD DC or NT4 PDC. For example: Deleting a user account from the domain On a Samba domain member, you can use the net user delete command to remove a user account from the domain. For example, to remove a user account from the domain: 3.20.5. Using the rpcclient utility The rpcclient utility enables you to manually execute client-side Microsoft Remote Procedure Call (MS-RPC) functions on a local or remote SMB server. However, most of the features are integrated into separate utilities provided by Samba. Use rpcclient only for testing MS-RPC functions. Prerequisites The samba-client package is installed. Examples For example, you can use the rpcclient utility to: Manage the printer Spool Subsystem (SPOOLSS). Example 3.7. Assigning a Driver to a Printer Retrieve information about an SMB server. Example 3.8. Listing all File Shares and Shared Printers Perform actions using the Security Account Manager Remote (SAMR) protocol. Example 3.9. Listing Users on an SMB Server If you run the command against a standalone server or a domain member, it lists the users in the local database. Running the command against an AD DC or NT4 PDC lists the domain users. Additional resources rpcclient(1) man page on your system 3.20.6. Using the samba-regedit application Certain settings, such as printer configurations, are stored in the registry on the Samba server. You can use the ncurses-based samba-regedit application to edit the registry of a Samba server. Prerequisites The samba-client package is installed. Procedure To start the application, enter: Use the following keys: Cursor up and cursor down: Navigate through the registry tree and the values. Enter: Opens a key or edits a value. Tab: Switches between the Key and Value pane. Ctrl + C: Closes the application. 3.20.7. Using the smbcontrol utility The smbcontrol utility enables you to send command messages to the smbd , nmbd , winbindd , or all of these services. These control messages instruct the service, for example, to reload its configuration. Prerequisites The samba-common-tools package is installed. Procedure Reload the configuration of the smbd , nmbd , and winbindd services by sending the reload-config message type to the all destination: Additional resources smbcontrol(1) man page on your system 3.20.8. Using the smbpasswd utility The smbpasswd utility manages user accounts and passwords in the local Samba database. Prerequisites The samba-common-tools package is installed. Procedure If you run the command as a user, smbpasswd changes the Samba password of the user who runs the command.
For example: If you run smbpasswd as the root user, you can use the utility, for example, to: Create a new user: Note Before you can add a user to the Samba database, you must create the account in the local operating system. See the Adding a new user from the command line section in the Configuring basic system settings guide. Enable a Samba user: Disable a Samba user: Delete a user: Additional resources smbpasswd(8) man page on your system 3.20.9. Using the smbstatus utility The smbstatus utility reports on: Connections per PID of each smbd daemon to the Samba server. This report includes the user name, primary group, SMB protocol version, encryption, and signing information. Connections per Samba share. This report includes the PID of the smbd daemon, the IP address of the connecting machine, the time stamp when the connection was established, encryption, and signing information. A list of locked files. The report entries include further details, such as opportunistic lock (oplock) types. Prerequisites The samba package is installed. The smbd service is running. Procedure Additional resources smbstatus(1) man page on your system 3.20.10. Using the smbtar utility The smbtar utility backs up the content of an SMB share or a subdirectory of it and stores the content in a tar archive. Alternatively, you can write the content to a tape device. Prerequisites The samba-client package is installed. Procedure Use the following command to back up the content of the demo directory on the //server/example/ share and store the content in the /root/example.tar archive: Additional resources smbtar(1) man page on your system 3.20.11. Using the wbinfo utility The wbinfo utility queries and returns information created and used by the winbindd service. Prerequisites The samba-winbind-clients package is installed. Procedure You can use wbinfo , for example, to: List domain users: List domain groups: Display the SID of a user: Display information about domains and trusts: Additional resources wbinfo(1) man page on your system 3.21. Additional resources smb.conf(5) man page on your system /usr/share/docs/samba-version/ directory contains general documentation, example scripts, and LDAP schema files, provided by the Samba project Setting up Samba and the Clustered Trivial Database (CTDB) to share directories stored on a GlusterFS volume Mounting an SMB Share on Red Hat Enterprise Linux
[ "cp /etc/samba/smb.conf /etc/samba/samba.conf. copy", "testparm -s /etc/samba/samba.conf. copy", "mv /etc/samba/samba.conf. copy /etc/samba/smb.conf", "smbcontrol all reload-config", "testparm Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Unknown parameter encountered: \"log levell\" Processing section \"[example_share]\" Loaded services file OK. ERROR: The idmap range for the domain * (tdb) overlaps with the range of DOMAIN (ad)! Server role: ROLE_DOMAIN_MEMBER Press enter to see a dump of your service definitions Global parameters [global] [example_share]", "yum install samba", "[global] workgroup = Example-WG netbios name = Server security = user log file = /var/log/samba/%m.log log level = 1", "testparm", "firewall-cmd --permanent --add-service=samba firewall-cmd --reload", "systemctl enable --now smb", "useradd -M -s /sbin/nologin example", "passwd example Enter new UNIX password: password Retype new UNIX password: password passwd: password updated successfully", "smbpasswd -a example New SMB password: password Retype new SMB password: password Added user example.", "smbpasswd -e example Enabled user example.", "[global] idmap config * : backend = tdb idmap config * : range = 10000-999999 idmap config AD-DOM:backend = rid idmap config AD-DOM:range = 2000000-2999999 idmap config TRUST-DOM:backend = rid idmap config TRUST-DOM:range = 4000000-4999999", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config * : backend = autorid idmap config * : range = 10000-999999", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config DOMAIN : backend = ad", "idmap config DOMAIN : range = 2000000-2999999", "idmap config DOMAIN : schema_mode = rfc2307", "idmap config DOMAIN : unix_nss_info = yes", "template shell = /bin/bash template homedir = /home/%U", "idmap config DOMAIN : unix_primary_group = yes", "testparm", "smbcontrol all reload-config", "idmap config * : backend = tdb idmap config * : range = 10000-999999", "idmap config DOMAIN : backend = rid", "idmap config DOMAIN : range = 2000000-2999999", "template shell = /bin/bash template homedir = /home/%U", "testparm", "smbcontrol all reload-config", "idmap config * : backend = autorid", "idmap config * : range = 10000-999999", "idmap config * : rangesize = 200000", "template shell = /bin/bash template homedir = /home/%U", "testparm", "smbcontrol all reload-config", "update-crypto-policies --set DEFAULT:AD-SUPPORT", "yum install realmd oddjob-mkhomedir oddjob samba-winbind-clients samba-winbind samba-common-tools samba-winbind-krb5-locator krb5-workstation", "yum install samba", "mv /etc/samba/smb.conf /etc/samba/smb.conf.bak", "realm join --membership-software=samba --client-software=winbind ad.example.com", "systemctl status winbind Active: active (running) since Tue 2018-11-06 19:10:40 CET; 15s ago", "systemctl enable --now smb", "getent passwd \"AD\\administrator\" AD\\administrator:*:10000:10000::/home/administrator@AD:/bin/bash", "getent group \"AD\\Domain Users\" AD\\domain users:x:10000:user1,user2", "chown \"AD\\administrator\":\"AD\\Domain Users\" /srv/samba/example.txt", "kinit [email protected]", "klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 01.11.2018 10:00:00 01.11.2018 20:00:00 krbtgt/[email protected] renew until 08.11.2018 05:00:00", "wbinfo --all-domains BUILTIN 
SAMBA-SERVER AD", "[plugins] localauth = { module = winbind:/usr/lib64/samba/krb5/winbind_krb5_localauth.so enable_only = winbind }", "yum install ipa-server-trust-ad samba-client", "kinit admin", "ipa-adtrust-install", "WARNING: The smb.conf already exists. Running ipa-adtrust-install will break your existing Samba configuration. Do you wish to continue? [no]: yes", "Do you want to enable support for trusted domains in Schema Compatibility plugin? This will allow clients older than SSSD 1.9 and non-Linux clients to work with trusted users. Enable trusted domains support in slapi-nis? [no]: yes", "Trust is configured but no NetBIOS domain name found, setting it now. Enter the NetBIOS name for the IPA domain. Only up to 15 uppercase ASCII letters, digits and dashes are allowed. Example: EXAMPLE. NetBIOS domain name [IDM]:", "Do you want to run the ipa-sidgen task? [no]: yes", "net conf setparm global 'rpc server dynamic port range' 55000-65000 firewall-cmd --add-port=55000-65000/tcp firewall-cmd --runtime-to-permanent", "ipactl restart", "smbclient -L ipaserver.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- IPCUSD IPC IPC Service (Samba 4.15.2)", "yum install ipa-client-samba", "ipa-client-samba Searching for IPA server IPA server: DNS discovery Chosen IPA master: idm_server.idm.example.com SMB principal to be created: cifs/ idm_client.idm.example.com @ IDM.EXAMPLE.COM NetBIOS name to be used: IDM_CLIENT Discovered domains to use: Domain name: idm.example.com NetBIOS name: IDM SID: S-1-5-21-525930803-952335037-206501584 ID range: 212000000 - 212199999 Domain name: ad.example.com NetBIOS name: AD SID: None ID range: 1918400000 - 1918599999 Continue to configure the system with these values? [no]: yes Samba domain member is configured. 
Please check configuration at /etc/samba/smb.conf and start smb and winbind services", "[homes] read only = no", "firewall-cmd --permanent --add-service=samba-client firewall-cmd --reload", "systemctl enable --now smb winbind", "smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPCUSD IPC IPC Service (Samba 4.15.2)", "kinit -k", "ipa idrange-find --name=\" AD.EXAMPLE.COM _id_range\" --raw --------------- 1 range matched --------------- cn: AD.EXAMPLE.COM _id_range ipabaseid: 1918400000 ipaidrangesize: 200000 ipabaserid: 0 ipanttrusteddomainsid: S-1-5-21-968346183-862388825-1738313271 iparangetype: ipa-ad-trust ---------------------------- Number of entries returned 1 ----------------------------", "maximum_range = ipabaseid + ipaidrangesize - 1", "idmap config AD : range = 1918400000 - 1918599999 idmap config AD : backend = sss", "systemctl restart smb winbind", "smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPCUSD IPC IPC Service (Samba 4.15.2)", "mkdir -p /srv/samba/example/", "semanage fcontext -a -t samba_share_t \"/srv/samba/example(/.*)?\" restorecon -Rv /srv/samba/example/", "[example] path = /srv/samba/example/ read only = no", "testparm", "firewall-cmd --permanent --add-service=samba firewall-cmd --reload", "systemctl restart smb", "chown root:\"Domain Users\" /srv/samba/example/ chmod 2770 /srv/samba/example/", "inherit acls = yes", "systemctl restart smb", "setfacl -m group::--- /srv/samba/example/ setfacl -m default:group::--- /srv/samba/example/", "setfacl -m group:\" DOMAIN \\Domain Admins\":rwx /srv/samba/example/", "setfacl -m group:\" DOMAIN \\Domain Users\":r-x /srv/samba/example/", "setfacl -R -m other::--- /srv/samba/example/", "setfacl -m default:group:\" DOMAIN \\Domain Admins\":rwx /srv/samba/example/ setfacl -m default:group:\" DOMAIN \\Domain Users\":r-x /srv/samba/example/ setfacl -m default:other::--- /srv/samba/example/", "valid users = + DOMAIN \\\"Domain Users\" invalid users = DOMAIN \\user", "smbcontrol all reload-config", "hosts allow = 127.0.0.1 192.0.2.0/24 client1.example.com hosts deny = client2.example.com", "smbcontrol all reload-config", "net rpc rights grant \" DOMAIN \\Domain Admins\" SeDiskOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "net rpc rights list privileges SeDiskOperatorPrivilege -U \" DOMAIN \\administrator\" Enter administrator's password: SeDiskOperatorPrivilege: BUILTIN\\Administrators DOMAIN \\Domain Admins", "vfs objects = acl_xattr map acl inherit = yes store dos attributes = yes", "systemctl restart smb", "mkdir -p /srv/samba/example/", "semanage fcontext -a -t samba_share_t \"/srv/samba/example(/. 
)?\"* restorecon -Rv /srv/samba/example/", "[example] path = /srv/samba/example/ read only = no", "vfs objects = acl_xattr map acl inherit = yes store dos attributes = yes", "testparm", "firewall-cmd --permanent --add-service=samba firewall-cmd --reload", "systemctl restart smb", "security_principal : access_right / inheritance_information / permissions", "AD\\Domain Users:ALLOWED/OI|CI/CHANGE", "smbcacls //server/example / -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: REVISION:1 CONTROL:SR|PD|DI|DP OWNER:AD\\Administrators GROUP:AD\\Domain Users ACL:AD\\Administrator:ALLOWED/OI|CI/FULL ACL:AD\\Domain Users:ALLOWED/OI|CI/CHANGE ACL:AD\\Domain Guests:ALLOWED/OI|CI/0x00100021", "echo USD(printf '0x%X' USD(( hex_value_1 | hex_value_2 | ... )))", "echo USD(printf '0x%X' USD(( 0x00100020 | 0x00100001 | 0x00100080 ))) 0x1000A1", "smbcacls //server/example / -U \" DOMAIN \\administrator --add ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/CHANGE", "ACL for SID principal_name not found", "smbcacls //server/example / -U \" DOMAIN \\administrator --modify ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/READ", "smbcacls //server/example / -U \" DOMAIN \\administrator --delete ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/READ", "groupadd example", "mkdir -p /var/lib/samba/usershares/", "chgrp example /var/lib/samba/usershares/ chmod 1770 /var/lib/samba/usershares/", "usershare path = /var/lib/samba/usershares/", "usershare max shares = 100", "usershare prefix allow list = /data /srv", "testparm", "smbcontrol all reload-config", "net usershare add example /srv/samba/ \"\" \"AD\\Domain Users\":F,Everyone:R guest_ok=yes", "net usershare info -l [ share_1 ] path= /srv/samba/ comment= usershare_acl= Everyone:R,host_name\\user:F, guest_ok= y", "net usershare info -l share_*", "net usershare list -l share_1 share_2", "net usershare list -l share_*", "net usershare delete share_name", "-rw-r--r--. 1 root root 1024 1. Sep 10:00 file1.txt -rw-r-----. 1 nobody root 1024 1. Sep 10:00 file2.txt -rw-r-----. 1 root root 1024 1. Sep 10:00 file3.txt", "[global] map to guest = Bad User", "[global] guest account = user_name", "[example] guest ok = yes", "testparm", "smbcontrol all reload-config", "vfs objects = fruit streams_xattr", "fruit:time machine = yes", "testparm", "smbcontrol all reload-config", "smbclient -U \" DOMAIN\\user \" // server / example Enter domain\\user 's password: Try \"help\" to get a list of possible commands. smb: \\>", "smb: \\>", "smb: \\> help", "smb: \\> help command_name", "smbclient -U \" DOMAIN\\user_name \" // server_name / share_name", "smb: \\> d /example/", "smb: \\example\\> ls . D 0 Thu Nov 1 10:00:00 2018 .. D 0 Thu Nov 1 10:00:00 2018 example.txt N 1048576 Thu Nov 1 10:00:00 2018 9950208 blocks of size 1024. 
8247144 blocks available", "smb: \\example\\> get example.txt getting file \\directory\\subdirectory\\example.txt of size 1048576 as example.txt (511975,0 KiloBytes/sec) (average 170666,7 KiloBytes/sec)", "smb: \\example\\> exit", "smbclient -U DOMAIN\\user_name // server_name / share_name -c \"cd /example/ ; get example.txt ; exit\"", "[printers] comment = All Printers path = /var/tmp/ printable = yes create mask = 0600", "cups server = printserver.example.com : 631", "rpcd_spoolss:idle_seconds = 200", "rpcd_spoolss:num_workers = 10", "testparm", "firewall-cmd --permanent --add-service=samba firewall-cmd --reload", "systemctl restart smb", "smbclient -U user // sambaserver.example.com / printer_name -c \"print example.pdf \"", "load printers = no", "[Example-Printer] path = /var/tmp/ printable = yes printer name = example", "testparm", "smbcontrol all reload-config", "net rpc rights grant \"printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "net rpc rights list privileges SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter administrator's password: SePrintOperatorPrivilege: BUILTIN\\Administrators DOMAIN \\printadmin", "[printUSD] path = /var/lib/samba/drivers/ read only = no write list = @printadmin force group = @printadmin create mask = 0664 directory mask = 2775", "spoolss: architecture = Windows x64", "testparm", "smbcontrol all reload-config", "groupadd printadmin", "net rpc rights grant \"printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "semanage fcontext -a -t samba_share_t \"/var/lib/samba/drivers(/.*)?\" restorecon -Rv /var/lib/samba/drivers/", "chgrp -R \"printadmin\" /var/lib/samba/drivers/ chmod -R 2775 /var/lib/samba/drivers/", "fips-mode-setup --enable", "reboot", "testparm -s", "smbcontrol all reload-config", "case sensitive = true default case = lower preserve case = no short preserve case = no", "testparm", "smbcontrol all reload-config", "server min protocol = SMB3", "systemctl restart smb", "[global] workgroup = domain_name security = ads passdb backend = tdbsam realm = AD_REALM", "[global] workgroup = domain_name security = user passdb backend = tdbsam", "testparm", "net ads join -U \"DOMAIN\\administrator\"", "net rpc join -U \"DOMAIN\\administrator\"", "passwd: files winbind group: files winbind", "systemctl enable --now winbind", "net rpc rights list -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: SeMachineAccountPrivilege Add machines to domain SeTakeOwnershipPrivilege Take ownership of files or other objects SeBackupPrivilege Back up files and directories SeRestorePrivilege Restore files and directories SeRemoteShutdownPrivilege Force shutdown from a remote system SePrintOperatorPrivilege Manage printers SeAddUsersPrivilege Add users and groups to the domain SeDiskOperatorPrivilege Manage disk shares SeSecurityPrivilege System security", "net rpc rights grant \" DOMAIN \\printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.", "net rpc rights remoke \" DOMAIN \\printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully revoked rights.", "net rpc share list -U \" DOMAIN \\administrator\" -S server_name Enter DOMAIN \\administrator's password: IPCUSD share_1 share_2", "net rpc share add 
example=\"C:\\example\" -U \" DOMAIN \\administrator\" -S server_name", "net rpc share delete example -U \" DOMAIN \\administrator\" -S server_name", "net ads user -U \" DOMAIN \\administrator\"", "net rpc user -U \" DOMAIN \\administrator\"", "net user add user password -U \" DOMAIN \\administrator\" User user added", "net rpc shell -U DOMAIN \\administrator -S DC_or_PDC_name Talking to domain DOMAIN ( S-1-5-21-1424831554-512457234-5642315751 ) net rpc> user edit disabled user : no Set user 's disabled flag from [yes] to [no] net rpc> exit", "net user delete user -U \" DOMAIN \\administrator\" User user deleted", "rpcclient server_name -U \" DOMAIN \\administrator\" -c 'setdriver \" printer_name \" \" driver_name \"' Enter DOMAIN \\administrators password: Successfully set printer_name to driver driver_name .", "rpcclient server_name -U \" DOMAIN \\administrator\" -c 'netshareenum' Enter DOMAIN \\administrators password: netname: Example_Share remark: path: C:\\srv\\samba\\example_share password: netname: Example_Printer remark: path: C:\\var\\spool\\samba password:", "rpcclient server_name -U \" DOMAIN \\administrator\" -c 'enumdomusers' Enter DOMAIN \\administrators password: user:[ user1 ] rid:[ 0x3e8 ] user:[ user2 ] rid:[ 0x3e9 ]", "samba-regedit", "smbcontrol all reload-config", "[user@server ~]USD smbpasswd New SMB password: password Retype new SMB password: password", "smbpasswd -a user_name New SMB password: password Retype new SMB password: password Added user user_name .", "smbpasswd -e user_name Enabled user user_name .", "smbpasswd -x user_name Disabled user user_name", "smbpasswd -x user_name Deleted user user_name .", "smbstatus Samba version 4.15.2 PID Username Group Machine Protocol Version Encryption Signing ....------------------------------------------------------------------------------------------------------------------------- 963 DOMAIN \\administrator DOMAIN \\domain users client-pc (ipv4:192.0.2.1:57786) SMB3_02 - AES-128-CMAC Service pid Machine Connected at Encryption Signing: ....--------------------------------------------------------------------------- example 969 192.0.2.1 Thu Nov 1 10:00:00 2018 CEST - AES-128-CMAC Locked files: Pid Uid DenyMode Access R/W Oplock SharePath Name Time ....-------------------------------------------------------------------------------------------------------- 969 10000 DENY_WRITE 0x120089 RDONLY LEASE(RWH) /srv/samba/example file.txt Thu Nov 1 10:00:00 2018", "smbtar -s server -x example -u user_name -p password -t /root/example.tar", "wbinfo -u AD\\administrator AD\\guest", "wbinfo -g AD\\domain computers AD\\domain admins AD\\domain users", "wbinfo --name-to-sid=\"AD\\administrator\" S-1-5-21-1762709870-351891212-3141221786-500 SID_USER (1)", "wbinfo --trusted-domains --verbose Domain Name DNS Domain Trust Type Transitive In Out BUILTIN None Yes Yes Yes server None Yes Yes Yes DOMAIN1 domain1.example.com None Yes Yes Yes DOMAIN2 domain2.example.com External No Yes Yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/assembly_using-samba-as-a-server_Deploying-different-types-of-servers
Chapter 12. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
Chapter 12. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 12.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 12.3. 
Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . 
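As a worked example of the vCPU formula in the table notes above: a hypothetical instance with 1 socket, 8 cores per socket, and simultaneous multithreading enabled (2 threads per core) provides (2 threads per core x 8 cores) x 1 socket = 16 vCPUs, which satisfies the 4 vCPU minimum for the bootstrap and control plane machines.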
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 12.3.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 12.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 12.3.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 12.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 12.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 12.4. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 12.4.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. 
Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. 
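For example, after you create the api and api-int records described above, you might spot-check name resolution with a sketch like the following; the cluster name and domain are hypothetical placeholders:

dig +short api.mycluster.example.com
dig +short api-int.mycluster.example.com

Both names should resolve to the addresses of the external and internal load balancers, respectively.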
Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. 
Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 12.4.2. Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 12.4.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 12.3. 
Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 12.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 12.5. Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 12.6. 
Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 12.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 12.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 12.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 12.10. Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 12.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 12.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 12.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 12.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 12.15. 
Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 12.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole 12.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific region. If you use the CloudFormation template to deploy your worker nodes, you must update the worker0.type.properties.ImageID parameter with this value. 12.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 12.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. 
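For illustration, a later debugging session on a node relies on nothing more than that key; the node address below is a placeholder, and the key path assumes the example key created in the procedure that follows:
$ ssh -i ~/.ssh/id_ed25519 core@<node_address>
If the private key identity is already loaded in your SSH agent, the -i option can be omitted.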
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 12.8. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation. 12.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer.
However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. 
If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 12.8.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. 
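For reference, a profile stored in that file uses the standard AWS credentials file layout; the values shown here are placeholders, not working credentials:
[default]
aws_access_key_id = <access_key_id>
aws_secret_access_key = <secret_access_key>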
Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 12.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.8.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. 
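Optionally, list the generated manifests first so that you can see the machine-related files that the following steps remove; this output is abbreviated, and the exact file names vary by release:
$ ls <installation_directory>/openshift/
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
99_openshift-cluster-api_worker-machineset-0.yaml
99_openshift-machine-api_master-control-plane-machine-set.yaml
...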
Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 12.9. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. 
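If jq is not already available on the installation host, you can usually install it from the distribution repositories; for example, on a RHEL or Fedora system:
$ sudo dnf install jq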
Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 12.10. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 12.10.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 12.17. 
CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: 
"AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.11. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. 
The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which as a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. 
This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 12.11.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 12.18. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. 
Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: 
"/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: 
ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 12.12. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . 
You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 12.12.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 12.19. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. 
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services 
FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 
ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: 
"*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.13. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 12.14. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 12.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-01860370941726bdd ap-east-1 ami-05bc702cdaf7e4251 ap-northeast-1 ami-098932fd93c15690d ap-northeast-2 ami-006f4e02d97910a36 ap-northeast-3 ami-0c4bd5b1724f82273 ap-south-1 ami-0cbf22b638724853d ap-south-2 ami-031f4d165f4b429c4 ap-southeast-1 ami-0dc3e381a731ab9fc ap-southeast-2 ami-032ae8d0f287a66a6 ap-southeast-3 ami-0393130e034b86423 ap-southeast-4 ami-0b38f776bded7d7d7 ca-central-1 ami-058ea81b3a1d17edd eu-central-1 ami-011010debd974a250 eu-central-2 ami-0623b105ae811a5e2 eu-north-1 ami-0c4bb9ce04f3526d4 eu-south-1 ami-06c29eccd3d74df52 eu-south-2 ami-00e0b5f3181a3f98b eu-west-1 ami-087bfa513dc600676 eu-west-2 ami-0ebad59c0e9554473 eu-west-3 ami-074e63b65eaf83f96 me-central-1 ami-0179d6ae1d908ace9 me-south-1 ami-0b60c75273d3efcd7 sa-east-1 ami-0913cbfbfa9a7a53c us-east-1 ami-0f71dcd99e6a1cd53 us-east-2 ami-0545fae7edbbbf061 us-gov-east-1 ami-081eabdc478e501e5 us-gov-west-1 ami-076102c394767f319 us-west-1 ami-0609e4436c4ae5eff us-west-2 ami-0c5d3e03c0ab9b19a Table 12.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-08dd66a61a2caa326 ap-east-1 ami-0232cd715f8168c34 ap-northeast-1 ami-0bc0b17618da96700 ap-northeast-2 ami-0ee505fb62eed2fd6 ap-northeast-3 ami-0462cd2c3b7044c77 ap-south-1 ami-0e0b4d951b43adc58 ap-south-2 ami-06d457b151cc0e407 ap-southeast-1 ami-0874e1640dfc15f17 ap-southeast-2 ami-05f215734ceb0f5ad ap-southeast-3 ami-073030df265c88b3f ap-southeast-4 ami-043f4c40a6fc3238a ca-central-1 ami-0840622f99a32f586 eu-central-1 ami-09a5e6ebe667ae6b5 eu-central-2 ami-0835cb1bf387e609a eu-north-1 ami-069ddbda521a10a27 eu-south-1 ami-09c5cc21026032b4c eu-south-2 ami-0c36ab2a8bbeed045 eu-west-1 ami-0d2723c8228cb2df3 eu-west-2 ami-0abd66103d069f9a8 eu-west-3 ami-08c7249d59239fc5c me-central-1 ami-0685f33ebb18445a2 me-south-1 ami-0466941f4e5c56fe6 sa-east-1 ami-08cdc0c8a972f4763 us-east-1 ami-0d461970173c4332d us-east-2 ami-0e9cdc0e85e0a6aeb us-gov-east-1 ami-0b896df727672ce09 us-gov-west-1 ami-0b896df727672ce09 us-west-1 ami-027b7cc5f4c74e6c1 us-west-2 ami-0b189d89b44bdfbf2 12.14.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file. 12.14.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. 
The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.14.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 12.15. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. 
The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. 
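If one of the cases above applies and you need a presigned URL rather than the s3:// schema, one way to generate it is with the AWS CLI; this is a minimal sketch, and the bucket name and expiry value are placeholders:
# Generate a time-limited HTTPS URL for the bootstrap Ignition config file (values are illustrative)
$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600
You can then supply the printed URL as the BootstrapIgnitionLocation parameter value in the next step instead of the s3:// path.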
Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. 
Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 12.15.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 12.20. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. 
Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: 
Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 12.16. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. 
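The parameter file in the following procedure reuses several values that earlier stacks produced, such as the private hosted zone ID, the target group ARNs, and the registration Lambda ARN. If you no longer have them at hand, one way to read them back is to query the outputs of a completed stack; the stack name in this sketch is illustrative:
# Print the outputs of a finished CloudFormation stack (replace cluster-dns with your DNS stack name)
$ aws cloudformation describe-stacks --stack-name cluster-dns --query 'Stacks[0].Outputs' --output table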
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. 
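Creating the three control plane instances can take a few minutes. Instead of polling manually, you can optionally block until the stack settles; this is a convenience command, with <name> matching the stack name that you chose above:
# Return once the stack reaches CREATE_COMPLETE; exits non-zero if creation fails
$ aws cloudformation wait stack-create-complete --stack-name <name>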
Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 12.16.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 12.21. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration 
Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.17. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. 
You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to launch the worker nodes into. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 12.17.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 12.22. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 12.18. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. 
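If you drive the installation from a wrapper script, you can branch on the result of the wait-for command instead of scanning its log output for a FATAL warning. The following is a minimal sketch rather than part of the official procedure; it assumes that the command returns a non-zero exit status when bootstrapping does not complete, and the INSTALL_DIR value is an illustrative placeholder:

#!/usr/bin/env bash
set -euo pipefail

INSTALL_DIR=./ocp-install   # illustrative placeholder for your installation directory

# Assumption: wait-for bootstrap-complete exits non-zero if bootstrapping fails or times out.
if ./openshift-install wait-for bootstrap-complete --dir "${INSTALL_DIR}" --log-level=info; then
  echo "Bootstrap complete; it is now safe to remove the bootstrap resources."
else
  echo "Bootstrap did not complete; gather bootstrap node diagnostic data before retrying." >&2
  exit 1
fi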
Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 12.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 12.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
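Before you continue, you can quickly confirm that the oc client from the preceding section is found on your PATH. This optional check is shown as a short sketch:

command -v oc         # prints the path to the oc binary if it is on your PATH
oc version --client   # prints only the client version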
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.22. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. 12.22.1. 
Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 12.22.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 12.22.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 12.23. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 12.24. 
Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 
3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 12.25. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. /validating-an-installation.adoc 12.26. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 12.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 12.28. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 12.29. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
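As a convenience when you return to the web console login steps in this section, the two lookups can be combined into a short sketch. The route named console in the openshift-console namespace is assumed to be the default route that the installation creates, and <installation_directory> remains a placeholder for your installation directory:

# Replace <installation_directory> with the path to your installation files.
cat <installation_directory>/auth/kubeadmin-password                        # kubeadmin password
oc get route console -n openshift-console -o jsonpath='{.spec.host}{"\n"}'  # console hostname; open it with https:// in a browser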
[ "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": 
\"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. 
Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. 
Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", 
"watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-aws-user-infra
Chapter 7. Red Hat Quay sizing and subscriptions
Chapter 7. Red Hat Quay sizing and subscriptions Scalability of Red Hat Quay is one of its key strengths, with a single code base supporting a broad spectrum of deployment sizes, including the following:
Proof of Concept deployment on a single development machine
Mid-size deployment of approximately 2,000 users that can serve content to dozens of Kubernetes clusters
High-end deployment such as Quay.io that can serve thousands of Kubernetes clusters world-wide
Since sizing heavily depends on a multitude of factors, such as the number of users, images, and concurrent pulls and pushes, there are no standard sizing recommendations. The following are the minimum requirements for systems running Red Hat Quay (per container/pod instance):
Quay: minimum 6 GB; recommended 8 GB, 2 or more vCPUs
Clair: recommended 2 GB RAM and 2 or more vCPUs
Storage: recommended 30 GB
NooBaa: minimum 2 GB, 1 vCPU (when the objectstorage component is selected by the Operator)
Clair database: minimum 5 GB required for security metadata
Stateless components of Red Hat Quay can be scaled out, but this will cause a heavier load on stateful backend services. 7.1. Red Hat Quay sample sizings The following table shows approximate sizing for Proof of Concept, mid-size, and high-end deployments. Whether a deployment runs appropriately with the same metrics depends on many factors not shown below.
Metric Proof of concept Mid-size High End (Quay.io)
No. of Quay containers by default 1 4 15
No. of Quay containers max at scale-out N/A 8 30
No. of Clair containers by default 1 3 10
No. of Clair containers max at scale-out N/A 6 15
No. of mirroring pods (to mirror 100 repositories) 1 5-10 N/A
Database sizing 2-4 Cores 6-8 GB RAM 10-20 GB disk 4-8 Cores 6-32 GB RAM 100 GB - 1 TB disk 32 cores 244 GB 1+ TB disk
Object storage backend sizing 10-100 GB 1-20 TB 50+ TB up to PB
Redis cache sizing 2 Cores 2-4 GB RAM 4 cores 28 GB RAM
Underlying node sizing (physical or virtual) 4 Cores 8 GB RAM 4-6 Cores 12-16 GB RAM Quay: 13 cores 56 GB RAM Clair: 2 cores 4 GB RAM
For further details on sizing and related recommendations for mirroring, see the section on repository mirroring. The sizing for the Redis cache is only relevant if you use Quay builders; otherwise it is not significant. 7.2. Red Hat Quay subscription information Red Hat Quay is available with Standard or Premium support, and subscriptions are based on deployments. Note Deployment means an installation of a single Red Hat Quay registry using a shared data backend. With a Red Hat Quay subscription, the following options are available:
There is no limit on the number of pods, such as Quay, Clair, Builder, and so on, that you can deploy.
Red Hat Quay pods can run in multiple data centers or availability zones.
Storage and database backends can be deployed across multiple data centers or availability zones, but only as a single, shared storage backend and single, shared database backend.
Red Hat Quay can manage content for an unlimited number of clusters or standalone servers.
Clients can access the Red Hat Quay deployment regardless of their physical location.
You can deploy Red Hat Quay on OpenShift Container Platform infrastructure nodes to minimize subscription requirements.
You can run the Container Security Operator (CSO) and the Quay Bridge Operator (QBO) on your OpenShift Container Platform clusters at no additional cost.
Note Red Hat Quay geo-replication requires a subscription for each storage replication. The database, however, is shared.
For more information about purchasing a Red Hat Quay subscription, see Red Hat Quay. 7.3. Using Red Hat Quay with or without internal registry Red Hat Quay can be used as an external registry in front of multiple OpenShift Container Platform clusters with their internal registries. Red Hat Quay can also be used in place of the internal registry when it comes to automating builds and deployment rollouts. The required coordination of Secrets and ImageStreams is automated by the Quay Bridge Operator, which can be launched from the OperatorHub for OpenShift Container Platform.
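As a minimal, illustrative sketch only (the registry hostname, project, and secret name below are placeholder assumptions, not values from this documentation), the manual wiring that the Quay Bridge Operator automates looks roughly like creating a pull secret for the external Quay registry and linking it to a service account:
oc create secret docker-registry quay-pull-secret \
  --docker-server=quay.example.com \
  --docker-username=<quay_user> \
  --docker-password=<quay_password> \
  -n <project>
oc secrets link default quay-pull-secret --for=pull -n <project>
Pods in that project can then pull images such as quay.example.com/<organization>/<repository>:<tag> without involving the internal registry.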
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_architecture/sizing-intro
Chapter 5. Preparing to update to OpenShift Container Platform 4.12
Chapter 5. Preparing to update to OpenShift Container Platform 4.12 OpenShift Container Platform 4.12 uses Kubernetes 1.25, which removed several deprecated APIs. A cluster administrator must provide a manual acknowledgment before the cluster can be updated from OpenShift Container Platform 4.11 to 4.12. This is to help prevent issues after upgrading to OpenShift Container Platform 4.12, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this evaluation and migration is complete, the administrator can provide the acknowledgment. Before you can update your OpenShift Container Platform 4.11 cluster to 4.12, you must provide the administrator acknowledgment. 5.1. Removed Kubernetes APIs OpenShift Container Platform 4.12 uses Kubernetes 1.25, which removed the following deprecated APIs. You must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the Kubernetes documentation. Table 5.1. APIs removed from Kubernetes 1.25
Resource Removed API Migrate to Notable changes
CronJob batch/v1beta1 batch/v1 No
EndpointSlice discovery.k8s.io/v1beta1 discovery.k8s.io/v1 Yes
Event events.k8s.io/v1beta1 events.k8s.io/v1 Yes
HorizontalPodAutoscaler autoscaling/v2beta1 autoscaling/v2 No
PodDisruptionBudget policy/v1beta1 policy/v1 Yes
PodSecurityPolicy policy/v1beta1 Pod Security Admission [1] Yes
RuntimeClass node.k8s.io/v1beta1 node.k8s.io/v1 No
For more information about pod security admission in OpenShift Container Platform, see Understanding and managing pod security admission. Additional resources Understanding and managing pod security admission 5.2. Evaluating your cluster for removed APIs There are several methods to help administrators identify where APIs that will be removed are in use. However, OpenShift Container Platform cannot identify all instances, especially workloads that are idle or external tools that are used. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs. 5.2.1. Reviewing alerts to identify uses of removed APIs Two alerts fire when an API is in use that will be removed in the next release:
APIRemovedInNextReleaseInUse - for APIs that will be removed in the next OpenShift Container Platform release.
APIRemovedInNextEUSReleaseInUse - for APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release.
If either of these alerts are firing in your cluster, review the alerts and take action to clear the alerts by migrating manifests and API clients to use the new API version. Use the APIRequestCount API to get more information about which APIs are in use and which workloads are using removed APIs, because the alerts do not provide this information. Additionally, some APIs might not trigger these alerts but are still captured by APIRequestCount. The alerts are tuned to be less sensitive to avoid alerting fatigue in production systems. 5.2.2. Using APIRequestCount to identify uses of removed APIs You can use the APIRequestCount API to track API requests and review whether any of them are using one of the removed APIs. Prerequisites You must have access to the cluster as a user with the cluster-admin role.
Procedure Run the following command and examine the REMOVEDINRELEASE column of the output to identify the removed APIs that are currently in use: USD oc get apirequestcounts Example output NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H ... poddisruptionbudgets.v1.policy 391 8114 poddisruptionbudgets.v1beta1.policy 1.25 2 23 podmonitors.v1.monitoring.coreos.com 3 70 podnetworkconnectivitychecks.v1alpha1.controlplane.operator.openshift.io 612 11748 pods.v1 1531 38634 podsecuritypolicies.v1beta1.policy 1.25 3 39 podtemplates.v1 2 79 preprovisioningimages.v1alpha1.metal3.io 2 39 priorityclasses.v1.scheduling.k8s.io 12 248 prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 1.26 3 86 ... Important You can safely ignore the following entries that appear in the results: The system:serviceaccount:kube-system:generic-garbage-collector and the system:serviceaccount:kube-system:namespace-controller users might appear in the results because these services invoke all registered APIs when searching for resources to remove. The system:kube-controller-manager and system:cluster-policy-controller users might appear in the results because they walk through all resources while enforcing various policies. You can also use -o jsonpath to filter the results: USD oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}' Example output 1.26 flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 1.26 horizontalpodautoscalers.v2beta2.autoscaling 1.25 poddisruptionbudgets.v1beta1.policy 1.25 podsecuritypolicies.v1beta1.policy 1.26 prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 5.2.3. Using APIRequestCount to identify which workloads are using the removed APIs You can examine the APIRequestCount resource for a given API version to help identify which workloads are using the API. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command and examine the username and userAgent fields to help identify the workloads that are using the API: USD oc get apirequestcounts <resource>.<version>.<group> -o yaml For example: USD oc get apirequestcounts poddisruptionbudgets.v1beta1.policy -o yaml You can also use -o jsonpath to extract the username and userAgent values from an APIRequestCount resource: USD oc get apirequestcounts poddisruptionbudgets.v1beta1.policy \ -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{","}{.userAgent}{"\n"}{end}' \ | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT Example output VERBS USERNAME USERAGENT watch system:serviceaccount:openshift-operators:3scale-operator manager/v0.0.0 watch system:serviceaccount:openshift-operators:datadog-operator-controller-manager manager/v0.0.0 5.3. Migrating instances of removed APIs For information about how to migrate removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation. 5.4. Providing the administrator acknowledgment After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you can acknowledge that your cluster is ready to upgrade from OpenShift Container Platform 4.11 to 4.12. Warning Be aware that all responsibility falls on the administrator to ensure that all uses of removed APIs have been resolved and migrated as necessary before providing this administrator acknowledgment. 
OpenShift Container Platform can assist with the evaluation, but cannot identify all possible uses of removed APIs, especially idle workloads or external tools. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command to acknowledge that you have completed the evaluation and your cluster is ready for the Kubernetes API removals in OpenShift Container Platform 4.12: USD oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.11-kube-1.25-api-removals-in-4.12":"true"}}' --type=merge
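As an illustrative sketch only (the resource name, namespace, and budget values are assumptions, not taken from this documentation), migrating a removed API such as a PodDisruptionBudget often amounts to updating the apiVersion in the manifest and reapplying it; consult the Kubernetes deprecation guide for any behavioral differences in the newer version:
apiVersion: policy/v1        # previously policy/v1beta1, removed in Kubernetes 1.25
kind: PodDisruptionBudget
metadata:
  name: example-pdb
  namespace: example-namespace
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example
After updating such manifests, apply them with oc apply -f <manifest>.yaml and confirm that the corresponding v1beta1 entries no longer accumulate requests in the APIRequestCount output described earlier in this chapter.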
[ "oc get apirequestcounts", "NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H poddisruptionbudgets.v1.policy 391 8114 poddisruptionbudgets.v1beta1.policy 1.25 2 23 podmonitors.v1.monitoring.coreos.com 3 70 podnetworkconnectivitychecks.v1alpha1.controlplane.operator.openshift.io 612 11748 pods.v1 1531 38634 podsecuritypolicies.v1beta1.policy 1.25 3 39 podtemplates.v1 2 79 preprovisioningimages.v1alpha1.metal3.io 2 39 priorityclasses.v1.scheduling.k8s.io 12 248 prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 1.26 3 86", "oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!=\"\")]}{.status.removedInRelease}{\"\\t\"}{.metadata.name}{\"\\n\"}{end}'", "1.26 flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 1.26 horizontalpodautoscalers.v2beta2.autoscaling 1.25 poddisruptionbudgets.v1beta1.policy 1.25 podsecuritypolicies.v1beta1.policy 1.26 prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io", "oc get apirequestcounts <resource>.<version>.<group> -o yaml", "oc get apirequestcounts poddisruptionbudgets.v1beta1.policy -o yaml", "oc get apirequestcounts poddisruptionbudgets.v1beta1.policy -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{\",\"}{.username}{\",\"}{.userAgent}{\"\\n\"}{end}' | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT", "VERBS USERNAME USERAGENT watch system:serviceaccount:openshift-operators:3scale-operator manager/v0.0.0 watch system:serviceaccount:openshift-operators:datadog-operator-controller-manager manager/v0.0.0", "oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.11-kube-1.25-api-removals-in-4.12\":\"true\"}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/updating_clusters/updating-cluster-prepare
22.16.12. Configuring Symmetric Authentication Using a Key
22.16.12. Configuring Symmetric Authentication Using a Key To configure symmetric authentication using a key, add the following option to the end of a server or peer command: key number where number is in the range 1 to 65534 inclusive. This option enables the use of a message authentication code (MAC) in packets. This option is for use with the peer, server, broadcast, and manycastclient commands. The option can be used in the /etc/ntp.conf file as follows: See also Section 22.6, "Authentication Options for NTP".
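As an example sketch (the key ID, key value, and server address are placeholders rather than prescriptive values), the key number referenced by a server or peer command must also be defined in the keys file and marked as trusted in /etc/ntp.conf:
# /etc/ntp/keys
10 M mysecretphrase
# /etc/ntp.conf
keys /etc/ntp/keys
trustedkey 10
server 192.168.1.1 key 10
Here 10 is the key number and M selects an MD5 key; the same key number must appear in the keys file, in the trustedkey directive, and on the server or peer line, and both hosts must share an identical key definition.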
[ "server 192.168.1.1 key 10 broadcast 192.168.1.255 key 20 manycastclient 239.255.254.254 key 30" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_configuring_symmetric_authentication_using_a_key
12.20. LDAP Translator
12.20. LDAP Translator 12.20.1. LDAP Translator The LDAP translator exposes an LDAP directory tree relationally with pushdown support for filtering via criteria. This is typically coupled with the LDAP resource adapter. The LDAP translator is implemented by the org.teiid.translator.ldap.LDAPExecutionFactory class and known by the translator type name ldap. Note The resource adapter for this translator is provided by configuring the ldap data source in the JBoss EAP instance. See the Red Hat JBoss Data Virtualization Administration and Configuration Guide for more configuration information. 12.20.2. LDAP Translator: Execution Properties Table 12.15. Execution Properties
Name Description Default
SearchDefaultBaseDN Default Base DN for LDAP Searches null
SearchDefaultScope Default Scope for LDAP Searches. Can be one of SUBTREE_SCOPE, OBJECT_SCOPE, ONELEVEL_SCOPE. ONELEVEL_SCOPE
RestrictToObjectClass Restrict Searches to objectClass named in the Name field for a table false
UsePagination Use a PagedResultsControl to page through large results. This is not supported by all directory servers. false
ExceptionOnSizeLimitExceeded Set to true to throw an exception when a SizeLimitExceededException is received and a LIMIT is not properly enforced. false
Note There are no import settings for the LDAP translator; it also does not provide metadata. If one of the methods below is not used and the attribute is mapped to a non-array type, then any value may be returned on a read operation. Also, insert/update/delete support will not be multi-value aware. String columns with a default value of "multivalued-concat" will concatenate all attribute values together in alphabetical order using a ? delimiter. If a multivalued attribute does not have a default value of "multivalued-concat", then any value may be returned. Multiple attribute values may also be supported as an array type. The array type mapping also allows for insert/update operations. This example shows a DDL with objectClass and uniqueMember as arrays: The array values can be retrieved with a SELECT. Here is an example insert with array values: 12.20.3. LDAP Translator: Native Queries LDAP procedures may optionally have native queries associated with them (see Section 12.7, "Parameterizable Native Queries"). The operation prefix (for example, select;, insert;, update;, delete; - see the native procedure logic below) must be present in the native query, but it will not be issued as part of the query to the source. The following is an example DDL for an LDAP native procedure: Note Parameter values have reserved characters escaped, but are otherwise directly substituted into the query. 12.20.4. LDAP Translator: Native Procedure Warning This feature is turned off by default because of the security risk of executing arbitrary commands against the source. To enable this feature, override the translator property called "SupportsNativeQueries" to true. See Section 12.6, "Override Execution Properties". The LDAP translator provides a procedure named native that gives you the ability to execute any ad hoc native LDAP queries directly against the source without any JBoss Data Virtualization parsing or resolving. The metadata of this procedure's execution results are not known to JBoss Data Virtualization, and they are returned as an object array. Users can use the ARRAYTABLE construct (Section 2.6.10, "Nested Tables: ARRAYTABLE") to build tabular output for consumption by client applications.
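For example, a hedged sketch only (the source model name ldapModel, the context name, and the attributes uid and cn are assumptions, not names defined in this documentation) of invoking the native procedure and shaping its object-array results with ARRAYTABLE:
SELECT x.* FROM
  (call ldapModel.native('search;context-name=corporate;filter=(objectClass=*);count-limit=5;attributes=uid,cn')) w,
  ARRAYTABLE(w.tuple COLUMNS "uid" string, "cn" string) AS x
The 'search;...' string in this sketch follows the simple query structure described in the sections that follow.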
Since there is no known direct query language for LDAP, JBoss Data Virtualization exposes this procedure with a simple query structure as below. 12.20.5. LDAP Translator Example: Search Example 12.7. Search Example The " search " keyword is followed by the properties below. Each property must be delimited by a semicolon (;). If a property contains a semicolon (;), it must be escaped by another semicolon. See also Section 12.7, "Parameterizable Native Queries" and the example in Section 12.20.3, "LDAP Translator: Native Queries" . Name Description Required context-name LDAP Context name Yes filter Query to filter the records in the context No count-limit Limit the number of results; same as using LIMIT No timeout Time out the query if it is not finished in the given number of milliseconds No search-scope LDAP search scope, one of SUBTREE_SCOPE, OBJECT_SCOPE, ONELEVEL_SCOPE No attributes Attributes to retrieve Yes 12.20.6. LDAP Translator Example: Delete Example 12.8. Delete Example In the above code, the " delete " keyword is followed by the "DN" string. All the string contents after the "delete;" are used as the DN. 12.20.7. LDAP Translator Example: Create and Update Example 12.9. Create Example In the above code, the " create " keyword is followed by the "DN" string. All the string contents after the "create;" are used as the DN. It also takes one property called "attributes", which is a comma-separated list of attributes. The value for each attribute is specified as a separate argument to the "native" procedure. Update is similar to create: Example 12.10. Update Example Important By default, the name of the procedure that executes the queries directly is called native ; however, this can be changed by overriding an execution property in the vdb.xml file. See Section 12.6, "Override Execution Properties" . 12.20.8. LDAP Connector Capabilities Support LDAP does not provide the same set of functionality as a relational database. The LDAP Connector supports many standard SQL constructs, and performs the job of translating those constructs into an equivalent LDAP search statement. For example, the SQL statement: SELECT firstname, lastname, guid FROM public_views.people WHERE (lastname='Jones' and firstname IN ('Michael', 'John')) OR guid > 600000 uses a number of SQL constructs, including: SELECT clause support select individual element support (firstname, lastname, guid) FROM support WHERE clause criteria support nested criteria support AND, OR support Compare criteria (Greater-than) support IN support The LDAP Connector executes LDAP searches by pushing down the equivalent LDAP search filter whenever possible, based on the supported capabilities. JBoss Data Virtualization automatically provides additional database functionality when the LDAP Connector does not explicitly provide support for a given SQL construct. In these cases, the SQL construct cannot be pushed down to the data source, so it will be evaluated in JBoss Data Virtualization, in order to ensure that the operation is performed. In cases where certain SQL capabilities cannot be pushed down to LDAP, JBoss Data Virtualization pushes down the capabilities that are supported, and fetches a set of data from LDAP. JBoss Data Virtualization then evaluates the additional capabilities, creating a subset of the original data set. Finally, JBoss Data Virtualization will pass the result to the client. It is useful to be aware of unsupported capabilities, in order to avoid fetching large data sets from LDAP when possible. 12.20.9.
LDAP Connector Capabilities Support List The LDAP Connector has these capabilities: SELECT queries SELECT element pushdown (for example, individual attribute selection) AND criteria Compare criteria (e.g., <, <=, >, >=, =, !=) IN criteria LIKE criteria OR criteria INSERT , UPDATE , DELETE statements (must meet Modeling requirements) Due to the nature of the LDAP source, the following capability is not supported: SELECT queries The following capabilities are not supported in the LDAP Connector, and will be evaluated by JBoss Data Virtualization after data is fetched by the connector: Functions Aggregates BETWEEN Criteria Case Expressions Aliased Groups Correlated Subqueries EXISTS Criteria Joins Inline views IS NULL criteria NOT criteria ORDER BY Quantified compare criteria Row Offset Searched Case Expressions Select Distinct Select Literals UNION XA Transactions The ldap-as-a-datasource quick start shows you how to access data in the OpenLDAP Server. Use the ldap translator in the vdb.xml file. The translator does not provide a connection to OpenLDAP. Instead, you can use a JCA adapter that uses the Java Naming API. To do so, use the following XML fragment in the standalone-teiid.xml file. See an example in JBOSS-HOME/docs/teiid/datasources/ldap . The code above defines the translator and connector. The LDAP translator can derive the metadata based on existing users and groups in the LDAP server, or the user can define the metadata explicitly. For example, you can define a schema using DDL: When the SELECT operation is executed against a table using JDV, it retrieves the users and groups from the LDAP Server: 12.20.10. LDAP Attribute Datatype Support LDAP providers currently return attribute value types of java.lang.String and byte[] , and do not support the ability to return any other attribute value type. The LDAP Connector currently supports attribute value types of java.lang.String only. Therefore, all attributes are modeled using the String datatype in Teiid Designer. Conversion functions that are available in JBoss Data Virtualization allow you to use models that convert a String value from LDAP into a different data type. Some conversions may be applied implicitly, and do not require the use of any conversion functions. Other conversions must be applied explicitly, via the use of CONVERT functions. Since the CONVERT functions are not supported by the underlying LDAP system, they will be evaluated in JBoss Data Virtualization. Therefore, if any criteria are evaluated against a converted datatype, that evaluation cannot be pushed to the data source, since the native type is String. Note When converting from String to other types, be aware that criteria against that new data type will not be pushed down to the LDAP data source. This may decrease performance for certain queries. As an alternative, the data type can remain a string and the client application can make the conversion itself (see the hedged query sketch after the code listings below). LDAP supports <= and >=, but has no equivalent for < or >. In order to support < or > pushdown to the source, the LDAP Connector will translate < to <=, and it will translate > to >=. When using the LDAP Connector, be aware that strictly-less-than and strictly-greater-than comparisons will behave differently than expected. It is advisable to use <= and >= for queries against an LDAP-based data source, since this has a direct mapping to comparison operators in LDAP. 12.20.11.
LDAP: Testing Your Connector You must define LDAP Connector properties accurately or the JBoss Data Virtualization server will return unexpected results, or none at all. As you deploy the connector in Console, improper configuration can lead to problems when you attempt to start your connector. You can test your LDAP Connector in Teiid Designer prior to Console deployment by submitting queries at modeling time for verification. 12.20.12. LDAP: Console Deployment Issues The Console shows an Exception That Says Error Synchronizing the Server If you receive an exception when you synchronize the server and your LDAP Connector is the only service that does not start, it means that there was a problem starting the connector. Verify whether you have correctly typed in your connector properties to resolve this issue.
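As a concrete illustration of the kind of verification query you might submit when testing, the following hedged sketch runs against the HR_Group table defined in the DDL shown in the code listings below; the filter value is hypothetical:

-- LIKE and compare criteria are in the supported capabilities list, so this
-- filter can be pushed down into the LDAP search rather than evaluated in
-- JBoss Data Virtualization.
SELECT UID, NAME, MAIL FROM HR_Group WHERE NAME LIKE 'A%'

If a query like this returns no rows or an error, re-check the connector properties and the nameinsource values in the table definition before deploying to the Console.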
[ "create foreign table ldap_groups (objectClass string[], DN string, name string options (nameinsource 'cn'), uniqueMember string[]) options (nameinsource 'ou=groups,dc=teiid,dc=org', updatable true)", "insert into ldap_groups (objectClass, DN, name, uniqueMember) values (('top', 'groupOfUniqueNames'), 'cn=a,ou=groups,dc=teiid,dc=org', 'a', ('cn=Sam Smith,ou=people,dc=teiid,dc=org',))", "CREATE FOREIGN PROCEDURE proc (arg1 integer, arg2 string) OPTIONS (\"teiid_rel:native-query\" 'search;context-name=corporate;filter=(&(objectCategory=person)(objectClass=user)(!cn=USD2));count-limit=5;timeout=USD1;search-scope=ONELEVEL_SCOPE;attributes=uid,cn') returns (col1 string, col2 string);", "SELECT x.* FROM (call pm1.native('search;context-name=corporate;filter=(objectClass=*);count-limit=5;timeout=6;search-scope=ONELEVEL_SCOPE;attributes=uid,cn')) w, ARRAYTABLE(w.tuple COLUMNS \"uid\" string , \"cn\" string) AS x", "SELECT x.* FROM (call pm1.native('delete;uid=doe,ou=people,o=teiid.org')) w, ARRAYTABLE(w.tuple COLUMNS \"updatecount\" integer) AS x", "SELECT x.* FROM (call pm1.native('create;uid=doe,ou=people,o=teiid.org;attributes=one,two,three', 'one', 2, 3.0)) w, ARRAYTABLE(w.tuple COLUMNS \"update_count\" integer) AS x", "SELECT x.* FROM (call pm1.native('update;uid=doe,ou=people,o=teiid.org;attributes=one,two,three', 'one', 2, 3.0)) w, ARRAYTABLE(w.tuple COLUMNS \"update_count\" integer) AS x", "SELECT firstname, lastname, guid FROM public_views.people WHERE (lastname='Jones' and firstname IN ('Michael', 'John')) OR guid > 600000", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <vdb name=\"ldapVDB\" version=\"1\"> <model name=\"HRModel\"> <source name=\"local\" translator-name=\"ldap\" connection-jndi-name=\"java:/ldapDS\"/> </model> </vdb>", "<resource-adapter id=\"ldapQS\"> <module slot=\"main\" id=\"org.jboss.teiid.resource-adapter.ldap\"/> <connection-definitions> <connection-definition class-name=\"org.teiid.resource.adapter.ldap.LDAPManagedConnectionFactory\" jndi-name=\"java:/ldapDS\" enabled=\"true\" use-java-context=\"true\" pool-name=\"ldapDS\"> <config-property name=\"LdapAdminUserPassword\"> redhat </config-property> <config-property name=\"LdapAdminUserDN\"> cn=Manager,dc=example,dc=com </config-property> <config-property name=\"LdapUrl\"> ldap://localhost:389 </config-property> </connection-definition> </connection-definitions> </resource-adapter>", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <vdb name=\"ldapVDB\" version=\"1\"> <model name=\"HRModel\"> <metadata type=\"DDL\"><![CDATA[ CREATE FOREIGN TABLE HR_Group ( DN string options (nameinsource 'dn'), SN string options (nameinsource 'sn'), UID string options (nameinsource 'uid'), MAIL string options (nameinsource 'mail'), NAME string options (nameinsource 'cn') ) OPTIONS(nameinsource 'ou=HR,dc=example,dc=com', updatable true); </metadata> </model> </vdb>", "SELECT * FROM HR_Group" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-ldap_translator
Chapter 6. Network security
Chapter 6. Network security 6.1. Understanding network policy APIs Kubernetes offers two features that users can use to enforce network security. One feature that allows users to enforce network policy is the NetworkPolicy API, which is designed mainly for application developers and namespace tenants to protect their namespaces by creating namespace-scoped policies. The second feature is AdminNetworkPolicy , which consists of two APIs: the AdminNetworkPolicy (ANP) API and the BaselineAdminNetworkPolicy (BANP) API. ANP and BANP are designed for cluster and network administrators to protect their entire cluster by creating cluster-scoped policies. Cluster administrators can use ANPs to enforce non-overridable policies that take precedence over NetworkPolicy objects. Administrators can use BANP to set up and enforce optional cluster-scoped network policy rules that are overridable by users using NetworkPolicy objects when necessary. When used together, ANP, BANP, and network policy can achieve full multi-tenant isolation that administrators can use to secure their cluster. OVN-Kubernetes CNI in OpenShift Container Platform implements these network policies using Access Control List (ACL) Tiers to evaluate and apply them. ACLs are evaluated in descending order from Tier 1 to Tier 3. Tier 1 evaluates AdminNetworkPolicy (ANP) objects. Tier 2 evaluates NetworkPolicy objects. Tier 3 evaluates BaselineAdminNetworkPolicy (BANP) objects. ANPs are evaluated first. When the match is an ANP allow or deny rule, any existing NetworkPolicy and BaselineAdminNetworkPolicy (BANP) objects in the cluster are skipped from evaluation. When the match is an ANP pass rule, evaluation moves from tier 1 of the ACL to tier 2, where the NetworkPolicy policy is evaluated. If no NetworkPolicy matches the traffic, then evaluation moves from tier 2 ACLs to tier 3 ACLs, where BANP is evaluated. 6.1.1. Key differences between AdminNetworkPolicy and NetworkPolicy custom resources The following table explains key differences between the cluster-scoped AdminNetworkPolicy API and the namespace-scoped NetworkPolicy API. Policy elements AdminNetworkPolicy NetworkPolicy Applicable user Cluster administrator or equivalent Namespace owners Scope Cluster Namespaced Drop traffic Supported with an explicit Deny action set as a rule. Supported via implicit Deny isolation at policy creation time. Delegate traffic Supported with a Pass action set as a rule. Not applicable Allow traffic Supported with an explicit Allow action set as a rule. The default action for all rules is to allow. Rule precedence within the policy Depends on the order in which they appear within an ANP. The higher the rule's position, the higher the precedence. Rules are additive Policy precedence Among ANPs, the priority field sets the order for evaluation. The lower the priority number, the higher the policy precedence. There is no policy ordering between policies. Feature precedence Evaluated first via tier 1 ACL and BANP is evaluated last via tier 3 ACL. Enforced after ANP and before BANP, they are evaluated in tier 2 of the ACL. Matching pod selection Can apply different rules across namespaces. Can apply different rules across pods in a single namespace. Cluster egress traffic Supported via nodes and networks peers Supported through ipBlock field along with accepted CIDR syntax.
Cluster ingress traffic Not supported Not supported Fully qualified domain names (FQDN) peer support Not supported Not supported Namespace selectors Supports advanced selection of Namespaces with the use of namespaces.matchLabels field Supports label-based namespace selection with the use of namespaceSelector field 6.2. Admin network policy 6.2.1. OVN-Kubernetes AdminNetworkPolicy 6.2.1.1. AdminNetworkPolicy An AdminNetworkPolicy (ANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use ANP to secure your network by creating network policies before creating namespaces. Additionally, you can create network policies at a cluster-scoped level that are non-overridable by NetworkPolicy objects. The key difference between AdminNetworkPolicy and NetworkPolicy objects is that the former is for administrators and is cluster scoped, while the latter is for tenant owners and is namespace scoped. An ANP allows administrators to specify the following: A priority value that determines the order of its evaluation. The lower the value, the higher the precedence. A set of pods, consisting of a set of namespaces or a namespace, on which the policy is applied. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . AdminNetworkPolicy example Example 6.1. Example YAML file for an ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: "deny-all-ingress-tenant-1" 5 action: "Deny" from: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 6 egress: 7 - name: "pass-all-egress-to-tenant-1" action: "Pass" to: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 1 Specify a name for your ANP. 2 The spec.priority field supports a maximum of 100 ANPs in the range of values 0-99 in a cluster. The lower the value, the higher the precedence because the range is read in order from the lowest to highest value. Because there is no guarantee which policy takes precedence when ANPs are created at the same priority, set ANPs at different priorities so that precedence is deliberate. 3 Specify the namespace to apply the ANP resource. 4 ANPs have both ingress and egress rules. ANP rules for the spec.ingress field accept values of Pass , Deny , and Allow for the action field. 5 Specify a name for the ingress.name . 6 Specify podSelector.matchLabels to select pods within the namespaces selected by namespaceSelector.matchLabels as ingress peers. 7 ANPs have both ingress and egress rules. ANP rules for the spec.egress field accept values of Pass , Deny , and Allow for the action field. Additional resources Network Policy API Working Group 6.2.1.1.1. AdminNetworkPolicy actions for rules As an administrator, you can set Allow , Deny , or Pass as the action field for your AdminNetworkPolicy rules. Because OVN-Kubernetes uses tiered ACLs to evaluate network traffic rules, ANPs allow you to set very strong policy rules that can only be changed by an administrator modifying them, deleting the rule, or overriding them by setting a higher priority rule.
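Before the action-specific examples that follow, here is a hedged sketch of creating and inspecting an ANP from the command line. The file name is a placeholder, and the anp short name is the same one used by the troubleshooting commands later in this chapter:

oc apply -f sample-anp-deny-pass-rules.yaml
oc get adminnetworkpolicy
oc describe anp sample-anp-deny-pass-rules

The Status conditions printed by oc describe anp are the same ones discussed in the troubleshooting section later in this chapter.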
AdminNetworkPolicy Allow example The following ANP, which is defined at priority 9, ensures all ingress traffic is allowed from the monitoring namespace towards any tenant (all other namespaces) in the cluster. Example 6.2. Example YAML file for a strong Allow ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. ingress: - name: "allow-ingress-from-monitoring" action: "Allow" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is an example of a strong Allow ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored using NetworkPolicy objects, and the monitoring tenant also has no say in what it can or cannot monitor. AdminNetworkPolicy Deny example The following ANP, which is defined at priority 5, ensures all ingress traffic from the monitoring namespace is blocked towards restricted tenants (namespaces that have the label security: restricted ). Example 6.3. Example YAML file for a strong Deny ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... This is a strong Deny ANP that is non-overridable by all the parties involved. The restricted tenant owners cannot authorize themselves to allow monitoring traffic, and the infrastructure's monitoring service cannot scrape anything from these sensitive namespaces. When combined with the strong Allow example, the block-monitoring ANP has a lower priority value, giving it higher precedence, which ensures restricted tenants are never monitored. AdminNetworkPolicy Pass example The following ANP, which is defined at priority 7, ensures all ingress traffic from the monitoring namespace towards internal infrastructure tenants (namespaces that have the label security: internal ) is passed on to tier 2 of the ACLs and evaluated by the namespaces' NetworkPolicy objects. Example 6.4. Example YAML file for a strong Pass ANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: "pass-ingress-from-monitoring" action: "Pass" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... This example is a strong Pass action ANP because it delegates the decision to NetworkPolicy objects defined by tenant owners. This pass-monitoring ANP allows all tenant owners grouped at security level internal to choose if their metrics should be scraped by the infrastructure's monitoring service using namespace-scoped NetworkPolicy objects. 6.2.2. OVN-Kubernetes BaselineAdminNetworkPolicy 6.2.2.1. BaselineAdminNetworkPolicy BaselineAdminNetworkPolicy (BANP) is a cluster-scoped custom resource definition (CRD). As an OpenShift Container Platform administrator, you can use BANP to set up and enforce optional baseline network policy rules that are overridable by users using NetworkPolicy objects if need be. Rule actions for BANP are allow or deny .
The BaselineAdminNetworkPolicy resource is a cluster singleton object that can be used as a guardrail policy in case a passed traffic policy does not match any NetworkPolicy objects in the cluster. A BANP can also be used as a default security model that provides guardrails so that intra-cluster traffic is blocked by default and a user needs to use NetworkPolicy objects to allow known traffic. You must use default as the name when creating a BANP resource. A BANP allows administrators to specify: A subject that consists of a set of namespaces or namespace. A list of ingress rules to be applied for all ingress traffic towards the subject . A list of egress rules to be applied for all egress traffic from the subject . BaselineAdminNetworkPolicy example Example 6.5. Example YAML file for BANP apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: "deny-all-ingress-from-tenant-1" 4 action: "Deny" from: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: "allow-all-egress-to-tenant-1" action: "Allow" to: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1 1 The policy name must be default because BANP is a singleton object. 2 Specify the namespace to apply the BANP to. 3 BANPs have both ingress and egress rules. BANP rules for the spec.ingress and spec.egress fields accept values of Deny and Allow for the action field. 4 Specify a name for the ingress.name . 5 Specify the namespaces to select the pods from to apply the BANP resource. 6 Specify the podSelector.matchLabels of the pods to which the BANP resource applies. BaselineAdminNetworkPolicy Deny example The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at the internal security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the ANP pass-monitoring policy. Example 6.6. Example YAML file for a guardrail Deny rule apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: "deny-ingress-from-monitoring" action: "Deny" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring # ... You can use an AdminNetworkPolicy resource with a Pass value for the action field in conjunction with the BaselineAdminNetworkPolicy resource to create a multi-tenant policy. This multi-tenant policy allows one tenant to collect monitoring data on their application while simultaneously not collecting data from a second tenant. As an administrator, if you apply both the "AdminNetworkPolicy Pass example" and the "BaselineAdminNetworkPolicy Deny example", tenants can then choose to create a NetworkPolicy resource that will be evaluated before the BANP. For example, Tenant 1 can set up the following NetworkPolicy resource to monitor ingress traffic: Example 6.7. Example NetworkPolicy apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring # ...
In this scenario, Tenant 1's policy would be evaluated after the "AdminNetworkPolicy Pass example" and before the "BaselineAdminNetworkPolicy Deny example", which denies all ingress monitoring traffic coming into tenants with security level internal . With Tenant 1's NetworkPolicy object in place, they will be able to collect data on their application. Tenant 2, however, who does not have any NetworkPolicy objects in place, will not be able to collect data. As an administrator, you do not monitor internal tenants by default; instead, you created a BANP that allows tenants to use NetworkPolicy objects to override the default behavior of your BANP. 6.2.3. Monitoring ANP and BANP AdminNetworkPolicy and BaselineAdminNetworkPolicy resources have metrics that can be used for monitoring and managing your policies. See the following table for more details on the metrics. 6.2.3.1. Metrics for AdminNetworkPolicy Name Description Explanation ovnkube_controller_admin_network_policies Not applicable The total number of AdminNetworkPolicy resources in the cluster. ovnkube_controller_baseline_admin_network_policies Not applicable The total number of BaselineAdminNetworkPolicy resources in the cluster. The value should be 0 or 1. ovnkube_controller_admin_network_policies_rules direction : specifies either Ingress or Egress . action : specifies either Pass , Allow , or Deny . The total number of rules across all ANP policies in the cluster grouped by direction and action . ovnkube_controller_baseline_admin_network_policies_rules direction : specifies either Ingress or Egress . action : specifies either Allow or Deny . The total number of rules across all BANP policies in the cluster grouped by direction and action . ovnkube_controller_admin_network_policies_db_objects table_name : specifies either ACL or Address_Set The total number of OVN Northbound database (nbdb) objects that are created by all the ANPs in the cluster grouped by the table_name . ovnkube_controller_baseline_admin_network_policies_db_objects table_name : specifies either ACL or Address_Set The total number of OVN Northbound database (nbdb) objects that are created by all the BANPs in the cluster grouped by the table_name . 6.2.4. Egress nodes and networks peer for AdminNetworkPolicy This section explains nodes and networks peers. Administrators can use the examples in this section to design AdminNetworkPolicy and BaselineAdminNetworkPolicy to control northbound traffic in their cluster. 6.2.4.1. Northbound traffic controls for AdminNetworkPolicy and BaselineAdminNetworkPolicy In addition to supporting east-west traffic controls, ANP and BANP also allow administrators to control their northbound traffic leaving the cluster or traffic leaving the node to other nodes in the cluster. End-users can do the following: Implement egress traffic control towards cluster nodes using nodes egress peer Implement egress traffic control towards Kubernetes API servers using nodes or networks egress peers Implement egress traffic control towards external destinations outside the cluster using networks peer Note For ANP and BANP, nodes and networks peers can be specified for egress rules only. 6.2.4.1.1. Using nodes peer to control egress traffic to cluster nodes Using the nodes peer, administrators can control egress traffic from pods to nodes in the cluster. A benefit of this is that you do not have to change the policy when nodes are added to or deleted from the cluster.
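Because nodes peers select nodes by label, it can help to confirm which nodes carry the labels that your policy will match before you write the rules. A hedged sketch using standard label selectors (the labels shown are the ones used in the example that follows):

oc get nodes -l node-role.kubernetes.io/worker
oc get nodes -l node-role.kubernetes.io/control-plane

Nodes listed by these commands are the ones the corresponding nodes peer will select.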
The following example allows egress traffic to the Kubernetes API server on port 6443 by any of the namespaces with a restricted , confidential , or internal level of security using the node selector peer. It also denies traffic to all worker nodes in your cluster from any of the namespaces with a restricted , confidential , or internal level of security. Example 6.8. Example of ANP Allow egress using nodes peer apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: egress-security-allow spec: egress: - action: Deny to: - nodes: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists - action: Allow name: allow-to-kubernetes-api-server-and-engr-dept-pods ports: - portNumber: port: 6443 protocol: TCP to: - nodes: 1 matchExpressions: - key: node-role.kubernetes.io/control-plane operator: Exists - pods: 2 namespaceSelector: matchLabels: dept: engr podSelector: {} priority: 55 subject: 3 namespaces: matchExpressions: - key: security 4 operator: In values: - restricted - confidential - internal 1 Specifies a node or set of nodes in the cluster using the matchExpressions field. 2 Specifies all the pods labeled with dept: engr . 3 Specifies the subject of the ANP, which includes any namespaces that match the labels used by the network policy. The example matches any of the namespaces with a restricted , confidential , or internal level of security . 4 Specifies key/value pairs for the matchExpressions field. 6.2.4.1.2. Using networks peer to control egress traffic towards external destinations Cluster administrators can use CIDR ranges in the networks peer and apply a policy to control egress traffic leaving pods for a destination whose IP address is within the CIDR range specified in the networks field. The following example uses networks peer and combines ANP and BANP policies to restrict egress traffic. Important Use the empty selector ({}) in the namespace field for ANP and BANP with caution. When using an empty selector, it also selects OpenShift namespaces. If you use values of 0.0.0.0/0 in an ANP or BANP Deny rule, you must set a higher priority ANP Allow rule to necessary destinations before setting the Deny to 0.0.0.0/0 . Example 6.9. Example of ANP and BANP using networks peers apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: network-as-egress-peer spec: priority: 70 subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: "deny-egress-to-external-dns-servers" action: "Deny" to: - networks: 1 - 8.8.8.8/32 - 8.8.4.4/32 - 208.67.222.222/32 ports: - portNumber: protocol: UDP port: 53 - name: "allow-all-egress-to-intranet" action: "Allow" to: - networks: 2 - 89.246.180.0/22 - 60.45.72.0/22 - name: "allow-all-intra-cluster-traffic" action: "Allow" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. - name: "pass-all-egress-to-internet" action: "Pass" to: - networks: - 0.0.0.0/0 3 --- apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: "deny-all-egress-to-internet" action: "Deny" to: - networks: - 0.0.0.0/0 4 --- 1 Use networks to specify a range of CIDR networks outside of the cluster. 2 Specifies the CIDR ranges for the intra-cluster traffic from your resources.
3 4 Specifies a Deny egress to everything by setting networks values to 0.0.0.0/0 . Make sure you have a higher priority Allow rule to necessary destinations before setting a Deny to 0.0.0.0/0 because this will deny all traffic, including to the Kubernetes API and DNS servers. Collectively, the network-as-egress-peer ANP and the default BANP using networks peers enforce the following egress policy: All pods cannot talk to external DNS servers at the listed IP addresses. All pods can talk to the rest of the company's intranet. All pods can talk to other pods, nodes, and services. All pods cannot talk to the internet. Combining the last ANP Pass rule with the strong BANP Deny rule creates a guardrail policy that secures traffic in the cluster. 6.2.4.1.3. Using nodes peer and networks peer together Cluster administrators can combine nodes and networks peers in their ANP and BANP policies. Example 6.10. Example of nodes and networks peer apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: egress-peer-1 1 spec: egress: 2 - action: "Allow" name: "allow-egress" to: - nodes: matchExpressions: - key: worker-group operator: In values: - workloads # Egress traffic from nodes with label worker-group: workloads is allowed. - networks: - 104.154.164.170/32 - pods: namespaceSelector: matchLabels: apps: external-apps podSelector: matchLabels: app: web # This rule in the policy allows the traffic directed to pods labeled apps: web in projects with apps: external-apps to leave the cluster. - action: "Deny" name: "deny-egress" to: - nodes: matchExpressions: - key: worker-group operator: In values: - infra # Egress traffic from nodes with label worker-group: infra is denied. - networks: - 104.154.164.160/32 # Egress traffic to this IP address from cluster is denied. - pods: namespaceSelector: matchLabels: apps: internal-apps podSelector: {} - action: "Pass" name: "pass-egress" to: - nodes: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists # All other egress traffic is passed to NetworkPolicy or BANP for evaluation. priority: 30 3 subject: 4 namespaces: matchLabels: apps: all-apps 1 Specifies the name of the policy. 2 For nodes and networks peers, you can only use northbound traffic controls in ANP as egress . 3 Specifies the priority of the ANP, determining the order in which they should be evaluated. Lower priority values have higher precedence. ANP accepts values of 0-99 with 0 being the highest priority and 99 being the lowest. 4 Specifies the set of pods in the cluster on which the rules of the policy are to be applied. In the example, any pods with the apps: all-apps label across all namespaces are the subject of the policy. 6.2.5. Troubleshooting AdminNetworkPolicy 6.2.5.1. Checking creation of ANP To check that your AdminNetworkPolicy (ANP) and BaselineAdminNetworkPolicy (BANP) are created correctly, check the status outputs of the following commands: oc describe anp or oc describe banp . A good status indicates that OVN DB plumbing was successful and the reason is SetupSucceeded . Example 6.11. Example ANP with a good status ...
Conditions: Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-control-plane Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-worker Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-worker2 ... If plumbing is unsuccessful, an error is reported from the respective zone controller. Example 6.12. Example of an ANP with a bad status and error message ... Status: Conditions: Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-worker-1.example.example-org.net Last Transition Time: 2024-06-25T12:47:45Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-worker-0.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-1.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-2.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-0.example.example-org.net See the following section for nbctl commands to help troubleshoot unsuccessful policies. 6.2.5.1.1. Using nbctl commands for ANP and BANP To troubleshoot an unsuccessful setup, start by looking at OVN Northbound database (nbdb) objects including ACL , Address_Set , and Port_Group . To view the objects in a node's database, you need to be inside the pod on that node. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) installed. Note To run ovn-nbctl commands in a cluster, you must open a remote shell into the nbdb container on the relevant node. The following policy was used to generate outputs. Example 6.13.
AdminNetworkPolicy used to generate outputs apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: cluster-control spec: priority: 34 subject: namespaces: matchLabels: anp: cluster-control-anp # Only namespaces with this label have this ANP ingress: - name: "allow-from-ingress-router" # rule0 action: "Allow" from: - namespaces: matchLabels: policy-group.network.openshift.io/ingress: "" - name: "allow-from-monitoring" # rule1 action: "Allow" from: - namespaces: matchLabels: kubernetes.io/metadata.name: openshift-monitoring ports: - portNumber: protocol: TCP port: 7564 - namedPort: "scrape" - name: "allow-from-open-tenants" # rule2 action: "Allow" from: - namespaces: # open tenants matchLabels: tenant: open - name: "pass-from-restricted-tenants" # rule3 action: "Pass" from: - namespaces: # restricted tenants matchLabels: tenant: restricted - name: "default-deny" # rule4 action: "Deny" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: "allow-to-dns" # rule0 action: "Allow" to: - pods: namespaceSelector: matchlabels: kubernetes.io/metadata.name: openshift-dns podSelector: matchlabels: app: dns ports: - portNumber: protocol: UDP port: 5353 - name: "allow-to-kapi-server" # rule1 action: "Allow" to: - nodes: matchExpressions: - key: node-role.kubernetes.io/control-plane operator: Exists ports: - portNumber: protocol: TCP port: 6443 - name: "allow-to-splunk" # rule2 action: "Allow" to: - namespaces: matchlabels: tenant: splunk ports: - portNumber: protocol: TCP port: 8991 - portNumber: protocol: TCP port: 8992 - name: "allow-to-open-tenants-and-intranet-and-worker-nodes" # rule3 action: "Allow" to: - nodes: # worker-nodes matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists - networks: # intranet - 172.29.0.0/30 - 10.0.54.0/19 - 10.0.56.38/32 - 10.0.69.0/24 - namespaces: # open tenants matchLabels: tenant: open - name: "pass-to-restricted-tenants" # rule4 action: "Pass" to: - namespaces: # restricted tenants matchLabels: tenant: restricted - name: "default-deny" action: "Deny" to: - networks: - 0.0.0.0/0 Procedure List pods with node information by running the following command: USD oc get pods -n openshift-ovn-kubernetes -owide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-5c95487779-8k9fd 2/2 Running 0 34m 10.0.0.5 ci-ln-0tv5gg2-72292-6sjw5-master-0 <none> <none> ovnkube-control-plane-5c95487779-v2xn8 2/2 Running 0 34m 10.0.0.3 ci-ln-0tv5gg2-72292-6sjw5-master-1 <none> <none> ovnkube-node-524dt 8/8 Running 0 33m 10.0.0.4 ci-ln-0tv5gg2-72292-6sjw5-master-2 <none> <none> ovnkube-node-gbwr9 8/8 Running 0 24m 10.0.128.4 ci-ln-0tv5gg2-72292-6sjw5-worker-c-s9gqt <none> <none> ovnkube-node-h4fpx 8/8 Running 0 33m 10.0.0.5 ci-ln-0tv5gg2-72292-6sjw5-master-0 <none> <none> ovnkube-node-j4hzw 8/8 Running 0 24m 10.0.128.2 ci-ln-0tv5gg2-72292-6sjw5-worker-a-hzbh5 <none> <none> ovnkube-node-wdhgv 8/8 Running 0 33m 10.0.0.3 ci-ln-0tv5gg2-72292-6sjw5-master-1 <none> <none> ovnkube-node-wfncn 8/8 Running 0 24m 10.0.128.3 ci-ln-0tv5gg2-72292-6sjw5-worker-b-5bb7f <none> <none> Navigate into a pod to look at the northbound database by running the following command: USD oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-524dt Run the following command to look at the ACLs nbdb: USD ovn-nbctl find ACL 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,"k8s.ovn.org/name"=cluster-control}' Where, cluster-control 
Specifies the name of the AdminNetworkPolicy you are troubleshooting. AdminNetworkPolicy Specifies the type: AdminNetworkPolicy or BaselineAdminNetworkPolicy . Example 6.14. Example output for ACLs _uuid : 0d5e4722-b608-4bb1-b625-23c323cc9926 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index="2", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : "outport == @a14645450421485494999 && ((ip4.src == USDa13730899355151937870))" meter : acl-logging name : "ANP:cluster-control:Ingress:2" options : {} priority : 26598 severity : [] tier : 1 _uuid : b7be6472-df67-439c-8c9c-f55929f0a6e0 action : drop direction : from-lport external_ids : {direction=Egress, gress-index="5", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : "inport == @a14645450421485494999 && ((ip4.dst == USDa11452480169090787059))" meter : acl-logging name : "ANP:cluster-control:Egress:5" options : {apply-after-lb="true"} priority : 26595 severity : [] tier : 1 _uuid : 5a6e5bb4-36eb-4209-b8bc-c611983d4624 action : pass direction : to-lport external_ids : {direction=Ingress, gress-index="3", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : "outport == @a14645450421485494999 && ((ip4.src == USDa764182844364804195))" meter : acl-logging name : "ANP:cluster-control:Ingress:3" options : {} priority : 26597 severity : [] tier : 1 _uuid : 04f20275-c410-405c-a923-0e677f767889 action : pass direction : from-lport external_ids : {direction=Egress, gress-index="4", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : "inport == @a14645450421485494999 && ((ip4.dst == USDa5972452606168369118))" meter : acl-logging name : "ANP:cluster-control:Egress:4" options : {apply-after-lb="true"} priority : 26596 severity : [] tier : 1 _uuid : 4b5d836a-e0a3-4088-825e-f9f0ca58e538 action : drop direction : to-lport external_ids : {direction=Ingress, gress-index="4", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : "outport == @a14645450421485494999 && ((ip4.src == USDa13814616246365836720))" meter : acl-logging name : "ANP:cluster-control:Ingress:4" options : {} priority : 26596 severity : [] tier : 1 _uuid : 5d09957d-d2cc-4f5a-9ddd-b97d9d772023 action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index="2", 
"k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : "inport == @a14645450421485494999 && ((ip4.dst == USDa18396736153283155648)) && tcp && tcp.dst=={8991,8992}" meter : acl-logging name : "ANP:cluster-control:Egress:2" options : {apply-after-lb="true"} priority : 26598 severity : [] tier : 1 _uuid : 1a68a5ed-e7f9-47d0-b55c-89184d97e81a action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index="1", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : "inport == @a14645450421485494999 && ((ip4.dst == USDa10706246167277696183)) && tcp && tcp.dst==6443" meter : acl-logging name : "ANP:cluster-control:Egress:1" options : {apply-after-lb="true"} priority : 26599 severity : [] tier : 1 _uuid : aa1a224d-7960-4952-bdfb-35246bafbac8 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index="1", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : "outport == @a14645450421485494999 && ((ip4.src == USDa6786643370959569281)) && tcp && tcp.dst==7564" meter : acl-logging name : "ANP:cluster-control:Ingress:1" options : {} priority : 26599 severity : [] tier : 1 _uuid : 1a27d30e-3f96-4915-8ddd-ade7f22c117b action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index="3", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : "inport == @a14645450421485494999 && ((ip4.dst == USDa10622494091691694581))" meter : acl-logging name : "ANP:cluster-control:Egress:3" options : {apply-after-lb="true"} priority : 26597 severity : [] tier : 1 _uuid : b23a087f-08f8-4225-8c27-4a9a9ee0c407 action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index="0", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:udp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=udp} label : 0 log : false match : "inport == @a14645450421485494999 && ((ip4.dst == USDa13517855690389298082)) && udp && udp.dst==5353" meter : acl-logging name : "ANP:cluster-control:Egress:0" options : {apply-after-lb="true"} priority : 26600 severity : [] tier : 1 _uuid : d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index="0", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:None", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, 
port-policy-protocol=None} label : 0 log : false match : "outport == @a14645450421485494999 && ((ip4.src == USDa14545668191619617708))" meter : acl-logging name : "ANP:cluster-control:Ingress:0" options : {} priority : 26600 severity : [] tier : 1 Note The outputs for ingress and egress show you the logic of the policy in the ACL. For example, every time a packet matches the provided match the action is taken. Examine the specific ACL for the rule by running the following command: USD ovn-nbctl find ACL 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,direction=Ingress,"k8s.ovn.org/name"=cluster-control,gress-index="1"}' Where, cluster-control Specifies the name of your ANP. Ingress Specifies the direction of traffic either of type Ingress or Egress . 1 Specifies the rule you want to look at. For the example ANP named cluster-control at priority 34 , the following is an example output for Ingress rule 1: Example 6.15. Example output _uuid : aa1a224d-7960-4952-bdfb-35246bafbac8 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index="1", "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : "outport == @a14645450421485494999 && ((ip4.src == USDa6786643370959569281)) && tcp && tcp.dst==7564" meter : acl-logging name : "ANP:cluster-control:Ingress:1" options : {} priority : 26599 severity : [] tier : 1 Run the following command to look at address sets in the nbdb: USD ovn-nbctl find Address_Set 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,"k8s.ovn.org/name"=cluster-control}' Example 6.16. 
Example outputs for Address_Set _uuid : 56e89601-5552-4238-9fc3-8833f5494869 addresses : ["192.168.194.135", "192.168.194.152", "192.168.194.193", "192.168.194.254"] external_ids : {direction=Egress, gress-index="1", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a10706246167277696183 _uuid : 7df9330d-380b-4bdb-8acd-4eddeda2419c addresses : ["10.132.0.10", "10.132.0.11", "10.132.0.12", "10.132.0.13", "10.132.0.14", "10.132.0.15", "10.132.0.16", "10.132.0.17", "10.132.0.5", "10.132.0.7", "10.132.0.71", "10.132.0.75", "10.132.0.8", "10.132.0.81", "10.132.0.9", "10.132.2.10", "10.132.2.11", "10.132.2.12", "10.132.2.14", "10.132.2.15", "10.132.2.3", "10.132.2.4", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.132.3.64", "10.132.3.65", "10.132.3.72", "10.132.3.73", "10.132.3.76", "10.133.0.10", "10.133.0.11", "10.133.0.12", "10.133.0.13", "10.133.0.14", "10.133.0.15", "10.133.0.16", "10.133.0.17", "10.133.0.18", "10.133.0.19", "10.133.0.20", "10.133.0.21", "10.133.0.22", "10.133.0.23", "10.133.0.24", "10.133.0.25", "10.133.0.26", "10.133.0.27", "10.133.0.28", "10.133.0.29", "10.133.0.30", "10.133.0.31", "10.133.0.32", "10.133.0.33", "10.133.0.34", "10.133.0.35", "10.133.0.36", "10.133.0.37", "10.133.0.38", "10.133.0.39", "10.133.0.40", "10.133.0.41", "10.133.0.42", "10.133.0.44", "10.133.0.45", "10.133.0.46", "10.133.0.47", "10.133.0.48", "10.133.0.5", "10.133.0.6", "10.133.0.7", "10.133.0.8", "10.133.0.9", "10.134.0.10", "10.134.0.11", "10.134.0.12", "10.134.0.13", "10.134.0.14", "10.134.0.15", "10.134.0.16", "10.134.0.17", "10.134.0.18", "10.134.0.19", "10.134.0.20", "10.134.0.21", "10.134.0.22", "10.134.0.23", "10.134.0.24", "10.134.0.25", "10.134.0.26", "10.134.0.27", "10.134.0.28", "10.134.0.30", "10.134.0.31", "10.134.0.32", "10.134.0.33", "10.134.0.34", "10.134.0.35", "10.134.0.36", "10.134.0.37", "10.134.0.38", "10.134.0.4", "10.134.0.42", "10.134.0.9", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.13", "10.135.0.14", "10.135.0.15", "10.135.0.16", "10.135.0.17", "10.135.0.18", "10.135.0.19", "10.135.0.23", "10.135.0.24", "10.135.0.26", "10.135.0.27", "10.135.0.29", "10.135.0.3", "10.135.0.4", "10.135.0.40", "10.135.0.41", "10.135.0.42", "10.135.0.43", "10.135.0.44", "10.135.0.5", "10.135.0.6", "10.135.0.7", "10.135.0.8", "10.135.0.9"] external_ids : {direction=Ingress, gress-index="4", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a13814616246365836720 _uuid : 84d76f13-ad95-4c00-8329-a0b1d023c289 addresses : ["10.132.3.76", "10.135.0.44"] external_ids : {direction=Egress, gress-index="4", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a5972452606168369118 _uuid : 0c53e917-f7ee-4256-8f3a-9522c0481e52 addresses : ["10.132.0.10", "10.132.0.11", "10.132.0.12", "10.132.0.13", "10.132.0.14", "10.132.0.15", "10.132.0.16", "10.132.0.17", "10.132.0.5", "10.132.0.7", "10.132.0.71", "10.132.0.75", "10.132.0.8", "10.132.0.81", "10.132.0.9", 
"10.132.2.10", "10.132.2.11", "10.132.2.12", "10.132.2.14", "10.132.2.15", "10.132.2.3", "10.132.2.4", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.132.3.64", "10.132.3.65", "10.132.3.72", "10.132.3.73", "10.132.3.76", "10.133.0.10", "10.133.0.11", "10.133.0.12", "10.133.0.13", "10.133.0.14", "10.133.0.15", "10.133.0.16", "10.133.0.17", "10.133.0.18", "10.133.0.19", "10.133.0.20", "10.133.0.21", "10.133.0.22", "10.133.0.23", "10.133.0.24", "10.133.0.25", "10.133.0.26", "10.133.0.27", "10.133.0.28", "10.133.0.29", "10.133.0.30", "10.133.0.31", "10.133.0.32", "10.133.0.33", "10.133.0.34", "10.133.0.35", "10.133.0.36", "10.133.0.37", "10.133.0.38", "10.133.0.39", "10.133.0.40", "10.133.0.41", "10.133.0.42", "10.133.0.44", "10.133.0.45", "10.133.0.46", "10.133.0.47", "10.133.0.48", "10.133.0.5", "10.133.0.6", "10.133.0.7", "10.133.0.8", "10.133.0.9", "10.134.0.10", "10.134.0.11", "10.134.0.12", "10.134.0.13", "10.134.0.14", "10.134.0.15", "10.134.0.16", "10.134.0.17", "10.134.0.18", "10.134.0.19", "10.134.0.20", "10.134.0.21", "10.134.0.22", "10.134.0.23", "10.134.0.24", "10.134.0.25", "10.134.0.26", "10.134.0.27", "10.134.0.28", "10.134.0.30", "10.134.0.31", "10.134.0.32", "10.134.0.33", "10.134.0.34", "10.134.0.35", "10.134.0.36", "10.134.0.37", "10.134.0.38", "10.134.0.4", "10.134.0.42", "10.134.0.9", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.13", "10.135.0.14", "10.135.0.15", "10.135.0.16", "10.135.0.17", "10.135.0.18", "10.135.0.19", "10.135.0.23", "10.135.0.24", "10.135.0.26", "10.135.0.27", "10.135.0.29", "10.135.0.3", "10.135.0.4", "10.135.0.40", "10.135.0.41", "10.135.0.42", "10.135.0.43", "10.135.0.44", "10.135.0.5", "10.135.0.6", "10.135.0.7", "10.135.0.8", "10.135.0.9"] external_ids : {direction=Egress, gress-index="2", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a18396736153283155648 _uuid : 5228bf1b-dfd8-40ec-bfa8-95c5bf9aded9 addresses : [] external_ids : {direction=Ingress, gress-index="0", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a14545668191619617708 _uuid : 46530d69-70da-4558-8c63-884ec9dc4f25 addresses : ["10.132.2.10", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.133.0.47", "10.134.0.33", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.19", "10.135.0.24", "10.135.0.7", "10.135.0.8", "10.135.0.9"] external_ids : {direction=Ingress, gress-index="1", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a6786643370959569281 _uuid : 65fdcdea-0b9f-4318-9884-1b51d231ad1d addresses : ["10.132.3.72", "10.135.0.42"] external_ids : {direction=Ingress, gress-index="2", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a13730899355151937870 _uuid : 73eabdb0-36bf-4ca3-b66d-156ac710df4c 
addresses : ["10.0.32.0/19", "10.0.56.38/32", "10.0.69.0/24", "10.132.3.72", "10.135.0.42", "172.29.0.0/30", "192.168.194.103", "192.168.194.2"] external_ids : {direction=Egress, gress-index="3", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a10622494091691694581 _uuid : 50cdbef2-71b5-474b-914c-6fcd1d7712d3 addresses : ["10.132.0.10", "10.132.0.11", "10.132.0.12", "10.132.0.13", "10.132.0.14", "10.132.0.15", "10.132.0.16", "10.132.0.17", "10.132.0.5", "10.132.0.7", "10.132.0.71", "10.132.0.75", "10.132.0.8", "10.132.0.81", "10.132.0.9", "10.132.2.10", "10.132.2.11", "10.132.2.12", "10.132.2.14", "10.132.2.15", "10.132.2.3", "10.132.2.4", "10.132.2.5", "10.132.2.6", "10.132.2.7", "10.132.2.8", "10.132.2.9", "10.132.3.64", "10.132.3.65", "10.132.3.72", "10.132.3.73", "10.132.3.76", "10.133.0.10", "10.133.0.11", "10.133.0.12", "10.133.0.13", "10.133.0.14", "10.133.0.15", "10.133.0.16", "10.133.0.17", "10.133.0.18", "10.133.0.19", "10.133.0.20", "10.133.0.21", "10.133.0.22", "10.133.0.23", "10.133.0.24", "10.133.0.25", "10.133.0.26", "10.133.0.27", "10.133.0.28", "10.133.0.29", "10.133.0.30", "10.133.0.31", "10.133.0.32", "10.133.0.33", "10.133.0.34", "10.133.0.35", "10.133.0.36", "10.133.0.37", "10.133.0.38", "10.133.0.39", "10.133.0.40", "10.133.0.41", "10.133.0.42", "10.133.0.44", "10.133.0.45", "10.133.0.46", "10.133.0.47", "10.133.0.48", "10.133.0.5", "10.133.0.6", "10.133.0.7", "10.133.0.8", "10.133.0.9", "10.134.0.10", "10.134.0.11", "10.134.0.12", "10.134.0.13", "10.134.0.14", "10.134.0.15", "10.134.0.16", "10.134.0.17", "10.134.0.18", "10.134.0.19", "10.134.0.20", "10.134.0.21", "10.134.0.22", "10.134.0.23", "10.134.0.24", "10.134.0.25", "10.134.0.26", "10.134.0.27", "10.134.0.28", "10.134.0.30", "10.134.0.31", "10.134.0.32", "10.134.0.33", "10.134.0.34", "10.134.0.35", "10.134.0.36", "10.134.0.37", "10.134.0.38", "10.134.0.4", "10.134.0.42", "10.134.0.9", "10.135.0.10", "10.135.0.11", "10.135.0.12", "10.135.0.13", "10.135.0.14", "10.135.0.15", "10.135.0.16", "10.135.0.17", "10.135.0.18", "10.135.0.19", "10.135.0.23", "10.135.0.24", "10.135.0.26", "10.135.0.27", "10.135.0.29", "10.135.0.3", "10.135.0.4", "10.135.0.40", "10.135.0.41", "10.135.0.42", "10.135.0.43", "10.135.0.44", "10.135.0.5", "10.135.0.6", "10.135.0.7", "10.135.0.8", "10.135.0.9"] external_ids : {direction=Egress, gress-index="0", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a13517855690389298082 _uuid : 32a42f32-2d11-43dd-979d-a56d7ee6aa57 addresses : ["10.132.3.76", "10.135.0.44"] external_ids : {direction=Ingress, gress-index="3", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a764182844364804195 _uuid : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72 addresses : ["0.0.0.0/0"] external_ids : {direction=Egress, gress-index="5", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4", "k8s.ovn.org/name"=cluster-control, 
"k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a11452480169090787059 Examine the specific address set of the rule by running the following command: USD ovn-nbctl find Address_Set 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,direction=Egress,"k8s.ovn.org/name"=cluster-control,gress-index="5"}' Example 6.17. Example outputs for Address_Set _uuid : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72 addresses : ["0.0.0.0/0"] external_ids : {direction=Egress, gress-index="5", ip-family=v4, "k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a11452480169090787059 Run the following command to look at the port groups in the nbdb: USD ovn-nbctl find Port_Group 'external_ids{>=}{"k8s.ovn.org/owner-type"=AdminNetworkPolicy,"k8s.ovn.org/name"=cluster-control}' Example 6.18. Example outputs for Port_Group _uuid : f50acf71-7488-4b9a-b7b8-c8a024e99d21 acls : [04f20275-c410-405c-a923-0e677f767889, 0d5e4722-b608-4bb1-b625-23c323cc9926, 1a27d30e-3f96-4915-8ddd-ade7f22c117b, 1a68a5ed-e7f9-47d0-b55c-89184d97e81a, 4b5d836a-e0a3-4088-825e-f9f0ca58e538, 5a6e5bb4-36eb-4209-b8bc-c611983d4624, 5d09957d-d2cc-4f5a-9ddd-b97d9d772023, aa1a224d-7960-4952-bdfb-35246bafbac8, b23a087f-08f8-4225-8c27-4a9a9ee0c407, b7be6472-df67-439c-8c9c-f55929f0a6e0, d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd] external_ids : {"k8s.ovn.org/id"="default-network-controller:AdminNetworkPolicy:cluster-control", "k8s.ovn.org/name"=cluster-control, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=AdminNetworkPolicy} name : a14645450421485494999 ports : [5e75f289-8273-4f8a-8798-8c10f7318833, de7e1b71-6184-445d-93e7-b20acadf41ea] 6.2.5.2. Additional resources Tracing Openflow with ovnkube-trace Troubleshooting OVN-Kubernetes 6.2.6. Best practices for AdminNetworkPolicy This section provides best practices for the AdminNetworkPolicy and BaselineAdminNetworkPolicy resources. 6.2.6.1. Designing AdminNetworkPolicy When building AdminNetworkPolicy (ANP) resources, you might consider the following when creating your policies: You can create ANPs that have the same priority. If you do create two ANPs at the same priority, ensure that they do not apply overlapping rules to the same traffic. Only one rule per value is applied and there is no guarantee which rule is applied when there is more than one at the same priority value. Because there is no guarantee which policy takes precedence when overlapping ANPs are created, set ANPs at different priorities so that precedence is well defined. Administrators must create ANP that apply to user namespaces not system namespaces. Important Applying ANP and BaselineAdminNetworkPolicy (BANP) to system namespaces ( default , kube-system , any namespace whose name starts with openshift- , etc) is not supported, and this can leave your cluster unresponsive and in a non-functional state. Because 0-100 is the supported priority range, you might design your ANP to use a middle range like 30-70 . This leaves some placeholder for priorities before and after. Even in the middle range, you might want to leave gaps so that as your infrastructure requirements evolve over time, you are able to insert new ANPs when needed at the right priority level. 
If you pack your ANPs, then you might need to recreate all of them to accommodate any changes in the future. When using 0.0.0.0/0 or ::/0 to create a strong Deny policy, ensure that you have higher priority Allow or Pass rules for essential traffic. Use Allow as your action field when you want to ensure that a connection is allowed no matter what. An Allow rule in an ANP means that the connection will always be allowed, and NetworkPolicy will be ignored. Use Pass as your action field to delegate the policy decision of allowing or denying the connection to the NetworkPolicy layer. Ensure that the selectors across multiple rules do not overlap so that the same IPs do not appear in multiple policies, which can cause performance and scale limitations. Avoid using namedPorts in conjunction with PortNumber and PortRange because this creates 6 ACLs and causes inefficiencies in your cluster.
6.2.6.1.1. Considerations for using BaselineAdminNetworkPolicy You can define only a single BaselineAdminNetworkPolicy (BANP) resource within a cluster. The following are supported uses for BANP that administrators might consider in designing their BANP: You can set a default deny policy for cluster-local ingress in user namespaces. This BANP forces developers to add NetworkPolicy objects to allow the ingress traffic that they want to allow; if they do not add network policies for ingress, it is denied. You can set a default deny policy for cluster-local egress in user namespaces. This BANP forces developers to add NetworkPolicy objects to allow the egress traffic that they want to allow; if they do not add network policies, it is denied. You can set a default allow policy for egress to the in-cluster DNS service. Such a BANP ensures that the namespaced users do not have to set an allow egress NetworkPolicy to the in-cluster DNS service. You can set an egress policy that allows internal egress traffic to all pods but denies access to all external endpoints (that is, 0.0.0.0/0 and ::/0 ). This BANP allows user workloads to send traffic to other in-cluster endpoints, but not to external endpoints by default. NetworkPolicy can then be used by developers to allow their applications to send traffic to an explicit set of external services. Ensure you scope your BANP so that it only denies traffic to user namespaces and not to system namespaces. This is because the system namespaces do not have NetworkPolicy objects to override your BANP.
6.2.6.1.2. Differences to consider between AdminNetworkPolicy and NetworkPolicy Unlike NetworkPolicy objects, you must use explicit labels to reference your workloads within ANP and BANP rather than using the empty ( {} ) catch-all selector, to avoid accidental traffic selection. Important An empty namespace selector applied to an infrastructure namespace can leave your cluster unresponsive and in a non-functional state. In API semantics for ANP, you have to explicitly define allow or deny rules when you create the policy, unlike NetworkPolicy objects, which have an implicit deny. Unlike NetworkPolicy objects, ingress rules in AdminNetworkPolicy objects are limited to in-cluster pods and namespaces, so you cannot, and do not need to, set rules for ingress from the host network.
6.3. Network policy 6.3.1. About network policy As a developer, you can define network policies that restrict traffic to pods in your cluster. 6.3.1.1.
About network policy In a cluster using a network plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.16, OpenShift SDN supports using network policy in its default network isolation mode. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: Important To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy. 
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. Thus allowing the pods with the label role=frontend , to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 6.3.1.1.1. Using the allow-from-router network policy Use the following NetworkPolicy to allow external traffic regardless of the router configuration: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" 1 podSelector: {} policyTypes: - Ingress 1 policy-group.network.openshift.io/ingress:"" label supports both OpenShift-SDN and OVN-Kubernetes. 6.3.1.1.2. Using the allow-from-hostnetwork network policy Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: "" podSelector: {} policyTypes: - Ingress 6.3.1.2. Optimizations for network policy with OpenShift SDN Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. It is inefficient to apply NetworkPolicy objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a podSelector . For example, if the spec podSelector and the ingress podSelector within a NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. When designing your network policy, refer to the following guidelines: Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. 
NetworkPolicy objects that select a whole namespace, by using the namespaceSelector or an empty podSelector , generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. 6.3.1.3. Optimizations for network policy with OVN-Kubernetes network plugin When designing your network policy, refer to the following guidelines: For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules, than multiple network policies with subsets of ingress or egress rules. Every ingress or egress rule based on the podSelector or namespaceSelector spec generates the number of OVS flows proportional to number of pods selected by network policy + number of pods selected by ingress or egress rule . Therefore, it is preferable to use the podSelector or namespaceSelector spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod. For example, the following policy contains two rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend The following policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]} The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies have different rules: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend The following network policy expresses those same two rules as one: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend You can apply this optimization when only multiple selectors are expressed as one. In cases where selectors are based on different labels, it may not be possible to apply this optimization. In those cases, consider applying some new labels for network policy optimization specifically. 6.3.1.4. steps Creating a network policy Optional: Defining a default network policy for projects 6.3.1.5. Additional resources Projects and namespaces Configuring multitenant isolation with network policy NetworkPolicy API 6.3.2. Creating a network policy As a user with the admin role, you can create a network policy for a namespace. 6.3.2.1. 
Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.3.2.2. Creating a network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 6.3.2.3. Creating a default deny all network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. 
Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3 1 namespace: default deploys this policy to the default namespace. 2 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 3 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output networkpolicy.networking.k8s.io/deny-by-default created 6.3.2.4. Creating a network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output networkpolicy.networking.k8s.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 6.3.2.5. Creating a network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. 
You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output networkpolicy.networking.k8s.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 6.3.2.6. Creating a network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . 
Save the YAML in the web-allow-prod.yaml file: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output networkpolicy.networking.k8s.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 6.3.2.7. Additional resources Accessing the web console Logging for egress firewall and network policy rules 6.3.3. Viewing a network policy As a user with the admin role, you can view a network policy for a namespace. 6.3.3.1. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.3.3.2. Viewing network policies using the CLI You can examine the network policies in a namespace. 
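If you want a quick inventory across every namespace rather than a single project, a custom-columns query is one option; a minimal sketch, where the chosen column set is illustrative:

$ oc get networkpolicy --all-namespaces \
    -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,POD-SELECTOR:.spec.podSelector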
Note If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command: USD oc get networkpolicy Optional: To examine a specific network policy, enter the following command: USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example: USD oc describe networkpolicy allow-same-namespace Output for oc describe command Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 6.3.4. Editing a network policy As a user with the admin role, you can edit an existing network policy for a namespace. 6.3.4.1. Editing a network policy You can edit a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can edit a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure Optional: To list the network policy objects in a namespace, enter the following command: USD oc get networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the network policy object. If you saved the network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the network policy object directly, enter the following command: USD oc edit networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the network policy object is updated. USD oc describe networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. 
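If you prefer a non-interactive change, patching the object also works; the following is a minimal sketch that appends a port to the allow-27107 example policy shown earlier, where the namespace placeholder and the added port value are illustrative:

$ oc patch networkpolicy allow-27107 -n <namespace> --type=json \
    -p='[{"op": "add", "path": "/spec/ingress/0/ports/-", "value": {"protocol": "TCP", "port": 27018}}]'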
Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 6.3.4.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.3.4.3. Additional resources Creating a network policy 6.3.5. Deleting a network policy As a user with the admin role, you can delete a network policy from a namespace. 6.3.5.1. Deleting a network policy using the CLI You can delete a network policy in a namespace. Note If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace where the network policy exists. Procedure To delete a network policy object, enter the following command: USD oc delete networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 6.3.6. Defining a default network policy for projects As a cluster administrator, you can modify the new project template to automatically include network policies when you create a new project. If you do not yet have a customized template for new projects, you must first create one. 6.3.6.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. 
Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 6.3.6.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 6.3.7. Configuring multitenant isolation with network policy As a cluster administrator, you can configure your network policies to provide multitenant network isolation. 
Note If you are using the OpenShift SDN network plugin, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set. 6.3.7.1. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF A policy named allow-from-kube-apiserver-operator : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF For more details, see New kube-apiserver-operator webhook controller validating health of webhook . Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 6.3.7.2. steps Defining a default network policy for a project 6.3.7.3. Additional resources OpenShift SDN network isolation modes 6.4. 
Audit logging for network security The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) access control lists (ACLs) to manage AdminNetworkPolicy , BaselineAdminNetworkPolicy , NetworkPolicy , and EgressFirewall objects. Audit logging exposes allow and deny ACL events for NetworkPolicy , EgressFirewall and BaselineAdminNetworkPolicy custom resources (CR). Logging also exposes allow , deny , and pass ACL events for AdminNetworkPolicy (ANP) CR. Note Audit logging is available for only the OVN-Kubernetes network plugin . 6.4.1. Audit configuration The configuration for audit logging is specified as part of the OVN-Kubernetes cluster network provider configuration. The following YAML illustrates the default values for the audit logging: Audit logging configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 The following table describes the configuration fields for audit logging. Table 6.1. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . 6.4.2. Audit logging You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket. Regardless of any additional configuration, an audit log is always saved to /var/log/ovn/acl-audit-log.log on each OVN-Kubernetes pod in the cluster. You can enable audit logging for each namespace by annotating each namespace configuration with a k8s.ovn.org/acl-logging section. In the k8s.ovn.org/acl-logging section, you must specify allow , deny , or both values to enable audit logging for a namespace. Note A network policy does not support setting the Pass action set as a rule. The ACL-logging implementation logs access control list (ACL) events for a network. You can view these logs to analyze any potential security issues. Example namespace annotation kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { "deny": "info", "allow": "info" } To view the default ACL logging configuration values, see the policyAuditConfig object in the cluster-network-03-config.yml file. If required, you can change the ACL logging configuration values for log file parameters in this file. The logging message format is compatible with syslog as defined by RFC5424. The syslog facility is configurable and defaults to local0 . 
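The namespace annotation shown above can also be applied from the CLI instead of editing the namespace manifest; a minimal sketch, assuming a namespace named example1 (add --overwrite if the annotation already exists):

$ oc annotate namespace example1 \
    k8s.ovn.org/acl-logging='{ "deny": "info", "allow": "info" }'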
The following example shows key parameters and their values output in a log message:
Example logging message that outputs parameters and their values <timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name="<acl_name>", verdict="<verdict>", severity="<severity>", direction="<direction>": <flow>
Where: <timestamp> states the time and date for the creation of a log message. <message_serial> lists the serial number for a log message. acl_log(ovn_pinctrl0) is a literal string that prints the location of the log message in the OVN-Kubernetes plugin. <severity> sets the severity level for a log message. If you enable audit logging that supports allow and deny tasks, then two severity levels show in the log message output. <name> states the name of the ACL-logging implementation in the OVN Network Bridging Database ( nbdb ) that was created by the network policy. <verdict> can be either allow or drop . <direction> can be either to-lport or from-lport to indicate that the policy was applied to traffic going to or away from a pod. <flow> shows packet information in a format equivalent to the OpenFlow protocol. This parameter comprises Open vSwitch (OVS) fields. The following example shows OVS fields that the flow parameter uses to extract packet information from system memory:
Example of OVS fields used by the flow parameter to extract packet information <proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>
Where: <proto> states the protocol. Valid values are tcp and udp . vlan_tci=0x0000 states the VLAN header as 0 because a VLAN ID is not set for internal pod network traffic. <src_mac> specifies the source Media Access Control (MAC) address. <source_mac> specifies the destination MAC address. <source_ip> lists the source IP address. <target_ip> lists the target IP address. <tos_dscp> states Differentiated Services Code Point (DSCP) values to classify and prioritize certain network traffic over other traffic. <tos_ecn> states Explicit Congestion Notification (ECN) values that indicate any congested traffic in your network. <ip_ttl> states the Time To Live (TTL) information for a packet. <fragment> specifies what type of IP fragments or IP non-fragments to match. <tcp_src_port> shows the source port for the TCP and UDP protocols. <tcp_dst_port> lists the destination port for the TCP and UDP protocols. <tcp_flags> supports numerous flags such as SYN , ACK , and PSH . If you need to set multiple values, then each value is separated by a vertical bar ( | ). The UDP protocol does not support this parameter.
Note For more information about the field descriptions, go to the OVS manual page for ovs-fields .
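Because every entry in /var/log/ovn/acl-audit-log.log is a single line of plain text in this format, standard shell tools can summarize it. The following minimal sketch counts entries per policy name and verdict; it assumes you have already copied a log file locally (for example, with oc cp from an OVN-Kubernetes pod), which is not shown here:

$ awk -F'name="' '/acl_log/ {
    split($2, a, "\""); name = a[1]            # the ACL name is the text before the closing quote
    verdict = "unknown"
    if ($0 ~ /verdict=allow/) { verdict = "allow" }
    if ($0 ~ /verdict=drop/)  { verdict = "drop" }
    if ($0 ~ /verdict=pass/)  { verdict = "pass" }
    count[name "|" verdict]++                  # tally per policy name and verdict
  }
  END { for (k in count) print count[k], k }' acl-audit-log.log | sort -rn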
Example ACL deny log entry for a network policy 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn The following table describes namespace annotation values: Table 6.2. Audit logging namespace annotation for k8s.ovn.org/acl-logging Field Description deny Blocks namespace access to any traffic that matches an ACL rule with the deny action. The field supports alert , warning , notice , info , or debug values. allow Permits namespace access to any traffic that matches an ACL rule with the allow action. The field supports alert , warning , notice , info , or debug values. pass A pass action applies to an admin network policy's ACL rule. A pass action allows either the network policy in the namespace or the baseline admin network policy rule to evaluate all incoming and outgoing traffic. A network policy does not support a pass action. Additional resources Understanding network policy APIs 6.4.3. AdminNetworkPolicy audit logging Audit logging is enabled per AdminNetworkPolicy CR by annotating an ANP policy with the k8s.ovn.org/acl-logging key such as in the following example: Example 6.19. Example of annotation for AdminNetworkPolicy CR apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert", "pass" : "warning" }' name: anp-tenant-log spec: priority: 5 subject: namespaces: matchLabels: tenant: backend-storage # Selects all pods owned by storage tenant. ingress: - name: "allow-all-ingress-product-development-and-customer" # Product development and customer tenant ingress to backend storage. action: "Allow" from: - pods: namespaceSelector: matchExpressions: - key: tenant operator: In values: - product-development - customer podSelector: {} - name: "pass-all-ingress-product-security" action: "Pass" from: - namespaces: matchLabels: tenant: product-security - name: "deny-all-ingress" # Ingress to backend from all other pods in the cluster. action: "Deny" from: - namespaces: {} egress: - name: "allow-all-egress-product-development" action: "Allow" to: - pods: namespaceSelector: matchLabels: tenant: product-development podSelector: {} - name: "pass-egress-product-security" action: "Pass" to: - namespaces: matchLabels: tenant: product-security - name: "deny-all-egress" # Egress from backend denied to all other pods. action: "Deny" to: - namespaces: {} Logs are generated whenever a specific OVN ACL is hit and meets the action criteria set in your logging annotation. 
For example, when any of the namespaces with the label tenant: product-development accesses the namespaces with the label tenant: backend-storage , a log is generated. Note ACL logging is limited to 60 characters. If your ANP name field is long, the rest of the log will be truncated. The following is a direction index for the example log entries that follow: Direction Rule Ingress Rule0 Allow from tenant product-development and customer to tenant backend-storage ; Ingress0: Allow Rule1 Pass from tenant product-security to tenant backend-storage ; Ingress1: Pass Rule2 Deny ingress from all pods; Ingress2: Deny Egress Rule0 Allow to product-development ; Egress0: Allow Rule1 Pass to product-security ; Egress1: Pass Rule2 Deny egress to all other pods; Egress2: Deny
Example 6.20. Example ACL log entry for Allow action of the AdminNetworkPolicy named anp-tenant-log with Ingress:0 and Egress:0
2024-06-10T16:27:45.194Z|00052|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1a,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.26,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=57814,tp_dst=8080,tcp_flags=syn
2024-06-10T16:28:23.130Z|00059|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:18,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.24,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=38620,tp_dst=8080,tcp_flags=ack
2024-06-10T16:28:38.293Z|00069|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:0", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1a,nw_src=10.128.2.25,nw_dst=10.128.2.26,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=47566,tp_dst=8080,tcp_flags=fin|ack
Example 6.21. Example ACL log entry for Pass action of the AdminNetworkPolicy named anp-tenant-log with Ingress:1 and Egress:1
2024-06-10T16:33:12.019Z|00075|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:1", verdict=pass, severity=warning, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1b,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.27,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=37394,tp_dst=8080,tcp_flags=ack
2024-06-10T16:35:04.209Z|00081|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:1", verdict=pass, severity=warning, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1b,nw_src=10.128.2.25,nw_dst=10.128.2.27,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=34018,tp_dst=8080,tcp_flags=ack
Example 6.22.
Example ACL log entry for Deny action of the AdminNetworkPolicy named anp-tenant-log with Egress:2 and Ingress2 2024-06-10T16:43:05.287Z|00087|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Egress:2", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:18,nw_src=10.128.2.25,nw_dst=10.128.2.24,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=51598,tp_dst=8080,tcp_flags=syn 2024-06-10T16:44:43.591Z|00090|acl_log(ovn_pinctrl0)|INFO|name="ANP:anp-tenant-log:Ingress:2", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1c,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.28,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=33774,tp_dst=8080,tcp_flags=syn The following table describes ANP annotation: Table 6.3. Audit logging AdminNetworkPolicy annotation Annotation Value k8s.ovn.org/acl-logging You must specify at least one of Allow , Deny , or Pass to enable audit logging for a namespace. Deny Optional: Specify alert , warning , notice , info , or debug . Allow Optional: Specify alert , warning , notice , info , or debug . Pass Optional: Specify alert , warning , notice , info , or debug . 6.4.4. BaselineAdminNetworkPolicy audit logging Audit logging is enabled in the BaselineAdminNetworkPolicy CR by annotating an BANP policy with the k8s.ovn.org/acl-logging key such as in the following example: Example 6.23. Example of annotation for BaselineAdminNetworkPolicy CR apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert"}' name: default spec: subject: namespaces: matchLabels: tenant: workloads # Selects all workload pods in the cluster. ingress: - name: "default-allow-dns" # This rule allows ingress from dns tenant to all workloads. action: "Allow" from: - namespaces: matchLabels: tenant: dns - name: "default-deny-dns" # This rule denies all ingress from all pods to workloads. action: "Deny" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: "default-deny-dns" # This rule denies all egress from workloads. It will be applied when no ANP or network policy matches. action: "Deny" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. In the example, an event in which any of the namespaces with the label tenant: dns accesses the namespaces with the label tenant: workloads , a log is generated. The following is a direction index for the examples log entries that follow: Direction Rule Ingress Rule0 Allow from tenant dns to tenant workloads ; Ingress0: Allow Rule1 Deny to tenant workloads from all pods; Ingress1: Deny Egress Rule0 Deny to all pods; Egress0: Deny Example 6.24. 
Example ACL allow log entry for Allow action of default BANP with Ingress:0 2024-06-10T18:11:58.263Z|00022|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=syn 2024-06-10T18:11:58.264Z|00023|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=psh|ack 2024-06-10T18:11:58.264Z|00024|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00025|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00026|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=fin|ack 2024-06-10T18:11:58.264Z|00027|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:0", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack Example 6.25. 
Example ACL deny log entry for Deny action of default BANP with Egress:0 and Ingress:1 2024-06-10T18:09:57.774Z|00016|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:09:58.809Z|00017|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:00.857Z|00018|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Egress:0", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:25.414Z|00019|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:26.457Z|00020|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:28.505Z|00021|acl_log(ovn_pinctrl0)|INFO|name="BANP:default:Ingress:1", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn The following table describes the BANP annotation: Table 6.4. Audit logging BaselineAdminNetworkPolicy annotation Annotation Value k8s.ovn.org/acl-logging You must specify at least one of Allow or Deny to enable audit logging for a namespace. Deny Optional: Specify alert , warning , notice , info , or debug . Allow Optional: Specify alert , warning , notice , info , or debug . 6.4.5. Configuring egress firewall and network policy auditing for a cluster As a cluster administrator, you can customize audit logging for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges.
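Optional: Before you change the configuration, you can review the audit logging settings that are currently in effect. The following command is a sketch rather than a required step; it relies only on the policyAuditConfig fields shown in the procedure that follows, and it prints the existing policyAuditConfig stanza, or nothing if the stanza has never been set: USD oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.policyAuditConfig}'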
Procedure To customize the audit logging configuration, enter the following command: USD oc edit network.operator.openshift.io/cluster Tip You can alternatively customize and apply the following YAML to configure audit logging: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: "null" maxFileSize: 50 rateLimit: 20 syslogFacility: local0 Verification To create a namespace with network policies complete the following steps: Create a namespace for verification: USD cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ "deny": "alert", "allow": "alert" }' EOF Example output namespace/verify-audit-logging created Create network policies for the namespace: USD cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF Example output networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created Create a pod for source traffic in the default namespace: USD cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF Create two pods in the verify-audit-logging namespace: USD for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: ["/bin/sh", "-c"] args: ["sleep inf"] EOF done Example output pod/client created pod/server created To generate traffic and produce network policy audit log entries, complete the following steps: Obtain the IP address for pod named server in the verify-audit-logging namespace: USD POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}') Ping the IP address from the command from the pod named client in the default namespace and confirm that all packets are dropped: USD oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms Ping the IP address saved in the POD_IP shell environment variable from the pod named client in the verify-audit-logging namespace and confirm that all packets are allowed: USD oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP Example output PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 
64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms Display the latest entries in the network policy audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:Ingress", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 6.4.6. Enabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can enable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To enable audit logging for a namespace, enter the following command: USD oc annotate namespace <namespace> \ k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }' where: <namespace> Specifies the name of the namespace. 
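For example, to enable audit logging for the verify-audit-logging namespace that is used in the verification steps earlier in this section, you might enter a command such as the following. The namespace name and severity levels are only illustrations taken from this section, and the --overwrite flag replaces any annotation value that is already set on the namespace: USD oc annotate --overwrite namespace verify-audit-logging k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "notice" }'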
Tip You can alternatively apply the following YAML to enable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { "deny": "alert", "allow": "notice" } Example output namespace/verify-audit-logging annotated Verification Display the latest entries in the audit log: USD for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done Example output 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Egress:0", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name="NP:verify-audit-logging:allow-from-same-namespace:Ingress:0", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 6.4.7. Disabling egress firewall and network policy audit logging for a namespace As a cluster administrator, you can disable audit logging for a namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable audit logging for a namespace, enter the following command: USD oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging- where: <namespace> Specifies the name of the namespace. Tip You can alternatively apply the following YAML to disable audit logging: kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null Example output namespace/verify-audit-logging annotated 6.4.8. Additional resources About network policy Configuring an egress firewall for a project 6.5. Egress Firewall 6.5.1. Viewing an egress firewall for a project As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. For more information, see OpenShift SDN CNI removal . 
6.5.1.1. Viewing an EgressFirewall object You can view an EgressFirewall object in your cluster. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressFirewall objects defined in your cluster, enter the following command: USD oc get egressfirewall --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressfirewall <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 6.5.2. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 6.5.2.1. Editing an EgressFirewall object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressfirewall Optional: If you did not save a copy of the EgressFirewall object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressFirewall object. Replace <filename> with the name of the file containing the updated EgressFirewall object. USD oc replace -f <filename>.yaml 6.5.3. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 6.5.3.1. Removing an EgressFirewall object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressFirewall object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressfirewall Enter the following command to delete the EgressFirewall object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressfirewall <name> 6.5.4. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 6.5.4.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. 
A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address A port number A protocol that is one of the following protocols: TCP, UDP, and SCTP Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 6.5.4.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressFirewall object. A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project. If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 6.5.4.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 6.5.4.1.3. 
How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires. The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes. Note Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. 6.5.4.1.3.1. Improved DNS resolution and resolving wildcard domain names There might be situations where the IP addresses associated with a DNS record change frequently, or you might want to specify wildcard domain names in your egress firewall policy rules. In this situation, the OVN-Kubernetes cluster manager creates a DNSNameResolver custom resource object for each unique DNS name used in your egress firewall policy rules. This custom resource stores the following information: Important Improved DNS resolution for egress firewall rules is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example DNSNameResolver CR definition apiVersion: networking.openshift.io/v1alpha1 kind: DNSNameResolver spec: name: www.example.com. 1 status: resolvedNames: - dnsName: www.example.com. 2 resolvedAddress: - ip: "1.2.3.4" 3 ttlSeconds: 60 4 lastLookupTime: "2023-08-08T15:07:04Z" 5 1 The DNS name. This can be either a standard DNS name or a wildcard DNS name. For a wildcard DNS name, the DNS name resolution information contains all of the DNS names that match the wildcard DNS name. 2 The resolved DNS name matching the spec.name field. If the spec.name field contains a wildcard DNS name, then multiple dnsName entries are created that contain the standard DNS names that match the wildcard DNS name when resolved. If the wildcard DNS name can also be successfully resolved, then this field also stores the wildcard DNS name. 3 The current IP addresses associated with the DNS name. 
4 The last time-to-live (TTL) duration. 5 The last lookup time. If during DNS resolution the DNS name in the query matches any name defined in a DNSNameResolver CR, then the information is updated accordingly in the CR status field. For unsuccessful DNS wildcard name lookups, the request is retried after a default TTL of 30 minutes. The OVN-Kubernetes cluster manager watches for updates to an EgressFirewall custom resource object, and creates, modifies, or deletes DNSNameResolver CRs associated with those egress firewall policies when that update occurs. Warning Do not modify DNSNameResolver custom resources directly. This can lead to unwanted behavior of your egress firewall. 6.5.4.2. EgressFirewall custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressFirewall CR object: EgressFirewall object apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2 ... 1 The name for the object must be default . 2 A collection of one or more egress network policy rules as described in the following section. 6.5.4.2.1. EgressFirewall rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format, a domain name, or use the nodeSelector to allow or deny egress traffic. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6 ... 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A DNS domain name. 5 Labels are key/value pairs that the user defines. Labels are attached to objects, such as pods. The nodeSelector allows for one or more node labels to be selected and attached to pods. 6 Optional: A stanza describing a collection of network ports and protocols for the rule. Ports stanza ports: - port: <port> 1 protocol: <protocol> 2 1 A network port, such as 80 or 443 . If you specify a value for this field, you must also specify a value for protocol . 2 A network protocol. The value must be either TCP , UDP , or SCTP . 6.5.4.2.2. Example EgressFirewall CR objects The following example defines several egress firewall policy rules: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. The following example defines a policy rule that denies traffic to the host at the 172.16.1.1/32 IP address, if the traffic is using either the TCP protocol and destination port 80 or any protocol and destination port 443 . apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443 6.5.4.2.3. Example nodeSelector for EgressFirewall As a cluster administrator, you can allow or deny egress traffic to nodes in your cluster by specifying a label using nodeSelector . Labels can be applied to one or more nodes. 
The following is an example with the region=east label: apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow Tip Instead of adding manual rules per node IP address, use node selectors to create a label that allows pods behind an egress firewall to access host network pods. 6.5.4.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressFirewall object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OVN-Kubernetes network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressFirewall object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressfirewall.k8s.ovn.org/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 6.6. Configuring IPsec encryption By enabling IPsec, you can encrypt both internal pod-to-pod cluster traffic between nodes and external traffic between pods and IPsec endpoints external to your cluster. All pod-to-pod network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode . IPsec is disabled by default. You can enable IPsec either during or after installing the cluster. For information about cluster installation, see OpenShift Container Platform installation overview . The following support limitations exist for IPsec on a OpenShift Container Platform cluster: You must disable IPsec before updating to OpenShift Container Platform 4.15. There is a known issue that can cause interruptions in pod-to-pod communication if you update without disabling IPsec. ( OCPBUGS-43323 ) On IBM Cloud(R), IPsec supports only NAT-T. Encapsulating Security Payload (ESP) is not supported on this platform. If your cluster uses hosted control planes for Red Hat OpenShift Container Platform, IPsec is not supported for IPsec encryption of either pod-to-pod or traffic to external hosts. Using ESP hardware offloading on any network interface is not supported if one or more of those interfaces is attached to Open vSwitch (OVS). Enabling IPsec for your cluster triggers the use of IPsec with interfaces attached to OVS. By default, OpenShift Container Platform disables ESP hardware offloading on any interfaces attached to OVS. If you enabled IPsec for network interfaces that are not attached to OVS, a cluster administrator must manually disable ESP hardware offloading on each interface that is not attached to OVS. The following list outlines key tasks in the IPsec documentation: Enable and disable IPsec after cluster installation. Configure IPsec encryption for traffic between the cluster and external hosts. Verify that IPsec encrypts traffic between pods on different nodes. 6.6.1. 
Modes of operation When using IPsec on your OpenShift Container Platform cluster, you can choose from the following operating modes: Table 6.5. IPsec modes of operation Mode Description Default Disabled No traffic is encrypted. This is the cluster default. Yes Full Pod-to-pod traffic is encrypted as described in "Types of network traffic flows encrypted by pod-to-pod IPsec". Traffic to external nodes may be encrypted after you complete the required configuration steps for IPsec. No External Traffic to external nodes may be encrypted after you complete the required configuration steps for IPsec. No 6.6.2. Prerequisites For IPsec support for encrypting traffic to external hosts, ensure that the following prerequisites are met: The OVN-Kubernetes network plugin must be configured in local gateway mode, where ovnKubernetesConfig.gatewayConfig.routingViaHost=true . The NMState Operator is installed. This Operator is required for specifying the IPsec configuration. For more information, see Kubernetes NMState Operator . Note The NMState Operator is supported on Google Cloud Platform (GCP) only for configuring IPsec. The Butane tool ( butane ) is installed. To install Butane, see Installing Butane . These prerequisites are required to add certificates into the host NSS database and to configure IPsec to communicate with external hosts. 6.6.3. Network connectivity requirements when IPsec is enabled You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. Table 6.6. Ports used for all-machine to all-machine communications Protocol Port Description UDP 500 IPsec IKE packets 4500 IPsec NAT-T packets ESP N/A IPsec Encapsulating Security Payload (ESP) 6.6.4. IPsec encryption for pod-to-pod traffic For IPsec encryption of pod-to-pod traffic, the following sections describe which specific pod-to-pod traffic is encrypted, what kind of encryption protocol is used, and how X.509 certificates are handled. These sections do not apply to IPsec encryption between the cluster and external hosts, which you must configure manually for your specific external network infrastructure. 6.6.4.1. Types of network traffic flows encrypted by pod-to-pod IPsec With IPsec enabled, only the following network traffic flows between pods are encrypted: Traffic between pods on different nodes on the cluster network Traffic from a pod on the host network to a pod on the cluster network The following traffic flows are not encrypted: Traffic between pods on the same node on the cluster network Traffic between pods on the host network Traffic from a pod on the cluster network to a pod on the host network The encrypted and unencrypted flows are illustrated in the following diagram: 6.6.4.2. Encryption protocol and IPsec mode The encrypt cipher used is AES-GCM-16-256 . The integrity check value (ICV) is 16 bytes. The key length is 256 bits. The IPsec mode used is Transport mode , a mode that encrypts end-to-end communication by adding an Encapsulated Security Payload (ESP) header to the IP header of the original packet and encrypts the packet data. OpenShift Container Platform does not currently use or support IPsec Tunnel mode for pod-to-pod communication. 6.6.4.3. Security certificate generation and rotation The Cluster Network Operator (CNO) generates a self-signed X.509 certificate authority (CA) that is used by IPsec for encryption. 
Certificate signing requests (CSRs) from each node are automatically fulfilled by the CNO. The CA is valid for 10 years. The individual node certificates are valid for 5 years and are automatically rotated after 4 1/2 years elapse. 6.6.5. IPsec encryption for external traffic OpenShift Container Platform supports IPsec encryption for traffic to external hosts with TLS certificates that you must supply. 6.6.5.1. Supported platforms This feature is supported on the following platforms: Bare metal Google Cloud Platform (GCP) Red Hat OpenStack Platform (RHOSP) VMware vSphere Important If you have Red Hat Enterprise Linux (RHEL) worker nodes, these do not support IPsec encryption for external traffic. If your cluster uses hosted control planes for Red Hat OpenShift Container Platform, configuring IPsec for encrypting traffic to external hosts is not supported. 6.6.5.2. Limitations Ensure that the following prohibitions are observed: IPv6 configuration is not currently supported by the NMState Operator when configuring IPsec for external traffic. Certificate common names (CN) in the provided certificate bundle must not begin with the ovs_ prefix, because this naming can conflict with pod-to-pod IPsec CN names in the Network Security Services (NSS) database of each node. 6.6.6. Enabling IPsec encryption As a cluster administrator, you can enable pod-to-pod IPsec encryption and IPsec encryption between the cluster and external IPsec endpoints. You can configure IPsec in either of the following modes: Full : Encryption for pod-to-pod and external traffic External : Encryption for external traffic If you need to configure encryption for external traffic in addition to pod-to-pod traffic, you must also complete the "Configuring IPsec encryption for external traffic" procedure. Prerequisites Install the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header. Procedure To enable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "ipsecConfig":{ "mode":<mode> }}}}}' where: mode Specify External to encrypt only traffic to external hosts or specify Full to encrypt pod to pod traffic and optionally traffic to external hosts. By default, IPsec is disabled. Optional: If you need to encrypt traffic to external hosts, complete the "Configuring IPsec encryption for external traffic" procedure. Verification To find the names of the OVN-Kubernetes data plane pods, enter the following command: USD oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node Example output ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ovnkube-node-fr4ld 8/8 Running 0 26m ovnkube-node-wgs4l 8/8 Running 0 33m ovnkube-node-zfvcl 8/8 Running 0 34m Verify that IPsec is enabled on your cluster by running the following command: Note As a cluster administrator, you can verify that IPsec is enabled between pods on your cluster when IPsec is configured in Full mode. This step does not verify whether IPsec is working between your cluster and external hosts. USD oc -n openshift-ovn-kubernetes rsh ovnkube-node-<XXXXX> ovn-nbctl --no-leader-only get nb_global . ipsec where: <XXXXX> Specifies the random sequence of letters for a pod from the step. Example output true 6.6.7. 
Configuring IPsec encryption for external traffic As a cluster administrator, to encrypt external traffic with IPsec you must configure IPsec for your network infrastructure, including providing PKCS#12 certificates. Because this procedure uses Butane to create machine configs, you must have the butane command installed. Note After you apply the machine config, the Machine Config Operator reboots affected nodes in your cluster to rollout the new machine config. Prerequisites Install the OpenShift CLI ( oc ). You have installed the butane utility on your local computer. You have installed the NMState Operator on the cluster. You are logged in to the cluster as a user with cluster-admin privileges. You have an existing PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. You enabled IPsec in either Full or External mode on your cluster. The OVN-Kubernetes network plugin must be configured in local gateway mode, where ovnKubernetesConfig.gatewayConfig.routingViaHost=true . Procedure Create an IPsec configuration with an NMState Operator node network configuration policy. For more information, see Libreswan as an IPsec VPN implementation . To identify the IP address of the cluster node that is the IPsec endpoint, enter the following command: Create a file named ipsec-config.yaml that contains a node network configuration policy for the NMState Operator, such as in the following examples. For an overview about NodeNetworkConfigurationPolicy objects, see The Kubernetes NMState project . Example NMState IPsec transport configuration apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: "<hostname>" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftrsasigkey: '%cert' leftcert: left_server leftmodecfgclient: false right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: transport 1 Specifies the host name to apply the policy to. This host serves as the left side host in the IPsec configuration. 2 Specifies the name of the interface to create on the host. 3 Specifies the host name of the cluster node that terminates the IPsec tunnel on the cluster side. The name should match SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 4 Specifies the external host name, such as host.example.com . The name should match the SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 5 Specifies the IP address of the external host, such as 10.1.2.3/32 . Example NMState IPsec tunnel configuration apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: "<hostname>" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftmodecfgclient: false leftrsasigkey: '%cert' leftcert: left_server right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: tunnel 1 Specifies the host name to apply the policy to. This host serves as the left side host in the IPsec configuration. 2 Specifies the name of the interface to create on the host. 3 Specifies the host name of the cluster node that terminates the IPsec tunnel on the cluster side. The name should match SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 
4 Specifies the external host name, such as host.example.com . The name should match the SAN [Subject Alternate Name] from your supplied PKCS#12 certificates. 5 Specifies the IP address of the external host, such as 10.1.2.3/32 . To configure the IPsec interface, enter the following command: USD oc create -f ipsec-config.yaml Provide the following certificate files to add to the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in subsequent steps. left_server.p12 : The certificate bundle for the IPsec endpoints ca.pem : The certificate authority that you signed your certificates with Create a machine config to add your certificates to the cluster: To create Butane config files for the control plane and worker nodes, enter the following command: USD for role in master worker; do cat >> "99-ipsec-USD{role}-endpoint-config.bu" <<-EOF variant: openshift version: 4.16.0 metadata: name: 99-USD{role}-import-certs labels: machineconfiguration.openshift.io/role: USDrole systemd: units: - name: ipsec-import.service enabled: true contents: | [Unit] Description=Import external certs into ipsec NSS Before=ipsec.service [Service] Type=oneshot ExecStart=/usr/local/bin/ipsec-addcert.sh RemainAfterExit=false StandardOutput=journal [Install] WantedBy=multi-user.target storage: files: - path: /etc/pki/certs/ca.pem mode: 0400 overwrite: true contents: local: ca.pem - path: /etc/pki/certs/left_server.p12 mode: 0400 overwrite: true contents: local: left_server.p12 - path: /usr/local/bin/ipsec-addcert.sh mode: 0740 overwrite: true contents: inline: | #!/bin/bash -e echo "importing cert to NSS" certutil -A -n "CA" -t "CT,C,C" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem pk12util -W "" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/ certutil -M -n "left_server" -t "u,u,u" -d /var/lib/ipsec/nss/ EOF done To transform the Butane files that you created in the step into machine configs, enter the following command: USD for role in master worker; do butane -d . 99-ipsec-USD{role}-endpoint-config.bu -o ./99-ipsec-USDrole-endpoint-config.yaml done To apply the machine configs to your cluster, enter the following command: USD for role in master worker; do oc apply -f 99-ipsec-USD{role}-endpoint-config.yaml done Important As the Machine Config Operator (MCO) updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before external IPsec connectivity is available. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. To confirm that IPsec machine configs rolled out successfully, enter the following commands: Confirm that the IPsec machine configs were created: USD oc get mc | grep ipsec Example output 80-ipsec-master-extensions 3.2.0 6d15h 80-ipsec-worker-extensions 3.2.0 6d15h Confirm that the that the IPsec extension are applied to control plane nodes: USD oc get mcp master -o yaml | grep 80-ipsec-master-extensions -c Expected output 2 Confirm that the that the IPsec extension are applied to worker nodes: USD oc get mcp worker -o yaml | grep 80-ipsec-worker-extensions -c Expected output 2 Additional resources For more information about the nmstate IPsec API, see IPsec Encryption 6.6.8. 
Disabling IPsec encryption for an external IPsec endpoint As a cluster administrator, you can remove an existing IPsec tunnel to an external host. Prerequisites Install the OpenShift CLI ( oc ). You are logged in to the cluster as a user with cluster-admin privileges. You enabled IPsec in either Full or External mode on your cluster. Procedure Create a file named remove-ipsec-tunnel.yaml with the following YAML: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: <name> spec: nodeSelector: kubernetes.io/hostname: <node_name> desiredState: interfaces: - name: <tunnel_name> type: ipsec state: absent where: name Specifies a name for the node network configuration policy. node_name Specifies the name of the node where the IPsec tunnel that you want to remove exists. tunnel_name Specifies the interface name for the existing IPsec tunnel. To remove the IPsec tunnel, enter the following command: USD oc apply -f remove-ipsec-tunnel.yaml 6.6.9. Disabling IPsec encryption As a cluster administrator, you can disable IPsec encryption. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To disable IPsec encryption, enter the following command: USD oc patch networks.operator.openshift.io cluster --type=merge \ -p '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "ipsecConfig":{ "mode":"Disabled" }}}}}' Optional: You can increase the size of your cluster MTU by 46 bytes because there is no longer any overhead from the IPsec ESP header in IP packets. 6.6.10. Additional resources Configuring a VPN with IPsec in Red Hat Enterprise Linux (RHEL) 9 Installing Butane About the OVN-Kubernetes Container Network Interface (CNI) network plugin Changing the MTU for the cluster network Network [operator.openshift.io/v1] API 6.7. Zero trust networking Zero trust is an approach to designing security architectures based on the premise that every interaction begins in an untrusted state. This contrasts with traditional architectures, which might determine trustworthiness based on whether communication starts inside a firewall. More specifically, zero trust attempts to close gaps in security architectures that rely on implicit trust models and one-time authentication. OpenShift Container Platform can add some zero trust networking capabilities to containers running on the platform without requiring changes to the containers or the software running in them. There are also several products that Red Hat offers that can further augment the zero trust networking capabilities of containers. If you have the ability to change the software running in the containers, then there are other projects that Red Hat supports that can add further capabilities. Explore the following targeted capabilities of zero trust networking. 6.7.1. Root of trust Public certificates and private keys are critical to zero trust networking. These are used to identify components to one another, authenticate, and to secure traffic. The certificates are signed by other certificates, and there is a chain of trust to a root certificate authority (CA). Everything participating in the network needs to ultimately have the public key for a root CA so that it can validate the chain of trust. For public-facing things, these are usually the set of root CAs that are globally known, and whose keys are distributed with operating systems, web browsers, and so on. 
However, it is possible to run a private CA for a cluster or a corporation if the certificate of the private CA is distributed to all parties. Leverage: OpenShift Container Platform: OpenShift creates a cluster CA at installation that is used to secure the cluster resources. However, OpenShift Container Platform can also create and sign certificates for services in the cluster, and can inject the cluster CA bundle into a pod if requested. Service certificates created and signed by OpenShift Container Platform have a 26-month time to live (TTL) and are rotated automatically at 13 months. They can also be rotated manually if necessary. OpenShift cert-manager Operator : cert-manager allows you to request keys that are signed by an external root of trust. There are many configurable issuers to integrate with external issuers, along with ways to run with a delegated signing certificate. The cert-manager API can be used by other software in zero trust networking to request the necessary certificates (for example, Red Hat OpenShift Service Mesh), or can be used directly by customer software. 6.7.2. Traffic authentication and encryption Ensure that all traffic on the wire is encrypted and the endpoints are identifiable. An example of this is Mutual TLS, or mTLS, which is a method for mutual authentication. Leverage: OpenShift Container Platform: With transparent pod-to-pod IPsec , the source and destination of the traffic can be identified by the IP address. There is the capability for egress traffic to be encrypted using IPsec . By using the egress IP feature, the source IP address of the traffic can be used to identify the source of the traffic inside the cluster. Red Hat OpenShift Service Mesh : Provides powerful mTLS capabilities that can transparently augment traffic leaving a pod to provide authentication and encryption. OpenShift cert-manager Operator : Use custom resource definitions (CRDs) to request certificates that can be mounted for your programs to use for SSL/TLS protocols. 6.7.3. Identification and authentication After you have the ability to mint certificates using a CA, you can use it to establish trust relationships by verification of the identity of the other end of a connection - either a user or a client machine. This also requires management of certificate lifecycles to limit use if compromised. Leverage: OpenShift Container Platform: Cluster-signed service certificates to ensure that a client is talking to a trusted endpoint. This requires that the service uses SSL/TLS and that the client uses the cluster CA . The client identity must be provided using some other means. Red Hat Single Sign-On : Provides request authentication integration with enterprise user directories or third-party identity providers. Red Hat OpenShift Service Mesh : Transparent upgrade of connections to mTLS, auto-rotation, custom certificate expiration, and request authentication with JSON web token (JWT). OpenShift cert-manager Operator : Creation and management of certificates for use by your application. Certificates can be controlled by CRDs and mounted as secrets, or your application can be changed to interact directly with the cert-manager API. 6.7.4. Inter-service authorization It is critical to be able to control access to services based on the identity of the requester. This is done by the platform and does not require each application to implement it. That allows better auditing and inspection of the policies. 
Leverage: OpenShift Container Platform: Can enforce isolation in the networking layer of the platform using the Kubernetes NetworkPolicy and AdminNetworkPolicy objects. Red Hat OpenShift Service Mesh : Sophisticated L4 and L7 control of traffic using standard Istio objects and using mTLS to identify the source and destination of traffic and then apply policies based on that information. 6.7.5. Transaction-level verification In addition to the ability to identify and authenticate connections, it is also useful to control access to individual transactions. This can include rate-limiting by source, observability, and semantic validation that a transaction is well formed. Leverage: Red Hat OpenShift Service Mesh : Perform L7 inspection of requests, rejecting malformed HTTP requests, transaction-level observability and reporting . Service Mesh can also provide request-based authentication using JWT. 6.7.6. Risk assessment As the number of security policies in a cluster increase, visualization of what the policies allow and deny becomes increasingly important. These tools make it easier to create, visualize, and manage cluster security policies. Leverage: Red Hat OpenShift Service Mesh : Create and visualize Kubernetes NetworkPolicy and AdminNetworkPolicy , and OpenShift Networking EgressFirewall objects using the OpenShift web console . Red Hat Advanced Cluster Security for Kubernetes : Advanced visualization of objects . 6.7.7. Site-wide policy enforcement and distribution After deploying applications on a cluster, it becomes challenging to manage all of the objects that make up the security rules. It becomes critical to be able to apply site-wide policies and audit the deployed objects for compliance with the policies. This should allow for delegation of some permissions to users and cluster administrators within defined bounds, and should allow for exceptions to the policies if necessary. Leverage: Red Hat OpenShift Service Mesh : RBAC to control policy object s and delegate control. Red Hat Advanced Cluster Security for Kubernetes : Policy enforcement engine. Red Hat Advanced Cluster Management (RHACM) for Kubernetes : Centralized policy control. 6.7.8. Observability for constant, and retrospective, evaluation After you have a running cluster, you want to be able to observe the traffic and verify that the traffic comports with the defined rules. This is important for intrusion detection, forensics, and is helpful for operational load management. Leverage: Network Observability Operator : Allows for inspection, monitoring, and alerting on network connections to pods and nodes in the cluster. Red Hat Advanced Cluster Management (RHACM) for Kubernetes : Monitors, collects, and evaluates system-level events such as process execution, network connections and flows, and privilege escalation. It can determine a baseline for a cluster, and then detect anomalous activity and alert you about it. Red Hat OpenShift Service Mesh : Can monitor traffic entering and leaving a pod. Red Hat OpenShift distributed tracing platform : For suitably instrumented applications, you can see all traffic associated with a particular action as it splits into sub-requests to microservices. This allows you to identify bottlenecks within a distributed application. 6.7.9. Endpoint security It is important to be able to trust that the software running the services in your cluster has not been compromised. 
For example, you might need to ensure that certified images are run on trusted hardware, and have policies to only allow connections to or from an endpoint based on endpoint characteristics. Leverage: OpenShift Container Platform: Secureboot can ensure that the nodes in the cluster are running trusted software, so the platform itself (including the container runtime) have not been tampered with. You can configure OpenShift Container Platform to only run images that have been signed by certain signatures . Red Hat Trusted Artifact Signer : This can be used in a trusted build chain and produce signed container images. 6.7.10. Extending trust outside of the cluster You might want to extend trust outside of the cluster by allowing a cluster to mint CAs for a subdomain. Alternatively, you might want to attest to workload identity in the cluster to a remote endpoint. Leverage: OpenShift cert-manager Operator : You can use cert-manager to manage delegated CAs so that you can distribute trust across different clusters, or through your organization. Red Hat OpenShift Service Mesh : Can use SPIFFE to provide remote attestation of workloads to endpoints running in remote or local clusters.
[ "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: \"deny-all-ingress-tenant-1\" 5 action: \"Deny\" from: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 6 egress: 7 - name: \"pass-all-egress-to-tenant-1\" action: \"Pass\" to: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. ingress: - name: \"allow-ingress-from-monitoring\" action: \"Allow\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: \"pass-ingress-from-monitoring\" action: \"Pass\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: \"deny-all-ingress-from-tenant-1\" 4 action: \"Deny\" from: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: \"allow-all-egress-to-tenant-1\" action: \"Allow\" to: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: egress-security-allow spec: egress: - action: Deny to: - nodes: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists - action: Allow name: allow-to-kubernetes-api-server-and-engr-dept-pods ports: - portNumber: port: 6443 protocol: TCP to: - nodes: 1 matchExpressions: - key: node-role.kubernetes.io/control-plane operator: Exists - pods: 2 namespaceSelector: matchLabels: dept: engr podSelector: {} priority: 55 subject: 3 namespaces: matchExpressions: - key: security 4 operator: In values: - restricted - confidential - internal", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: network-as-egress-peer spec: priority: 70 subject: namespaces: {} # Use the empty selector with caution 
because it also selects OpenShift namespaces as well. egress: - name: \"deny-egress-to-external-dns-servers\" action: \"Deny\" to: - networks: 1 - 8.8.8.8/32 - 8.8.4.4/32 - 208.67.222.222/32 ports: - portNumber: protocol: UDP port: 53 - name: \"allow-all-egress-to-intranet\" action: \"Allow\" to: - networks: 2 - 89.246.180.0/22 - 60.45.72.0/22 - name: \"allow-all-intra-cluster-traffic\" action: \"Allow\" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. - name: \"pass-all-egress-to-internet\" action: \"Pass\" to: - networks: - 0.0.0.0/0 3 --- apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: \"deny-all-egress-to-internet\" action: \"Deny\" to: - networks: - 0.0.0.0/0 4 ---", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: egress-peer-1 1 spec: egress: 2 - action: \"Allow\" name: \"allow-egress\" to: - nodes: matchExpressions: - key: worker-group operator: In values: - workloads # Egress traffic from nodes with label worker-group: workloads is allowed. - networks: - 104.154.164.170/32 - pods: namespaceSelector: matchLabels: apps: external-apps podSelector: matchLabels: app: web # This rule in the policy allows the traffic directed to pods labeled apps: web in projects with apps: external-apps to leave the cluster. - action: \"Deny\" name: \"deny-egress\" to: - nodes: matchExpressions: - key: worker-group operator: In values: - infra # Egress traffic from nodes with label worker-group: infra is denied. - networks: - 104.154.164.160/32 # Egress traffic to this IP address from cluster is denied. - pods: namespaceSelector: matchLabels: apps: internal-apps podSelector: {} - action: \"Pass\" name: \"pass-egress\" to: - nodes: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists # All other egress traffic is passed to NetworkPolicy or BANP for evaluation. 
priority: 30 3 subject: 4 namespaces: matchLabels: apps: all-apps", "Conditions: Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-control-plane Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-worker Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-worker2", "Status: Conditions: Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-worker-1.example.example-org.net Last Transition Time: 2024-06-25T12:47:45Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-worker-0.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-1.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-2.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-0.example.example-org.net ```", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: cluster-control spec: priority: 34 subject: namespaces: matchLabels: anp: cluster-control-anp # Only namespaces with this label have this ANP ingress: - name: \"allow-from-ingress-router\" # rule0 action: \"Allow\" from: - namespaces: matchLabels: policy-group.network.openshift.io/ingress: \"\" - name: \"allow-from-monitoring\" # rule1 action: \"Allow\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: openshift-monitoring ports: - portNumber: protocol: TCP port: 7564 - namedPort: \"scrape\" - name: \"allow-from-open-tenants\" # rule2 action: \"Allow\" from: - namespaces: # open tenants matchLabels: tenant: open - name: \"pass-from-restricted-tenants\" # rule3 action: \"Pass\" from: - namespaces: # restricted tenants matchLabels: tenant: restricted - name: \"default-deny\" # rule4 action: \"Deny\" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. 
egress: - name: \"allow-to-dns\" # rule0 action: \"Allow\" to: - pods: namespaceSelector: matchlabels: kubernetes.io/metadata.name: openshift-dns podSelector: matchlabels: app: dns ports: - portNumber: protocol: UDP port: 5353 - name: \"allow-to-kapi-server\" # rule1 action: \"Allow\" to: - nodes: matchExpressions: - key: node-role.kubernetes.io/control-plane operator: Exists ports: - portNumber: protocol: TCP port: 6443 - name: \"allow-to-splunk\" # rule2 action: \"Allow\" to: - namespaces: matchlabels: tenant: splunk ports: - portNumber: protocol: TCP port: 8991 - portNumber: protocol: TCP port: 8992 - name: \"allow-to-open-tenants-and-intranet-and-worker-nodes\" # rule3 action: \"Allow\" to: - nodes: # worker-nodes matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists - networks: # intranet - 172.29.0.0/30 - 10.0.54.0/19 - 10.0.56.38/32 - 10.0.69.0/24 - namespaces: # open tenants matchLabels: tenant: open - name: \"pass-to-restricted-tenants\" # rule4 action: \"Pass\" to: - namespaces: # restricted tenants matchLabels: tenant: restricted - name: \"default-deny\" action: \"Deny\" to: - networks: - 0.0.0.0/0", "oc get pods -n openshift-ovn-kubernetes -owide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-5c95487779-8k9fd 2/2 Running 0 34m 10.0.0.5 ci-ln-0tv5gg2-72292-6sjw5-master-0 <none> <none> ovnkube-control-plane-5c95487779-v2xn8 2/2 Running 0 34m 10.0.0.3 ci-ln-0tv5gg2-72292-6sjw5-master-1 <none> <none> ovnkube-node-524dt 8/8 Running 0 33m 10.0.0.4 ci-ln-0tv5gg2-72292-6sjw5-master-2 <none> <none> ovnkube-node-gbwr9 8/8 Running 0 24m 10.0.128.4 ci-ln-0tv5gg2-72292-6sjw5-worker-c-s9gqt <none> <none> ovnkube-node-h4fpx 8/8 Running 0 33m 10.0.0.5 ci-ln-0tv5gg2-72292-6sjw5-master-0 <none> <none> ovnkube-node-j4hzw 8/8 Running 0 24m 10.0.128.2 ci-ln-0tv5gg2-72292-6sjw5-worker-a-hzbh5 <none> <none> ovnkube-node-wdhgv 8/8 Running 0 33m 10.0.0.3 ci-ln-0tv5gg2-72292-6sjw5-master-1 <none> <none> ovnkube-node-wfncn 8/8 Running 0 24m 10.0.128.3 ci-ln-0tv5gg2-72292-6sjw5-worker-b-5bb7f <none> <none>", "oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-524dt", "ovn-nbctl find ACL 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,\"k8s.ovn.org/name\"=cluster-control}'", "_uuid : 0d5e4722-b608-4bb1-b625-23c323cc9926 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"2\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa13730899355151937870))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:2\" options : {} priority : 26598 severity : [] tier : 1 _uuid : b7be6472-df67-439c-8c9c-f55929f0a6e0 action : drop direction : from-lport external_ids : {direction=Egress, gress-index=\"5\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa11452480169090787059))\" meter : acl-logging name : \"ANP:cluster-control:Egress:5\" options : {apply-after-lb=\"true\"} priority : 
26595 severity : [] tier : 1 _uuid : 5a6e5bb4-36eb-4209-b8bc-c611983d4624 action : pass direction : to-lport external_ids : {direction=Ingress, gress-index=\"3\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa764182844364804195))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:3\" options : {} priority : 26597 severity : [] tier : 1 _uuid : 04f20275-c410-405c-a923-0e677f767889 action : pass direction : from-lport external_ids : {direction=Egress, gress-index=\"4\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa5972452606168369118))\" meter : acl-logging name : \"ANP:cluster-control:Egress:4\" options : {apply-after-lb=\"true\"} priority : 26596 severity : [] tier : 1 _uuid : 4b5d836a-e0a3-4088-825e-f9f0ca58e538 action : drop direction : to-lport external_ids : {direction=Ingress, gress-index=\"4\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa13814616246365836720))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:4\" options : {} priority : 26596 severity : [] tier : 1 _uuid : 5d09957d-d2cc-4f5a-9ddd-b97d9d772023 action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"2\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:tcp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa18396736153283155648)) && tcp && tcp.dst=={8991,8992}\" meter : acl-logging name : \"ANP:cluster-control:Egress:2\" options : {apply-after-lb=\"true\"} priority : 26598 severity : [] tier : 1 _uuid : 1a68a5ed-e7f9-47d0-b55c-89184d97e81a action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"1\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:tcp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa10706246167277696183)) && tcp && tcp.dst==6443\" meter : acl-logging name : \"ANP:cluster-control:Egress:1\" options : {apply-after-lb=\"true\"} priority : 26599 severity : [] tier : 1 _uuid : aa1a224d-7960-4952-bdfb-35246bafbac8 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"1\", 
\"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa6786643370959569281)) && tcp && tcp.dst==7564\" meter : acl-logging name : \"ANP:cluster-control:Ingress:1\" options : {} priority : 26599 severity : [] tier : 1 _uuid : 1a27d30e-3f96-4915-8ddd-ade7f22c117b action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"3\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa10622494091691694581))\" meter : acl-logging name : \"ANP:cluster-control:Egress:3\" options : {apply-after-lb=\"true\"} priority : 26597 severity : [] tier : 1 _uuid : b23a087f-08f8-4225-8c27-4a9a9ee0c407 action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"0\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:udp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=udp} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa13517855690389298082)) && udp && udp.dst==5353\" meter : acl-logging name : \"ANP:cluster-control:Egress:0\" options : {apply-after-lb=\"true\"} priority : 26600 severity : [] tier : 1 _uuid : d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"0\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa14545668191619617708))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:0\" options : {} priority : 26600 severity : [] tier : 1", "ovn-nbctl find ACL 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,direction=Ingress,\"k8s.ovn.org/name\"=cluster-control,gress-index=\"1\"}'", "_uuid : aa1a224d-7960-4952-bdfb-35246bafbac8 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"1\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa6786643370959569281)) && tcp && tcp.dst==7564\" meter : acl-logging name : \"ANP:cluster-control:Ingress:1\" options : {} priority : 26599 severity : [] tier : 1", "ovn-nbctl find Address_Set 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,\"k8s.ovn.org/name\"=cluster-control}'", "_uuid : 56e89601-5552-4238-9fc3-8833f5494869 addresses : [\"192.168.194.135\", 
\"192.168.194.152\", \"192.168.194.193\", \"192.168.194.254\"] external_ids : {direction=Egress, gress-index=\"1\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a10706246167277696183 _uuid : 7df9330d-380b-4bdb-8acd-4eddeda2419c addresses : [\"10.132.0.10\", \"10.132.0.11\", \"10.132.0.12\", \"10.132.0.13\", \"10.132.0.14\", \"10.132.0.15\", \"10.132.0.16\", \"10.132.0.17\", \"10.132.0.5\", \"10.132.0.7\", \"10.132.0.71\", \"10.132.0.75\", \"10.132.0.8\", \"10.132.0.81\", \"10.132.0.9\", \"10.132.2.10\", \"10.132.2.11\", \"10.132.2.12\", \"10.132.2.14\", \"10.132.2.15\", \"10.132.2.3\", \"10.132.2.4\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.132.3.64\", \"10.132.3.65\", \"10.132.3.72\", \"10.132.3.73\", \"10.132.3.76\", \"10.133.0.10\", \"10.133.0.11\", \"10.133.0.12\", \"10.133.0.13\", \"10.133.0.14\", \"10.133.0.15\", \"10.133.0.16\", \"10.133.0.17\", \"10.133.0.18\", \"10.133.0.19\", \"10.133.0.20\", \"10.133.0.21\", \"10.133.0.22\", \"10.133.0.23\", \"10.133.0.24\", \"10.133.0.25\", \"10.133.0.26\", \"10.133.0.27\", \"10.133.0.28\", \"10.133.0.29\", \"10.133.0.30\", \"10.133.0.31\", \"10.133.0.32\", \"10.133.0.33\", \"10.133.0.34\", \"10.133.0.35\", \"10.133.0.36\", \"10.133.0.37\", \"10.133.0.38\", \"10.133.0.39\", \"10.133.0.40\", \"10.133.0.41\", \"10.133.0.42\", \"10.133.0.44\", \"10.133.0.45\", \"10.133.0.46\", \"10.133.0.47\", \"10.133.0.48\", \"10.133.0.5\", \"10.133.0.6\", \"10.133.0.7\", \"10.133.0.8\", \"10.133.0.9\", \"10.134.0.10\", \"10.134.0.11\", \"10.134.0.12\", \"10.134.0.13\", \"10.134.0.14\", \"10.134.0.15\", \"10.134.0.16\", \"10.134.0.17\", \"10.134.0.18\", \"10.134.0.19\", \"10.134.0.20\", \"10.134.0.21\", \"10.134.0.22\", \"10.134.0.23\", \"10.134.0.24\", \"10.134.0.25\", \"10.134.0.26\", \"10.134.0.27\", \"10.134.0.28\", \"10.134.0.30\", \"10.134.0.31\", \"10.134.0.32\", \"10.134.0.33\", \"10.134.0.34\", \"10.134.0.35\", \"10.134.0.36\", \"10.134.0.37\", \"10.134.0.38\", \"10.134.0.4\", \"10.134.0.42\", \"10.134.0.9\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.13\", \"10.135.0.14\", \"10.135.0.15\", \"10.135.0.16\", \"10.135.0.17\", \"10.135.0.18\", \"10.135.0.19\", \"10.135.0.23\", \"10.135.0.24\", \"10.135.0.26\", \"10.135.0.27\", \"10.135.0.29\", \"10.135.0.3\", \"10.135.0.4\", \"10.135.0.40\", \"10.135.0.41\", \"10.135.0.42\", \"10.135.0.43\", \"10.135.0.44\", \"10.135.0.5\", \"10.135.0.6\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Ingress, gress-index=\"4\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a13814616246365836720 _uuid : 84d76f13-ad95-4c00-8329-a0b1d023c289 addresses : [\"10.132.3.76\", \"10.135.0.44\"] external_ids : {direction=Egress, gress-index=\"4\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a5972452606168369118 _uuid : 0c53e917-f7ee-4256-8f3a-9522c0481e52 addresses : [\"10.132.0.10\", 
\"10.132.0.11\", \"10.132.0.12\", \"10.132.0.13\", \"10.132.0.14\", \"10.132.0.15\", \"10.132.0.16\", \"10.132.0.17\", \"10.132.0.5\", \"10.132.0.7\", \"10.132.0.71\", \"10.132.0.75\", \"10.132.0.8\", \"10.132.0.81\", \"10.132.0.9\", \"10.132.2.10\", \"10.132.2.11\", \"10.132.2.12\", \"10.132.2.14\", \"10.132.2.15\", \"10.132.2.3\", \"10.132.2.4\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.132.3.64\", \"10.132.3.65\", \"10.132.3.72\", \"10.132.3.73\", \"10.132.3.76\", \"10.133.0.10\", \"10.133.0.11\", \"10.133.0.12\", \"10.133.0.13\", \"10.133.0.14\", \"10.133.0.15\", \"10.133.0.16\", \"10.133.0.17\", \"10.133.0.18\", \"10.133.0.19\", \"10.133.0.20\", \"10.133.0.21\", \"10.133.0.22\", \"10.133.0.23\", \"10.133.0.24\", \"10.133.0.25\", \"10.133.0.26\", \"10.133.0.27\", \"10.133.0.28\", \"10.133.0.29\", \"10.133.0.30\", \"10.133.0.31\", \"10.133.0.32\", \"10.133.0.33\", \"10.133.0.34\", \"10.133.0.35\", \"10.133.0.36\", \"10.133.0.37\", \"10.133.0.38\", \"10.133.0.39\", \"10.133.0.40\", \"10.133.0.41\", \"10.133.0.42\", \"10.133.0.44\", \"10.133.0.45\", \"10.133.0.46\", \"10.133.0.47\", \"10.133.0.48\", \"10.133.0.5\", \"10.133.0.6\", \"10.133.0.7\", \"10.133.0.8\", \"10.133.0.9\", \"10.134.0.10\", \"10.134.0.11\", \"10.134.0.12\", \"10.134.0.13\", \"10.134.0.14\", \"10.134.0.15\", \"10.134.0.16\", \"10.134.0.17\", \"10.134.0.18\", \"10.134.0.19\", \"10.134.0.20\", \"10.134.0.21\", \"10.134.0.22\", \"10.134.0.23\", \"10.134.0.24\", \"10.134.0.25\", \"10.134.0.26\", \"10.134.0.27\", \"10.134.0.28\", \"10.134.0.30\", \"10.134.0.31\", \"10.134.0.32\", \"10.134.0.33\", \"10.134.0.34\", \"10.134.0.35\", \"10.134.0.36\", \"10.134.0.37\", \"10.134.0.38\", \"10.134.0.4\", \"10.134.0.42\", \"10.134.0.9\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.13\", \"10.135.0.14\", \"10.135.0.15\", \"10.135.0.16\", \"10.135.0.17\", \"10.135.0.18\", \"10.135.0.19\", \"10.135.0.23\", \"10.135.0.24\", \"10.135.0.26\", \"10.135.0.27\", \"10.135.0.29\", \"10.135.0.3\", \"10.135.0.4\", \"10.135.0.40\", \"10.135.0.41\", \"10.135.0.42\", \"10.135.0.43\", \"10.135.0.44\", \"10.135.0.5\", \"10.135.0.6\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Egress, gress-index=\"2\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a18396736153283155648 _uuid : 5228bf1b-dfd8-40ec-bfa8-95c5bf9aded9 addresses : [] external_ids : {direction=Ingress, gress-index=\"0\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a14545668191619617708 _uuid : 46530d69-70da-4558-8c63-884ec9dc4f25 addresses : [\"10.132.2.10\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.133.0.47\", \"10.134.0.33\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.19\", \"10.135.0.24\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Ingress, gress-index=\"1\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, 
\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a6786643370959569281 _uuid : 65fdcdea-0b9f-4318-9884-1b51d231ad1d addresses : [\"10.132.3.72\", \"10.135.0.42\"] external_ids : {direction=Ingress, gress-index=\"2\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a13730899355151937870 _uuid : 73eabdb0-36bf-4ca3-b66d-156ac710df4c addresses : [\"10.0.32.0/19\", \"10.0.56.38/32\", \"10.0.69.0/24\", \"10.132.3.72\", \"10.135.0.42\", \"172.29.0.0/30\", \"192.168.194.103\", \"192.168.194.2\"] external_ids : {direction=Egress, gress-index=\"3\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a10622494091691694581 _uuid : 50cdbef2-71b5-474b-914c-6fcd1d7712d3 addresses : [\"10.132.0.10\", \"10.132.0.11\", \"10.132.0.12\", \"10.132.0.13\", \"10.132.0.14\", \"10.132.0.15\", \"10.132.0.16\", \"10.132.0.17\", \"10.132.0.5\", \"10.132.0.7\", \"10.132.0.71\", \"10.132.0.75\", \"10.132.0.8\", \"10.132.0.81\", \"10.132.0.9\", \"10.132.2.10\", \"10.132.2.11\", \"10.132.2.12\", \"10.132.2.14\", \"10.132.2.15\", \"10.132.2.3\", \"10.132.2.4\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.132.3.64\", \"10.132.3.65\", \"10.132.3.72\", \"10.132.3.73\", \"10.132.3.76\", \"10.133.0.10\", \"10.133.0.11\", \"10.133.0.12\", \"10.133.0.13\", \"10.133.0.14\", \"10.133.0.15\", \"10.133.0.16\", \"10.133.0.17\", \"10.133.0.18\", \"10.133.0.19\", \"10.133.0.20\", \"10.133.0.21\", \"10.133.0.22\", \"10.133.0.23\", \"10.133.0.24\", \"10.133.0.25\", \"10.133.0.26\", \"10.133.0.27\", \"10.133.0.28\", \"10.133.0.29\", \"10.133.0.30\", \"10.133.0.31\", \"10.133.0.32\", \"10.133.0.33\", \"10.133.0.34\", \"10.133.0.35\", \"10.133.0.36\", \"10.133.0.37\", \"10.133.0.38\", \"10.133.0.39\", \"10.133.0.40\", \"10.133.0.41\", \"10.133.0.42\", \"10.133.0.44\", \"10.133.0.45\", \"10.133.0.46\", \"10.133.0.47\", \"10.133.0.48\", \"10.133.0.5\", \"10.133.0.6\", \"10.133.0.7\", \"10.133.0.8\", \"10.133.0.9\", \"10.134.0.10\", \"10.134.0.11\", \"10.134.0.12\", \"10.134.0.13\", \"10.134.0.14\", \"10.134.0.15\", \"10.134.0.16\", \"10.134.0.17\", \"10.134.0.18\", \"10.134.0.19\", \"10.134.0.20\", \"10.134.0.21\", \"10.134.0.22\", \"10.134.0.23\", \"10.134.0.24\", \"10.134.0.25\", \"10.134.0.26\", \"10.134.0.27\", \"10.134.0.28\", \"10.134.0.30\", \"10.134.0.31\", \"10.134.0.32\", \"10.134.0.33\", \"10.134.0.34\", \"10.134.0.35\", \"10.134.0.36\", \"10.134.0.37\", \"10.134.0.38\", \"10.134.0.4\", \"10.134.0.42\", \"10.134.0.9\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.13\", \"10.135.0.14\", \"10.135.0.15\", \"10.135.0.16\", \"10.135.0.17\", \"10.135.0.18\", \"10.135.0.19\", \"10.135.0.23\", \"10.135.0.24\", \"10.135.0.26\", \"10.135.0.27\", \"10.135.0.29\", \"10.135.0.3\", \"10.135.0.4\", \"10.135.0.40\", \"10.135.0.41\", \"10.135.0.42\", \"10.135.0.43\", \"10.135.0.44\", \"10.135.0.5\", \"10.135.0.6\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Egress, gress-index=\"0\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:v4\", \"k8s.ovn.org/name\"=cluster-control, 
\"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a13517855690389298082 _uuid : 32a42f32-2d11-43dd-979d-a56d7ee6aa57 addresses : [\"10.132.3.76\", \"10.135.0.44\"] external_ids : {direction=Ingress, gress-index=\"3\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a764182844364804195 _uuid : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72 addresses : [\"0.0.0.0/0\"] external_ids : {direction=Egress, gress-index=\"5\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a11452480169090787059", "ovn-nbctl find Address_Set 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,direction=Egress,\"k8s.ovn.org/name\"=cluster-control,gress-index=\"5\"}'", "_uuid : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72 addresses : [\"0.0.0.0/0\"] external_ids : {direction=Egress, gress-index=\"5\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a11452480169090787059", "ovn-nbctl find Port_Group 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,\"k8s.ovn.org/name\"=cluster-control}'", "_uuid : f50acf71-7488-4b9a-b7b8-c8a024e99d21 acls : [04f20275-c410-405c-a923-0e677f767889, 0d5e4722-b608-4bb1-b625-23c323cc9926, 1a27d30e-3f96-4915-8ddd-ade7f22c117b, 1a68a5ed-e7f9-47d0-b55c-89184d97e81a, 4b5d836a-e0a3-4088-825e-f9f0ca58e538, 5a6e5bb4-36eb-4209-b8bc-c611983d4624, 5d09957d-d2cc-4f5a-9ddd-b97d9d772023, aa1a224d-7960-4952-bdfb-35246bafbac8, b23a087f-08f8-4225-8c27-4a9a9ee0c407, b7be6472-df67-439c-8c9c-f55929f0a6e0, d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd] external_ids : {\"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a14645450421485494999 ports : [5e75f289-8273-4f8a-8798-8c10f7318833, de7e1b71-6184-445d-93e7-b20acadf41ea]", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "apiVersion: networking.k8s.io/v1 
kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3", "oc apply -f deny-by-default.yaml", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}", "oc apply -f web-allow-external.yaml", "networkpolicy.networking.k8s.io/web-allow-external created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2", "oc apply -f web-allow-all-namespaces.yaml", "networkpolicy.networking.k8s.io/web-allow-all-namespaces created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { 
color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2", "oc apply -f web-allow-prod.yaml", "networkpolicy.networking.k8s.io/web-allow-prod created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc create namespace dev", "oc label namespace/dev purpose=testing", "oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "wget: download timed out", "oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "oc get networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy <policy_name> -n <namespace>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/default-deny deleted", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n 
openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }", "<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name=\"<acl_name>\", verdict=\"<verdict>\", severity=\"<severity>\", direction=\"<direction>\": <flow>", 
"<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>", "2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\", \"pass\" : \"warning\" }' name: anp-tenant-log spec: priority: 5 subject: namespaces: matchLabels: tenant: backend-storage # Selects all pods owned by storage tenant. ingress: - name: \"allow-all-ingress-product-development-and-customer\" # Product development and customer tenant ingress to backend storage. action: \"Allow\" from: - pods: namespaceSelector: matchExpressions: - key: tenant operator: In values: - product-development - customer podSelector: {} - name: \"pass-all-ingress-product-security\" action: \"Pass\" from: - namespaces: matchLabels: tenant: product-security - name: \"deny-all-ingress\" # Ingress to backend from all other pods in the cluster. action: \"Deny\" from: - namespaces: {} egress: - name: \"allow-all-egress-product-development\" action: \"Allow\" to: - pods: namespaceSelector: matchLabels: tenant: product-development podSelector: {} - name: \"pass-egress-product-security\" action: \"Pass\" to: - namespaces: matchLabels: tenant: product-security - name: \"deny-all-egress\" # Egress from backend denied to all other pods. 
action: \"Deny\" to: - namespaces: {}", "2024-06-10T16:27:45.194Z|00052|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1a,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.26,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=57814,tp_dst=8080,tcp_flags=syn 2024-06-10T16:28:23.130Z|00059|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:18,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.24,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=38620,tp_dst=8080,tcp_flags=ack 2024-06-10T16:28:38.293Z|00069|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:0\", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1a,nw_src=10.128.2.25,nw_dst=10.128.2.26,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=47566,tp_dst=8080,tcp_flags=fin|ack=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=55704,tp_dst=8080,tcp_flags=ack", "2024-06-10T16:33:12.019Z|00075|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:1\", verdict=pass, severity=warning, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1b,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.27,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=37394,tp_dst=8080,tcp_flags=ack 2024-06-10T16:35:04.209Z|00081|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:1\", verdict=pass, severity=warning, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1b,nw_src=10.128.2.25,nw_dst=10.128.2.27,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=34018,tp_dst=8080,tcp_flags=ack", "2024-06-10T16:43:05.287Z|00087|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:2\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:18,nw_src=10.128.2.25,nw_dst=10.128.2.24,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=51598,tp_dst=8080,tcp_flags=syn 2024-06-10T16:44:43.591Z|00090|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:2\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1c,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.28,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=33774,tp_dst=8080,tcp_flags=syn", "apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\"}' name: default spec: subject: namespaces: matchLabels: tenant: workloads # Selects all workload pods in the cluster. ingress: - name: \"default-allow-dns\" # This rule allows ingress from dns tenant to all workloads. action: \"Allow\" from: - namespaces: matchLabels: tenant: dns - name: \"default-deny-dns\" # This rule denies all ingress from all pods to workloads. action: \"Deny\" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: \"default-deny-dns\" # This rule denies all egress from workloads. It will be applied when no ANP or network policy matches. 
action: \"Deny\" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.", "2024-06-10T18:11:58.263Z|00022|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=syn 2024-06-10T18:11:58.264Z|00023|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=psh|ack 2024-06-10T18:11:58.264Z|00024|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00025|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00026|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=fin|ack 2024-06-10T18:11:58.264Z|00027|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack", "2024-06-10T18:09:57.774Z|00016|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:09:58.809Z|00017|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:00.857Z|00018|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:25.414Z|00019|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:26.457Z|00020|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, 
direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:28.505Z|00021|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn", "oc edit network.operator.openshift.io/cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF", "namespace/verify-audit-logging created", "cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF", "networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created", "cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF", "for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done", "pod/client created pod/server created", "POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')", "oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms", "oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 
64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0", "oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }", "namespace/verify-audit-logging annotated", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", 
verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0", "oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null", "namespace/verify-audit-logging annotated", "oc get egressfirewall --all-namespaces", "oc describe egressfirewall <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressfirewall", "oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressfirewall", "oc delete -n <project> egressfirewall <name>", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: networking.openshift.io/v1alpha1 kind: DNSNameResolver spec: name: www.example.com. 1 status: resolvedNames: - dnsName: www.example.com. 
2 resolvedAddress: - ip: \"1.2.3.4\" 3 ttlSeconds: 60 4 lastLookupTime: \"2023-08-08T15:07:04Z\" 5", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6", "ports: - port: <port> 1 protocol: <protocol> 2", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressfirewall.k8s.ovn.org/v1 created", "oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"ipsecConfig\":{ \"mode\":<mode> }}}}}'", "oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node", "ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ovnkube-node-fr4ld 8/8 Running 0 26m ovnkube-node-wgs4l 8/8 Running 0 33m ovnkube-node-zfvcl 8/8 Running 0 34m", "oc -n openshift-ovn-kubernetes rsh ovnkube-node-<XXXXX> ovn-nbctl --no-leader-only get nb_global . ipsec", "true", "oc get nodes", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: \"<hostname>\" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftrsasigkey: '%cert' leftcert: left_server leftmodecfgclient: false right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: transport", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: \"<hostname>\" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftmodecfgclient: false leftrsasigkey: '%cert' leftcert: left_server right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: tunnel", "oc create -f ipsec-config.yaml", "for role in master worker; do cat >> \"99-ipsec-USD{role}-endpoint-config.bu\" <<-EOF variant: openshift version: 4.16.0 metadata: name: 99-USD{role}-import-certs labels: machineconfiguration.openshift.io/role: USDrole systemd: units: - name: ipsec-import.service enabled: true contents: | [Unit] Description=Import external certs into ipsec NSS Before=ipsec.service [Service] Type=oneshot ExecStart=/usr/local/bin/ipsec-addcert.sh RemainAfterExit=false StandardOutput=journal [Install] WantedBy=multi-user.target storage: files: - path: /etc/pki/certs/ca.pem mode: 0400 overwrite: true contents: local: ca.pem - path: /etc/pki/certs/left_server.p12 mode: 0400 overwrite: true contents: local: left_server.p12 - path: /usr/local/bin/ipsec-addcert.sh mode: 0740 overwrite: true contents: inline: | #!/bin/bash -e echo \"importing cert to NSS\" certutil -A -n \"CA\" -t \"CT,C,C\" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem pk12util -W \"\" -i 
/etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/ certutil -M -n \"left_server\" -t \"u,u,u\" -d /var/lib/ipsec/nss/ EOF done", "for role in master worker; do butane -d . 99-ipsec-USD{role}-endpoint-config.bu -o ./99-ipsec-USDrole-endpoint-config.yaml done", "for role in master worker; do oc apply -f 99-ipsec-USD{role}-endpoint-config.yaml done", "oc get mcp", "oc get mc | grep ipsec", "80-ipsec-master-extensions 3.2.0 6d15h 80-ipsec-worker-extensions 3.2.0 6d15h", "oc get mcp master -o yaml | grep 80-ipsec-master-extensions -c", "2", "oc get mcp worker -o yaml | grep 80-ipsec-worker-extensions -c", "2", "kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: <name> spec: nodeSelector: kubernetes.io/hostname: <node_name> desiredState: interfaces: - name: <tunnel_name> type: ipsec state: absent", "oc apply -f remove-ipsec-tunnel.yaml", "oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"ipsecConfig\":{ \"mode\":\"Disabled\" }}}}}'" ]
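The annotation and patch commands above change namespace ACL logging and the cluster-wide IPsec mode. As a quick, hedged verification sketch (not part of the original procedure), the applied values can be read back directly; <namespace> is a placeholder for the annotated namespace.

```bash
# Confirm the k8s.ovn.org/acl-logging annotation set by the commands above.
oc get namespace <namespace> -o yaml | grep -A 2 'k8s.ovn.org/acl-logging'

# Confirm the IPsec mode recorded in the cluster network operator configuration.
oc get networks.operator.openshift.io cluster \
  -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.ipsecConfig.mode}{"\n"}'
```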
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/network-security
Chapter 39. Viewing process instance logs
Chapter 39. Viewing process instance logs You can view all the process events of an instance from its Logs tab. The instance logs list all the current and previous process states. Business Central has two types of logs for process instances: Business and Technical logs. Procedure In Business Central, go to Menu Manage Process Instances . On the Manage Process Instances page, click the process instance whose log you want to view. Select the Logs tab: Click Business to view the business events log. Click Technical to view the technical events log. Click Asc or Desc to change the order of the log files.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/managing-business-central-using-process-instances-logs-proc
18.10. Keyboard Configuration
18.10. Keyboard Configuration To add multiple keyboard layouts to your system, select Keyboard from the Installation Summary screen. Upon saving, the keyboard layouts are immediately available in the installation program and you can switch between them by using the keyboard icon located at all times in the upper right corner of the screen. Initially, only the language you selected in the welcome screen is listed as the keyboard layout in the left pane. You can either replace the initial layout or add more layouts. However, if your language does not use ASCII characters, you might need to add a keyboard layout that does, to be able to properly set a password for an encrypted disk partition or the root user, among other things. Figure 18.6. Keyboard Configuration To add an additional layout, click the + button, select it from the list, and click Add . To delete a layout, select it and click the - button. Use the arrow buttons to arrange the layouts in order of preference. For a visual preview of the keyboard layout, select it and click the keyboard button. To test a layout, use the mouse to click inside the text box on the right. Type some text to confirm that your selection functions correctly. To test additional layouts, you can click the language selector at the top on the screen to switch them. However, it is recommended to set up a keyboard combination for switching layout. Click the Options button at the right to open the Layout Switching Options dialog and choose a combination from the list by selecting its check box. The combination will then be displayed above the Options button. This combination applies both during the installation and on the installed system, so you must configure a combination here in order to use one after installation. You can also select more than one combination to switch between layouts. Important If you use a layout that cannot accept Latin characters, such as Russian , Red Hat recommends additionally adding the English (United States) layout and configuring a keyboard combination to switch between the two layouts. If you only select a layout without Latin characters, you might be unable to enter a valid root password and user credentials later in the installation process. This can prevent you from completing the installation. Once you have made your selection, click Done to return to the Installation Summary screen. Note To change your keyboard configuration after you have completed the installation, visit the Keyboard section of the Settings dialogue window.
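If you prefer to adjust layouts on the installed system from a shell instead of the Settings window, the localectl utility can do the equivalent. The following is a hedged sketch; the layouts and switching option shown are only examples.

```bash
# List the available X11 layouts, then configure US English plus Russian with
# Alt+Shift as the layout-switching combination on the installed system.
localectl list-x11-keymap-layouts | head
localectl set-x11-keymap "us,ru" "" "" "grp:alt_shift_toggle"

# Confirm the resulting keyboard settings.
localectl status
```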
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-keyboard-configuration-s390
12.4.2. Related Books
12.4.2. Related Books Red Hat Enterprise Linux 6 Security Guide A guide to securing Red Hat Enterprise Linux 6. It contains valuable information on how to set up the firewall, as well as the configuration of SELinux .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-services-additional-resources-books
16.4. Search Models Via Metadata Properties
16.4. Search Models Via Metadata Properties The Teiid Designer provides a search capability to find model objects that are characterized by one or more metadata property values. To search your models using metadata: Click the Search > Teiid Designer > Metadata... action on the main toolbar, which opens the Search dialog. Specify the desired search options for Criteria, Object Type, Data Type and Scope. Click Search . The search will be performed and the results will be displayed in the Search Results View . If the view is not yet open, it will be opened automatically.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/search_models_via_metadata_properties
14.4. Changing the Media of a CDROM
14.4. Changing the Media of a CDROM Changing the media of a CDROM to another source or format:
--path - A string containing a fully-qualified path or target of disk device
--source - A string containing the source of the media
--eject - Eject the media
--insert - Insert the media
--update - Update the media
--current - Can be either or both of --live and --config , depends on implementation of hypervisor driver
--live - Alter live configuration of running domain
--config - Alter persistent configuration, effect observed on boot
--force - Force media changing
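A hedged usage sketch follows; the domain name, device target, and ISO path are illustrative placeholders, not values from this guide.

```bash
# Swap the ISO presented to guest "rhel6vm" on device target hdc, affecting only
# the running (live) configuration.
virsh change-media rhel6vm hdc /var/lib/libvirt/images/new-media.iso --update --live

# Eject whatever media is currently attached to the same device.
virsh change-media rhel6vm hdc --eject --live
```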
[ "change-media domain path source --eject --insert --update --current --live --config --force" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-changing_the_media_of_a_cdrom
3.3. Configuring Fencing for Cluster Members
3.3. Configuring Fencing for Cluster Members Once you have completed the initial steps of creating a cluster and creating fence devices, you need to configure fencing for the cluster nodes. To configure fencing for the nodes after creating a new cluster and configuring the fencing devices for the cluster, follow the steps in this section. Note that you must configure fencing for each node in the cluster. The following sections provide procedures for configuring a single fence device for a node, configuring a node with a backup fence device, and configuring a node with redundant power supplies: Section 3.3.1, "Configuring a Single Fence Device for a Node" Section 3.3.2, "Configuring a Backup Fence Device" Section 3.3.3, "Configuring a Node with Redundant Power" 3.3.1. Configuring a Single Fence Device for a Node Use the following procedure to configure a node with a single fence device. From the cluster-specific page, you can configure fencing for the nodes in the cluster by clicking on Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is also the default page that appears when you click on the cluster name beneath Manage Clusters from the menu on the left side of the luci Homebase page. Click on a node name. Clicking a link for a node causes a page to be displayed for that link showing how that node is configured. The node-specific page displays any services that are currently running on the node, as well as any failover domains of which this node is a member. You can modify an existing failover domain by clicking on its name. On the node-specific page, under Fence Devices , click Add Fence Method . This displays the Add Fence Method to Node dialog box. Enter a Method Name for the fencing method that you are configuring for this node. This is an arbitrary name that will be used by Red Hat High Availability Add-On; it is not the same as the DNS name for the device. Click Submit . This displays the node-specific screen that now displays the method you have just added under Fence Devices . Configure a fence instance for this method by clicking the Add Fence Instance button that appears beneath the fence method. This displays the Add Fence Device (Instance) drop-down menu from which you can select a fence device you have previously configured, as described in Section 3.2.1, "Creating a Fence Device" . Select a fence device for this method. If this fence device requires that you configure node-specific parameters, the display shows the parameters to configure. Note For non-power fence methods (that is, SAN/storage fencing), Unfencing is selected by default on the node-specific parameters display. This ensures that a fenced node's access to storage is not re-enabled until the node has been rebooted. For information on unfencing a node, see the fence_node (8) man page. Click Submit . This returns you to the node-specific screen with the fence method and fence instance displayed.
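Once the fence method and instance are saved, it is worth confirming that fencing actually works before relying on it. The following is a hedged command-line sketch; the node name is a placeholder, and fencing a node normally reboots or powers it off, so run the test only against a non-production member.

```bash
# Review the fencing section that luci wrote for the node in cluster.conf.
grep -A 6 '<fence>' /etc/cluster/cluster.conf

# Check cluster membership before testing.
cman_tool status

# Caution: this really fences (typically reboots) the named node.
# Run it from another cluster member.
fence_node node1.example.com
```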
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-config-member-conga-ca
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) bare metal clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Both internal and external OpenShift Data Foundation clusters are supported on bare metal. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preface-baremetal
Policy APIs
Policy APIs OpenShift Container Platform 4.12 Reference guide for policy APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/policy_apis/index
Appendix A. Fuse Console Configuration Properties
Appendix A. Fuse Console Configuration Properties By default, the Fuse Console configuration is defined in the hawtconfig.json file. You can customize the Fuse Console configuration information, such as title, logo, and login page information. Table A.1, "Fuse Console Configuration Properties" provides a description of the properties and lists whether or not each property requires a value.
Table A.1. Fuse Console Configuration Properties (Section | Property Name | Default Value | Description | Required?)
About | Title | Red Hat Fuse Management Console | The title that shows on the About page of the Fuse Console. | Required
About | productInfo | Empty value | Product information that shows on the About page of the Fuse Console. | Optional
About | additionalInfo | Empty value | Any additional information that shows on the About page of the Fuse Console. | Optional
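As an illustration of how the About properties are expressed in the file, a minimal hawtconfig.json sketch follows. Only the keys listed in Table A.1 are taken from this appendix; the surrounding JSON layout (for example, representing productInfo as a list of name/value pairs) is an assumption and may differ between Fuse Console versions.

```bash
# Hypothetical sketch: write a minimal hawtconfig.json with the About properties
# described in Table A.1. The productInfo structure shown here is assumed.
cat > hawtconfig.json <<'EOF'
{
  "about": {
    "title": "Red Hat Fuse Management Console",
    "productInfo": [
      { "name": "Fuse", "value": "7.13" }
    ],
    "additionalInfo": "Managed by the platform team"
  }
}
EOF
```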
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_openshift/r_fuse-console-configuration
Chapter 3. Logging 6.0
Chapter 3. Logging 6.0 3.1. Release notes 3.1.1. Logging 6.0.5 This release includes RHBA-2025:1986 . 3.1.1.1. CVEs CVE-2020-11023 CVE-2024-9287 CVE-2024-12797 3.1.2. Logging 6.0.4 This release includes RHBA-2025:1228 . 3.1.2.1. New features and enhancements This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. ( LOG-6580 ) 3.1.2.2. Bug fixes Before this update, the Operator used a cached client to fetch the SecurityContextConstraint cluster resource, which could result in an error when the cache is invalid. With this update, the Operator now always retrieves data from the API server instead of using the cache. ( LOG-6130 ) Before this update, the Vector startup script attempted to delete buffer lock files during startup. With this update, the Vector startup script no longer attempts to delete buffer lock files during startup. ( LOG-6348 ) Before this update, a bug in the must-gather script for the cluster-logging-operator prevented the LokiStack from being gathered correctly when it existed. With this update, the LokiStack is gathered correctly. ( LOG-6499 ) Before this update, the collector metrics dashboard could get removed after an Operator upgrade due to a race condition during the change from the old to the new pod deployment. With this update, labels are added to the dashboard ConfigMap to identify the upgraded deployment as the current owner so that it will not be removed. ( LOG-6608 ) Before this update, the logging must-gather did not collect resources such as UIPlugin , ClusterLogForwarder , LogFileMetricExporter and LokiStack CR. With this update, these resources are now collected in their namespace directory instead of the cluster-logging one. ( LOG-6654 ) Before this update, Vector did not retain process information, such as the program name, app-name, procID, and other details, when forwarding journal logs by using the syslog protocol. This could lead to the loss of important information. With this update, the Vector collector now preserves all required process information, and the data format adheres to the specifications of RFC3164 and RFC5424 . ( LOG-6659 ) 3.1.3. Logging 6.0.3 This release includes RHBA-2024:10991 . 3.1.3.1. New features and enhancements With this update, the Loki Operator supports the configuring of the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. ( LOG-6421 ) 3.1.3.2. Bug fixes Before this update, the collector used the default settings to collect audit logs, which did not account for back pressure from output receivers. With this update, the audit log collection is optimized for file handling and log reading. ( LOG-6034 ) Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default , kube , openshift , and namespaces that begin with openshift- or kube- . ( LOG-6204 ) Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. ( LOG-6343 ) Before this update, pipeline validation might enter an infinite loop if a name was a substring of another name. 
With this update, stricter name equality checks prevent the infinite loop. ( LOG-6352 ) Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. ( LOG-6406 ) Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. ( LOG-6441 ) Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. ( LOG-6486 ) Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. ( LOG-6543 ) 3.1.3.3. CVEs CVE-2019-12900 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 3.1.4. Logging 6.0.2 This release includes RHBA-2024:10051 . 3.1.4.1. Bug fixes Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. ( LOG-5325 ) Before this update, the collector would discard audit log messages that exceeded the configured threshold. This modifies the audit configuration thresholds for the maximum line size as well as the number of bytes read during a read cycle. ( LOG-5998 ) Before this update, the Cluster Logging Operator did not watch and reconcile resources associated with an instance of a ClusterLogForwarder like it did in prior releases. This update modifies the operator to watch and reconcile all resources it owns and creates. ( LOG-6264 ) Before this update, log events with an unknown severity level sent to Google Cloud Logging would trigger a warning in the vector collector, which would then default the severity to 'DEFAULT'. With this update, log severity levels are now standardized to match Google Cloud Logging specifications, and audit logs are assigned a severity of 'INFO'. ( LOG-6296 ) Before this update, when infrastructure namespaces were included in application inputs, the log_type was set as application . With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure . ( LOG-6354 ) Before this update, specifying a value for the syslog.enrichment field of the ClusterLogForwarder added namespace_name , container_name , and pod_name to the messages of non-container logs. With this update, only container logs include namespace_name , container_name , and pod_name in their messages when syslog.enrichment is set. ( LOG-6402 ) 3.1.4.2. CVEs CVE-2024-6119 CVE-2024-6232 3.1.5. Logging 6.0.1 This release includes OpenShift Logging Bug Fix Release 6.0.1 . 3.1.5.1. Bug fixes With this update, the default memory limit for the collector has been increased from 1024 Mi to 2024 Mi. However, users should always adjust their resource limits according to their cluster specifications and needs. 
( LOG-6180 ) Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. ( LOG-6151 ) Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. ( LOG-6129 ) Before this update, it was possible to set log_source in the prune filter, which could lead to inconsistent log data. With this update, the configuration is validated before being applied, and any configuration that includes log_source in the prune filter is rejected. ( LOG-6202 ) 3.1.5.2. CVEs CVE-2024-24791 CVE-2024-34155 CVE-2024-34156 CVE-2024-34158 CVE-2024-6104 CVE-2024-6119 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 3.1.6. Logging 6.0.0 This release includes Logging for Red Hat OpenShift Bug Fix Release 6.0.0 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Table 3.1. Upstream component versions logging Version Component Version Operator eventrouter logfilemetricexporter loki lokistack-gateway opa-openshift vector 6.0 0.4 1.1 3.1.0 0.1 0.1 0.37.1 3.1.7. Removal notice With this release, logging no longer supports the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io custom resources. Refer to the product documentation for details on the replacement features. ( LOG-5803 ) With this release, logging no longer manages or deploys log storage (such as Elasticsearch), visualization (such as Kibana), or Fluentd-based log collectors. ( LOG-5368 ) Note In order to continue to use Elasticsearch and Kibana managed by the elasticsearch-operator, the administrator must modify those object's ownerRefs before deleting the ClusterLogging resource. 3.1.8. New features and enhancements This feature introduces a new architecture for logging for Red Hat OpenShift by shifting component responsibilities to their relevant Operators, such as for storage, visualization, and collection. It introduces the ClusterLogForwarder.observability.openshift.io API for log collection and forwarding. Support for the ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io APIs, along with the Red Hat managed Elastic stack (Elasticsearch and Kibana), is removed. Users are encouraged to migrate to the Red Hat LokiStack for log storage. Existing managed Elasticsearch deployments can be used for a limited time. Automated migration for log collection is not provided, so administrators need to create a new ClusterLogForwarder.observability.openshift.io specification to replace their custom resources. Refer to the official product documentation for more details. ( LOG-3493 ) With this release, the responsibility for deploying the logging view plugin shifts from the Red Hat OpenShift Logging Operator to the Cluster Observability Operator (COO). For new log storage installations that need visualization, the Cluster Observability Operator and the associated UIPlugin resource must be deployed. 
Refer to the Cluster Observability Operator Overview product documentation for more details. ( LOG-5461 ) This enhancement sets default requests and limits for Vector collector deployments' memory and CPU usage based on Vector documentation recommendations. ( LOG-4745 ) This enhancement updates Vector to align with the upstream version v0.37.1. ( LOG-5296 ) This enhancement introduces an alert that triggers when log collectors buffer logs to a node's file system and use over 15% of the available space, indicating potential back pressure issues. ( LOG-5381 ) This enhancement updates the selectors for all components to use common Kubernetes labels. ( LOG-5906 ) This enhancement changes the collector configuration to deploy as a ConfigMap instead of a secret, allowing users to view and edit the configuration when the ClusterLogForwarder is set to Unmanaged. ( LOG-5599 ) This enhancement adds the ability to configure the Vector collector log level using an annotation on the ClusterLogForwarder, with options including trace, debug, info, warn, error, or off. ( LOG-5372 ) This enhancement adds validation to reject configurations where Amazon CloudWatch outputs use multiple AWS roles, preventing incorrect log routing. ( LOG-5640 ) This enhancement removes the Log Bytes Collected and Log Bytes Sent graphs from the metrics dashboard. ( LOG-5964 ) This enhancement updates the must-gather functionality to only capture information for inspecting Logging 6.0 components, including Vector deployments from ClusterLogForwarder.observability.openshift.io resources and the Red Hat managed LokiStack. ( LOG-5949 ) This enhancement improves Azure storage secret validation by providing early warnings for specific error conditions. ( LOG-4571 ) This enhancement updates the ClusterLogForwarder API to follow the Kubernetes standards. ( LOG-5977 ) Example of a new configuration in the ClusterLogForwarder custom resource for the updated API apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce 3.1.9. Technology Preview features This release introduces a Technology Preview feature for log forwarding using OpenTelemetry. A new output type,` OTLP`, allows sending JSON-encoded log records using the OpenTelemetry data model and resource semantic conventions. ( LOG-4225 ) 3.1.10. Bug fixes Before this update, the CollectorHighErrorRate and CollectorVeryHighErrorRate alerts were still present. With this update, both alerts are removed in the logging 6.0 release but might return in a future release. ( LOG-3432 ) 3.1.11. CVEs CVE-2024-34397 3.2. Logging 6.0 The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. 3.2.1. Inputs and Outputs Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: application receiver infrastructure audit You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 3.2.2. Receiver Input Type The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog . 
The ReceiverSpec field defines the configuration for a receiver input. 3.2.3. Pipelines and Filters Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. 3.2.4. Operator Behavior The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field: When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. When set to Unmanaged , the Operator does not take any action, allowing you to manually manage the logging components. 3.2.5. Validation Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. 3.2.6. Quick Start Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example, AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Procedure Install the Red Hat OpenShift Logging Operator , Loki Operator , and Cluster Observability Operator (COO) from OperatorHub. Create a secret to access an existing object storage bucket: Example command for AWS USD oc create secret generic logging-loki-s3 \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" \ -n openshift-logging Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Create a service account for the collector: USD oc create sa collector -n openshift-logging Bind the ClusterRole to the service account: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging Create a UIPlugin to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Add additional roles to the collector service account: USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: 
default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack Verification Verify that logs are visible in the Log section of the Observe tab in the OpenShift Container Platform web console. 3.3. Installing Logging OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. To get started with logging, you must install the following Operators: Loki Operator to manage your log store. Red Hat OpenShift Logging Operator to manage log collection and forwarding. Cluster Observability Operator (COO) to manage visualization. You can use either the OpenShift Container Platform web console or the OpenShift Container Platform CLI to install or configure logging. Important You must configure the Red Hat OpenShift Logging Operator after the Loki Operator. 3.3.1. Installation by using the CLI The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI. 3.3.1.1. Installing the Loki Operator by using the CLI Install Loki Operator on your OpenShift Container Platform cluster to manage the log store Loki by using the OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Procedure Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, causing conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object. Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: upgradeStrategy: Default 1 You must specify openshift-operators-redhat as the namespace. 
Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace 1 You must specify openshift-operators-redhat as the namespace. 2 Specify stable-6.<y> as the channel. 3 If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. 4 Specify redhat-operators as the value. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a namespace object for deploy the LokiStack: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: "true" 2 1 The openshift-logging namespace is dedicated for all logging workloads. 2 A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. Apply the namespace object by running the following command: USD oc apply -f <filename>.yaml Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) s3. Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging stringData: 2 access_key_id: <access_key_id> access_key_secret: <access_secret> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 1 Use the name logging-loki-s3 to match the name used in LokiStack. 2 For the contents of the secret see the Loki object storage section. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Apply the Secret object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" 4 secret: name: logging-loki-s3 5 type: s3 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify openshift-logging as the namespace. 3 Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Additionally, 1x.pico is supported starting with logging 6.1. 4 For new installations this date should be set to the equivalent of "yesterday", as this will be the date from when the schema takes effect. 5 Specify the name of your log store secret. 6 Specify the corresponding storage type. 7 Specify the name of a storage class for temporary storage. 
For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. 8 The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Verification Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 3.3.1.2. Installing Red Hat OpenShift Logging Operator by using the CLI Install Red Hat OpenShift Logging Operator on your OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You installed and configured Loki Operator. You have created the openshift-logging namespace. Procedure Create an OperatorGroup object: Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: upgradeStrategy: Default 1 You must specify openshift-logging as the namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Red Hat OpenShift Logging Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: cluster-logging source: redhat-operators 4 sourceNamespace: openshift-marketplace 1 You must specify openshift-logging as the namespace. 2 Specify stable-6.<y> as the channel. 3 If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. 4 Specify redhat-operators as the value. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a service account to be used by the log collector: USD oc create sa logging-collector -n openshift-logging Assign the necessary permissions to the service account for the collector to be able to collect and forward logs. In this example, the collector is provided permissions to collect logs from both infrastructure and application logs. 
USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging Create a ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out 1 You must specify the openshift-logging namespace. 2 Specify the name of the service account created before. 3 Select the lokiStack output type to send logs to the LokiStack instance. 4 Point the ClusterLogForwarder to the LokiStack instance created earlier. 5 Select the log output types you want to send to the LokiStack instance. Apply the ClusterLogForwarder CR object by running the following command: USD oc apply -f <filename>.yaml Verification Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m instance-222js 2/2 Running 0 18m instance-g9ddv 2/2 Running 0 18m instance-hfqq8 2/2 Running 0 18m instance-sphwg 2/2 Running 0 18m instance-vv7zn 2/2 Running 0 18m instance-wk5zz 2/2 Running 0 18m logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 3.3.2. Installation by using the web console The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console. 3.3.2.1. Installing Logging by using the web console Install Loki Operator on your OpenShift Container Platform cluster to manage the log store Loki from the OperatorHub by using the OpenShift Container Platform web console. You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable-x.y as the Update channel . The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. 
If this namespace does not already exist, it will be created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Note An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page. While the Operator installs, create the namespace to which the log store will be deployed. Click + in the top right of the screen to access the Import YAML page. Add the YAML definition for the openshift-logging namespace: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: "true" 2 1 The openshift-logging namespace is dedicated for all logging workloads. 2 A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. Click Create . Create a secret with the credentials to access the object storage. Click + in the top right of the screen to access the Import YAML page. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) s3: Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging 2 stringData: 3 access_key_id: <access_key_id> access_key_secret: <access_key> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 1 Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. 2 Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack . 3 For the contents of the secret see the Loki object storage section. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Click Create . Navigate to the Installed Operators page. Select the Loki Operator under the Provided APIs find the LokiStack resource and click Create Instance . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 7 1 Use the name logging-loki . 2 You must specify openshift-logging as the namespace. 3 Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Additionally, 1x.pico is supported starting with logging 6.1. 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. 
You can list the available storage classes for your cluster by using the oc get storageclasses command. 7 The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. Click Create . Verification In the LokiStack tab veriy that you see your LokiStack instance. In the Status column, verify that you see the message Condition: Ready with a green checkmark. 3.3.2.2. Installing Red Hat OpenShift Logging Operator by using the web console Install Red Hat OpenShift Logging Operator on your OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed and configured Loki Operator. Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install . Select stable-x.y as the Update channel . The latest version is already selected in the Version field. The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Note An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page. While the operator installs, create the service account that will be used by the log collector to collect the logs. Click the + in the top right of the screen to access the Import YAML page. Enter the YAML definition for the service account. Example ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: name: logging-collector 1 namespace: openshift-logging 2 1 Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. 2 Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. Click the Create button. Create the ClusterRoleBinding objects to grant the necessary permissions to the log collector for accessing the logs that you want to collect and to write the log store, for example infrastructure and application logs. Click the + in the top right of the screen to access the Import YAML page. Enter the YAML definition for the ClusterRoleBinding resources. 
Example ClusterRoleBinding resources apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:write-logs roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: logging-collector-logs-writer 1 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-application roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-application-logs 2 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-infrastructure roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs 3 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging 1 The cluster role to allow the log collector to write logs to LokiStack. 2 The cluster role to allow the log collector to collect logs from applications. 3 The cluster role to allow the log collector to collect logs from infrastructure. Click the Create button. Go to the Operators Installed Operators page. Select the operator and click the All instances tab. After granting the necessary permissions to the service account, navigate to the Installed Operators page. Select the Red Hat OpenShift Logging Operator under the Provided APIs , find the ClusterLogForwarder resource and click Create Instance . Select YAML view , and then use the following template to create a ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out 1 You must specify openshift-logging as the namespace. 2 Specify the name of the service account created earlier. 3 Select the lokiStack output type to send logs to the LokiStack instance. 4 Point the ClusterLogForwarder to the LokiStack instance created earlier. 5 Select the log output types you want to send to the LokiStack instance. Click Create . Verification In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance. In the Status column, verify that you see the messages: Condition: observability.openshift.io/Authorized observability.openshift.io/Valid, Ready Additional resources About OVN-Kubernetes network policy 3.4. Upgrading to Logging 6.0 Logging v6.0 is a significant upgrade from releases, achieving several longstanding goals of Cluster Logging: Introduction of distinct operators to manage logging components (e.g., collectors, storage, visualization). Removal of support for managed log storage and visualization based on Elastic products (i.e., Elasticsearch, Kibana). Deprecation of the Fluentd log collector implementation. Removal of support for ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io resources. Note The cluster-logging-operator does not provide an automated upgrade process. 
Given the various configurations for log collection, forwarding, and storage, no automated upgrade is provided by the cluster-logging-operator . This documentation assists administrators in converting existing ClusterLogging.logging.openshift.io and ClusterLogForwarder.logging.openshift.io specifications to the new API. Examples of migrated ClusterLogForwarder.observability.openshift.io resources for common use cases are included. 3.4.1. Using the oc explain command The oc explain command is an essential tool in the OpenShift CLI oc that provides detailed descriptions of the fields within Custom Resources (CRs). This command is invaluable for administrators and developers who are configuring or troubleshooting resources in an OpenShift cluster. 3.4.1.1. Resource Descriptions oc explain offers in-depth explanations of all fields associated with a specific object. This includes standard resources like pods and services, as well as more complex entities like statefulsets and custom resources defined by Operators. To view the documentation for the outputs field of the ClusterLogForwarder custom resource, you can use: USD oc explain clusterlogforwarders.observability.openshift.io.spec.outputs Note In place of clusterlogforwarder the short form obsclf can be used. This will display detailed information about these fields, including their types, default values, and any associated sub-fields. 3.4.1.2. Hierarchical Structure The command displays the structure of resource fields in a hierarchical format, clarifying the relationships between different configuration options. For instance, here's how you can drill down into the storage configuration for a LokiStack custom resource: USD oc explain lokistacks.loki.grafana.com USD oc explain lokistacks.loki.grafana.com.spec USD oc explain lokistacks.loki.grafana.com.spec.storage USD oc explain lokistacks.loki.grafana.com.spec.storage.schemas Each command reveals a deeper level of the resource specification, making the structure clear. 3.4.1.3. Type Information oc explain also indicates the type of each field (such as string, integer, or boolean), allowing you to verify that resource definitions use the correct data types. For example: USD oc explain lokistacks.loki.grafana.com.spec.size This will show that size should be defined using an integer value. 3.4.1.4. Default Values When applicable, the command shows the default values for fields, providing insights into what values will be used if none are explicitly specified. Again using lokistacks.loki.grafana.com as an example: USD oc explain lokistacks.spec.template.distributor.replicas Example output GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component. 3.4.2. Log Storage The only managed log storage solution available in this release is a Lokistack, managed by the loki-operator . This solution, previously available as the preferred alternative to the managed Elasticsearch offering, remains unchanged in its deployment process. Important To continue using an existing Red Hat managed Elasticsearch or Kibana deployment provided by the elasticsearch-operator , remove the owner references from the Elasticsearch resource named elasticsearch , and the Kibana resource named kibana in the openshift-logging namespace before removing the ClusterLogging resource named instance in the same namespace. 
Temporarily set ClusterLogging to state Unmanaged USD oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Unmanaged"}}' --type=merge Remove ClusterLogging ownerReferences from the Elasticsearch resource The following command ensures that ClusterLogging no longer owns the Elasticsearch resource. Updates to the ClusterLogging resource's logStore field will no longer affect the Elasticsearch resource. USD oc -n openshift-logging patch elasticsearch/elasticsearch -p '{"metadata":{"ownerReferences": []}}' --type=merge Remove ClusterLogging ownerReferences from the Kibana resource The following command ensures that ClusterLogging no longer owns the Kibana resource. Updates to the ClusterLogging resource's visualization field will no longer affect the Kibana resource. USD oc -n openshift-logging patch kibana/kibana -p '{"metadata":{"ownerReferences": []}}' --type=merge Set ClusterLogging to state Managed USD oc -n openshift-logging patch clusterlogging/instance -p '{"spec":{"managementState": "Managed"}}' --type=merge 3.4.3. Log Visualization The OpenShift console UI plugin for log visualization has been moved to the cluster-observability-operator from the cluster-logging-operator . 3.4.4. Log Collection and Forwarding Log collection and forwarding configurations are now specified under the new API , part of the observability.openshift.io API group. The following sections highlight the differences from the old API resources. Note Vector is the only supported collector implementation. 3.4.5. Management, Resource Allocation, and Workload Scheduling Configuration for management state (e.g., Managed, Unmanaged), resource requests and limits, tolerations, and node selection is now part of the new ClusterLogForwarder API. Previous Configuration apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" spec: managementState: "Managed" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {} Current Configuration apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {} 3.4.6. Input Specifications The input specification is an optional part of the ClusterLogForwarder specification. Administrators can continue to use the predefined values of application , infrastructure , and audit to collect these sources. 3.4.6.1. Application Inputs Namespace and container inclusions and exclusions have been consolidated into a single field. 5.9 Application Input with Namespace and Container Includes and Excludes apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose 6.0 Application Input with Namespace and Container Includes and Excludes apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose Note application , infrastructure , and audit are reserved words and cannot be used as names when defining an input. 3.4.6.2. Input Receivers Changes to input receivers include: Explicit configuration of the type at the receiver level. Port settings moved to the receiver level. 
5.9 Input Receivers apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442 6.0 Input Receivers apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442 3.4.7. Output Specifications High-level changes to output specifications include: URL settings moved to each output type specification. Tuning parameters moved to each output type specification. Separation of TLS configuration from authentication. Explicit configuration of keys and secret/configmap for TLS and authentication. 3.4.8. Secrets and TLS Configuration Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification rather than relying on administrators to define secrets with recognized keys. Upgrading TLS and authorization configurations requires administrators to understand previously recognized keys to continue using existing secrets. Examples in the following sections provide details on how to configure ClusterLogForwarder secrets to forward to existing Red Hat managed log storage solutions. 3.4.9. Red Hat Managed Elasticsearch v5.9 Forwarding to Red Hat Managed Elasticsearch apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch v6.0 Forwarding to Red Hat Managed Elasticsearch apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch 3.4.10. 
Red Hat Managed LokiStack v5.9 Forwarding to Red Hat Managed LokiStack apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev v6.0 Forwarding to Red Hat Managed LokiStack apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure 3.4.11. Filters and Pipeline Configuration Pipeline configurations now define only the routing of input sources to their output destinations, with any required transformations configured separately as filters. All attributes of pipelines from previous releases have been converted to filters in this release. Individual filters are defined in the filters specification and referenced by a pipeline. 5.9 Filters apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true 6.0 Filter Configuration apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json 3.4.12. Validation and Status Most validations are enforced when a resource is created or updated, providing immediate feedback. This is a departure from previous releases, where validation occurred post-creation and required inspecting the resource status. Some validation still occurs post-creation for cases where it is not possible to validate at creation or update time. Instances of the ClusterLogForwarder.observability.openshift.io must satisfy the following conditions before the operator will deploy the log collector: Authorized, Valid, Ready. 
An example of these conditions is: 6.0 Status Conditions apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: "2024-09-13T03:28:44Z" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: "True" type: observability.openshift.io/Authorized - lastTransitionTime: "2024-09-13T12:16:45Z" message: "" reason: ValidationSuccess status: "True" type: observability.openshift.io/Valid - lastTransitionTime: "2024-09-13T12:16:45Z" message: "" reason: ReconciliationComplete status: "True" type: Ready filterConditions: - lastTransitionTime: "2024-09-13T13:02:59Z" message: filter "detectexception" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: "2024-09-13T13:02:59Z" message: filter "parse-json" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - lastTransitionTime: "2024-09-13T12:23:03Z" message: input "application1" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: "2024-09-13T13:02:59Z" message: output "default-lokistack-application1" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: "2024-09-13T03:28:44Z" message: pipeline "default-before" is valid reason: ValidationSuccess status: "True" type: observability.openshift.io/ValidPipeline-default-before Note Conditions that are satisfied and applicable have a "status" value of "True". Conditions with a status other than "True" provide a reason and a message explaining the issue. 3.5. Configuring log forwarding The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. Key Functions of the ClusterLogForwarder Selects log messages using inputs Forwards logs to external destinations using outputs Filters, transforms, and drops log messages using filters Defines log forwarding pipelines connecting inputs, filters and outputs 3.5.1. Setting up log collection This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder . This was not required in previous releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. The Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. Set up log collection by binding the required cluster roles to your service account. 3.5.1.1. 
Legacy service accounts To use the existing legacy service account logcollector , create the following ClusterRoleBinding : USD oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector Additionally, create the following ClusterRoleBinding if collecting audit logs: USD oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector 3.5.1.2. Creating service accounts Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> 3.5.1.2.1. Cluster Role Binding for your Service Account The role_binding.yaml file binds the ClusterLogging operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8 1 roleRef: References the ClusterRole to which the binding applies. 2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. 4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. 5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. 6 kind: Specifies that the subject is a ServiceAccount. 7 Name: The name of the ServiceAccount being granted the permissions. 8 namespace: Indicates the namespace where the ServiceAccount is located. 3.5.1.2.2. Writing application logs The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions granted by this ClusterRole. 2 apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. 3 loki.grafana.com: The API group for managing Loki-related resources. 4 resources: The resource type that the ClusterRole grants permission to interact with. 5 application: Refers to the application resources within the Loki logging system. 6 resourceNames: Specifies the names of resources that this role can manage. 7 logs: Refers to the log resources that can be created. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new logs in the Loki system. 3.5.1.2.3. 
Writing audit logs The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Defines the permissions granted by this ClusterRole. 2 apiGroups: Specifies the API group loki.grafana.com. 3 loki.grafana.com: The API group responsible for Loki logging resources. 4 resources: Refers to the resource type this role manages, in this case, audit. 5 audit: Specifies that the role manages audit logs within Loki. 6 resourceNames: Defines the specific resources that the role can access. 7 logs: Refers to the logs that can be managed under this role. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new audit logs. 3.5.1.2.4. Writing infrastructure logs The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. Sample YAML apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Specifies the API group for Loki-related resources. 3 loki.grafana.com: The API group managing the Loki logging system. 4 resources: Defines the resource type that this role can interact with. 5 infrastructure: Refers to infrastructure-related resources that this role manages. 6 resourceNames: Specifies the names of resources this role can manage. 7 logs: Refers to the log resources related to infrastructure. 8 verbs: The actions permitted by this role. 9 create: Grants permission to create infrastructure logs in the Loki system. 3.5.1.2.5. ClusterLogForwarder editor role The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Refers to the OpenShift-specific API group. 3 observability.openshift.io: The API group for managing observability resources, like logging. 4 resources: Specifies the resources this role can manage. 5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift. 6 verbs: Specifies the actions allowed on the ClusterLogForwarders. 7 create: Grants permission to create new ClusterLogForwarders. 8 delete: Grants permission to delete existing ClusterLogForwarders. 9 get: Grants permission to retrieve information about specific ClusterLogForwarders. 10 list: Allows listing all ClusterLogForwarders. 11 patch: Grants permission to partially modify ClusterLogForwarders. 12 update: Grants permission to update existing ClusterLogForwarders. 13 watch: Grants permission to monitor changes to ClusterLogForwarders. 3.5.2. 
Modifying log level in collector To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace , debug , info , warn , error , or off . Example log level annotation apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug # ... 3.5.3. Managing the Operator The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: Managed (default) The operator will drive the logging resources to match the desired state in the CLF spec. Unmanaged The operator will not take any action related to the logging components. This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged . 3.5.4. Structure of the ClusterLogForwarder The CLF has a spec section that contains the following key components: Inputs Select log messages to be forwarded. Built-in input types application , infrastructure , and audit forward logs from different parts of the cluster. You can also define custom inputs. Outputs Define destinations to forward logs to. Each output has a unique name and type-specific configuration. Pipelines Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. Filters Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. 3.5.4.1. Inputs Inputs are configured in an array under spec.inputs . There are three built-in input types: application Selects logs from all application containers, excluding those in infrastructure namespaces. infrastructure Selects logs from nodes and from infrastructure components running in the default , kube , and openshift namespaces, and in namespaces containing the kube- or openshift- prefix. audit Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. 3.5.4.2. Outputs Outputs are configured in an array under spec.outputs . Each output must have a unique name and a type. Supported types are: azureMonitor Forwards logs to Azure Monitor. cloudwatch Forwards logs to AWS CloudWatch. googleCloudLogging Forwards logs to Google Cloud Logging. http Forwards logs to a generic HTTP endpoint. kafka Forwards logs to a Kafka broker. loki Forwards logs to a Loki logging backend. lokistack Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. otlp Forwards logs using the OpenTelemetry Protocol. splunk Forwards logs to Splunk. syslog Forwards logs to an external syslog server. Each output type has its own configuration fields. 3.5.4.3. Pipelines Pipelines are configured in an array under spec.pipelines . Each pipeline must have a unique name and consists of: inputRefs Names of inputs whose logs should be forwarded to this pipeline. outputRefs Names of outputs to send logs to. filterRefs (optional) Names of filters to apply. The order of filterRefs matters, as they are applied sequentially. 
Earlier filters can drop messages that will not be processed by later filters. 3.5.4.4. Filters Filters are configured in an array under spec.filters . They can match incoming log messages based on the value of structured fields and modify or drop them. Administrators can configure the following types of filters: 3.5.4.5. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10) To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters . Example ClusterLogForwarder CR apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name> 3.5.4.5.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. The collector supports the following languages: Java JS Ruby Python Golang PHP Dart 3.5.4.6. Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 
4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: "^open" - test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 3.5.4.7. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication . 
Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , and watch are dropped. Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. Note You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. Example audit policy apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. # The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. 
# A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. 3.5.4.8. Filtering application logs at input by including the label expressions or a matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 type: application # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key or value mapping. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.5.4.9. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. 
Note The prune filter exempts the .log_type , .log_source , and .message fields. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.5.5. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.5.6. Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 type: application # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of containers to ignore when collecting the logs. 4 Specifies the set of namespaces to ignore when collecting the logs. Note The excludes field takes precedence over the includes field. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.6. Storing logs with LokiStack You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. Important For long-term storage or queries over a long time period, users should look to log stores external to their cluster. 
Loki sizing is only tested and supported for short-term storage, for a maximum of 30 days. 3.6.1. Prerequisites You have installed the Loki Operator by using the CLI or web console. You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder . The serviceAccount is assigned collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles. 3.6.2. Core Setup and Configuration Role-based access controls, basic monitoring, and pod placement to deploy Loki. 3.6.3. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. The following cluster roles for alerting and recording rules are available for LokiStack: Rule name Description alertingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. alertingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. alertingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete AlertingRule resources. alertingrules.loki.grafana.com-v1-view Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. recordingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. recordingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. recordingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete RecordingRule resources. recordingrules.loki.grafana.com-v1-view Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing recording rules but cannot make any modifications to them. 3.6.3.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username. Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. 
The following example command gives the specified user create, read, update, and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace USD oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions USD oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username> 3.6.4. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid. Table 3.2. AlertingRule definitions Tenant type Valid namespaces for AlertingRule CRs application <your_application_namespace> audit openshift-logging infrastructure openshift-* , kube-* , default Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory. 
Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: USD oc apply -f <filename>.yaml 3.6.5. Configuring Loki to tolerate memberlist creation failure In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 3.6.6. Enabling stream-based retention with Loki You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Schema v13 is recommended. Procedure Create a LokiStack CR: Enable stream-based retention globally as shown in the following example: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 
3 Contains the LogQL query used to define the log stream. Enable stream-based retention on a per-tenant basis as shown in the following example: Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml 3.6.7. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... 
template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 3.6.7.1. Enhanced Reliability and Performance Configurations to ensure Loki's reliability and efficiency in production. 3.6.7.2. Enabling authentication to cloud-based log stores using short-lived tokens Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Procedure Use one of the following options to enable authentication: If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. 
This authentication strategy is only supported for the storage providers indicated. Example Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> Example AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 3.6.7.3. Configuring Loki to tolerate node failure The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. 3.6.7.4. LokiStack behavior during cluster restarts When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. 3.6.7.5. Advanced Deployment and Scalability Specialized configurations for high availability, scalability, and error handling. 3.6.7.6. Zone aware data replication The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small , 1x.small , or 1x.medium , the replication.factor field is automatically set to 2. 
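Before enabling zone-aware replication, you can verify that your worker nodes actually carry a zone label. This verification step is an addition to the documented procedure and uses only the standard topology label: USD oc get nodes -L topology.kubernetes.io/zone Nodes that do not show a value for this label are not assigned to an availability zone and cannot be used for zone-aware pod placement.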
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 3.6.7.7. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: USD oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. 
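Optionally, before deleting any resources, you can confirm why a pod is stuck by describing it. This check is not part of the documented recovery steps, and the pod name is taken from the example output above: USD oc describe pod logging-loki-ingester-1 -n openshift-logging The Events section typically reports a volume node affinity conflict, which indicates that the pod cannot be scheduled into another zone while its PVC still exists in the failed zone.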
List the PVCs in Pending status by running the following command: USD oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: USD oc delete pvc <pvc_name> -n openshift-logging Delete the pod(s) by running the following command: USD oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 3.6.7.7.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. USD oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging 3.6.7.8. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true The error is also visible on the receiving end. 
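To inspect the receiving side, you can view the logs of a LokiStack ingester pod. The pod name shown here matches the example outputs used elsewhere in this document and might differ in your cluster: USD oc logs logging-loki-ingester-0 -n openshift-logging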
For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 3.7. Visualization for logging Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator , which requires Operator installation. Important Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
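After the Cluster Observability Operator is installed, the Logging UI Plugin is enabled by creating a UIPlugin resource. The following minimal sketch matches the UIPlugin example used in this document and assumes the default LokiStack name logging-loki: Example UIPlugin resource apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki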
[ "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce", "oc create secret generic logging-loki-s3 --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\" -n openshift-logging", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging", "oc create sa collector -n openshift-logging", "oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging", "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki", "oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: upgradeStrategy: Default", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging stringData: 2 access_key_id: <access_key_id> access_key_secret: <access_secret> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "oc apply -f <filename>.yaml", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" 4 secret: name: logging-loki-s3 5 type: s3 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8", "oc apply -f <filename>.yaml", "oc get pods -n openshift-logging", "oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE logging-loki-compactor-0 
1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: upgradeStrategy: Default", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: cluster-logging source: redhat-operators 4 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "oc create sa logging-collector -n openshift-logging", "oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out", "oc apply -f <filename>.yaml", "oc get pods -n openshift-logging", "oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m instance-222js 2/2 Running 0 18m instance-g9ddv 2/2 Running 0 18m instance-hfqq8 2/2 Running 0 18m instance-sphwg 2/2 Running 0 18m instance-vv7zn 2/2 Running 0 18m instance-wk5zz 2/2 Running 0 18m logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging 2 stringData: 3 access_key_id: <access_key_id> access_key_secret: <access_key> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 7", "apiVersion: v1 kind: ServiceAccount metadata: name: logging-collector 1 namespace: openshift-logging 2", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:write-logs roleRef: 
apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: logging-collector-logs-writer 1 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-application roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-application-logs 2 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-infrastructure roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs 3 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out", "oc explain clusterlogforwarders.observability.openshift.io.spec.outputs", "oc explain lokistacks.loki.grafana.com oc explain lokistacks.loki.grafana.com.spec oc explain lokistacks.loki.grafana.com.spec.storage oc explain lokistacks.loki.grafana.com.spec.storage.schemas", "oc explain lokistacks.loki.grafana.com.spec.size", "oc explain lokistacks.spec.template.distributor.replicas", "GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component.", "oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Unmanaged\"}}' --type=merge", "oc -n openshift-logging patch elasticsearch/elasticsearch -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge", "oc -n openshift-logging patch kibana/kibana -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge", "oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Managed\"}}' --type=merge", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}", "apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose", "apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442", "apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http 
port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: \"True\" type: observability.openshift.io/Authorized - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ValidationSuccess status: \"True\" type: observability.openshift.io/Valid - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ReconciliationComplete status: \"True\" type: Ready filterConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"detectexception\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"parse-json\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - 
lastTransitionTime: \"2024-09-13T12:23:03Z\" message: input \"application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: output \"default-lokistack-application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: pipeline \"default-before\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidPipeline-default-before", "oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector", "oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector", "oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector", "oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug", "java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)", "apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]", "oc 
apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. 
- level: Metadata", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application", "oc apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]", "oc apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn", "oc apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application", "oc apply -f <filename>.yaml", "oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>", "oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>", "apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7", "apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6", "oc apply -f <filename>.yaml", "oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: 
'{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging", "oc apply -f <filename>.yaml", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc explain lokistack.spec.template", "KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the 
resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.", "oc explain lokistack.spec.template.compactor", "KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4", "oc get pods --field-selector status.phase==Pending -n openshift-logging", "NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m", "oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r", "storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1", "oc delete pvc <pvc_name> -n openshift-logging", "oc delete pod <pod_name> -n openshift-logging", "oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded", "2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. 
error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true", "level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/logging/logging-6-0