Chapter 4. Debug support for Camel
Chapter 4. Debug support for Camel Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel. 4.1. About Debug Adapter for Apache Camel routes The VS Code Debug Adapter is a Visual Studio Code extension that you can use to debug running Camel routes written in Java, YAML, or XML DSL. 4.1.1. Features of Debug Adapter The VS Code Debug Adapter for Apache Camel extension supports the following features: Camel Main mode for XML only. Use of the Camel debugger by attaching it to a running Camel route written in Java, YAML, or XML using the JMX URL. Local use of the Camel debugger by attaching it to a running Camel route written in Java, YAML, or XML using the PID. You can use it for a single Camel context. Adding or removing breakpoints. Conditional breakpoints with the Simple language. Inspecting variable values on suspended breakpoints. Resuming a single route instance and resuming all route instances. Stepping when the route definition is in the same file. Updating variables in the Debugger scope, in the message body, in a message header of type String, and in an exchange property of type String. Support for the command Run Camel Application with JBang and Debug. This command allows a one-click start and Camel debug in simple cases. This command is available through: the Command Palette (it requires a valid Camel file opened in the current editor); the contextual menu in the File explorer (it is visible for all *.xml, *.java, *.yaml, and *.yml files); a codelens at the top of a Camel file (the heuristic for the codelens checks that there is a from and a to or a log in java, xml, and yaml files). Support for the command Run Camel application with JBang. It requires a valid Camel file defined in YAML DSL (.yaml|.yml) opened in the editor. Configuration snippets for the Camel debugger launch configuration. Configuration snippets to launch a Camel application ready to accept a Camel debugger connection using JBang, or Maven with the Camel Maven plugin. 4.1.2. Requirements Consider the following points when using the VS Code Debug Adapter for Apache Camel extension: Prerequisites Java Runtime Environment 17 or later, with com.sun.tools.attach.VirtualMachine installed. The Camel instance: Camel version 3.16 or later, camel-debug in the classpath, and JMX enabled. Note For some features, JBang must be available on the system command line. 4.1.3. Installing VS Code Debug Adapter for Apache Camel You can download the VS Code Debug Adapter for Apache Camel extension from the VS Code Extension Marketplace and the Open VSX Registry. You can also install the Debug Adapter for Apache Camel extension directly in Microsoft VS Code. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions. In the search bar, type Camel Debug. Select the Debug Adapter for Apache Camel option from the search results and then click Install. This installs the Debug Adapter for Apache Camel in the VS Code editor. 4.1.4. Using Debug Adapter You can debug your Camel application with the debug adapter. Procedure Ensure that the jbang binary is available on the system command line. Open a Camel route that can be started with the Camel CLI. Open the Command Palette using Ctrl + Shift + P, and select the Run Camel Application with JBang and Debug command, or click the codelens Camel Debug with JBang that appears at the top of the file.
Wait until the route is started and the debugger is connected. Put a breakpoint on the Camel route. Debug. Additional resources Debug Adapter for Apache Camel by Red Hat
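For reference, a route file that the Camel Debug with JBang codelens recognizes can be as small as the following YAML DSL sketch. The endpoint URI and message text are illustrative placeholders rather than values taken from this guide; any route containing a from and a to or a log is detected the same way.

    - from:
        uri: "timer:debug-demo?period=5000"
        steps:
          - setBody:
              constant: "Hello from the Camel debugger"
          - log: "${body}"

Saving this as, for example, hello.yaml and running the Run Camel Application with JBang and Debug command starts the route under the debugger, after which breakpoints can be set and inspected as described above.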
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/tooling_guide_for_red_hat_build_of_apache_camel/camel-tooling-guide-debug
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/openstack_integration_test_suite_guide/making-open-source-more-inclusive
24.6.5. Extending Net-SNMP
24.6.5. Extending Net-SNMP The Net-SNMP Agent can be extended to provide application metrics in addition to raw system metrics. This allows for capacity planning as well as performance issue troubleshooting. For example, it may be helpful to know that an email system had a 5-minute load average of 15 while being tested, but it is more helpful to know that the email system has a load average of 15 while processing 80,000 messages a second. When application metrics are available via the same interface as the system metrics, this also allows for the visualization of the impact of different load scenarios on system performance (for example, each additional 10,000 messages increases the load average linearly until 100,000). A number of the applications that ship with Red Hat Enterprise Linux extend the Net-SNMP Agent to provide application metrics over SNMP. There are several ways to extend the agent for custom applications as well. This section describes extending the agent with shell scripts and Perl plug-ins. It assumes that the net-snmp-utils and net-snmp-perl packages are installed, and that the user is granted access to the SNMP tree as described in Section 24.6.3.2, "Configuring Authentication" . 24.6.5.1. Extending Net-SNMP with Shell Scripts The Net-SNMP Agent provides an extension MIB ( NET-SNMP-EXTEND-MIB ) that can be used to query arbitrary shell scripts. To specify the shell script to run, use the extend directive in the /etc/snmp/snmpd.conf file. Once defined, the Agent will provide the exit code and any output of the command over SNMP. The example below demonstrates this mechanism with a script which determines the number of httpd processes in the process table. Note The Net-SNMP Agent also provides a built-in mechanism for checking the process table via the proc directive. See the snmpd.conf (5) manual page for more information. The exit code of the following shell script is the number of httpd processes running on the system at a given point in time: #!/bin/sh NUMPIDS=`pgrep httpd | wc -l` exit $NUMPIDS To make this script available over SNMP, copy the script to a location on the system path, set the executable bit, and add an extend directive to the /etc/snmp/snmpd.conf file. The format of the extend directive is the following: extend name prog args where name is an identifying string for the extension, prog is the program to run, and args are the arguments to give the program. For instance, if the above shell script is copied to /usr/local/bin/check_apache.sh , the following directive will add the script to the SNMP tree: The script can then be queried at NET-SNMP-EXTEND-MIB::nsExtendObjects : Note that the exit code ("8" in this example) is provided as an INTEGER type and any output is provided as a STRING type. To expose multiple metrics as integers, supply different arguments to the script using the extend directive. For example, the following shell script can be used to determine the number of processes matching an arbitrary string, and will also output a text string giving the number of processes: #!/bin/sh PATTERN=$1 NUMPIDS=`pgrep $PATTERN | wc -l` echo "There are $NUMPIDS $PATTERN processes." exit $NUMPIDS The following /etc/snmp/snmpd.conf directives will give both the number of httpd PIDs as well as the number of snmpd PIDs when the above script is copied to /usr/local/bin/check_proc.sh : The following example shows the output of an snmpwalk of the nsExtendObjects OID: Warning Integer exit codes are limited to a range of 0-255.
For values that are likely to exceed 255, either use the standard output of the script (which will be typed as a string) or a different method of extending the agent. This last example shows a query for the free memory of the system and the number of httpd processes. This query could be used during a performance test to determine the impact of the number of processes on memory pressure: 24.6.5.2. Extending Net-SNMP with Perl Executing shell scripts using the extend directive is a fairly limited method for exposing custom application metrics over SNMP. The Net-SNMP Agent also provides an embedded Perl interface for exposing custom objects. The net-snmp-perl package provides the NetSNMP::agent Perl module that is used to write embedded Perl plug-ins on Red Hat Enterprise Linux. The NetSNMP::agent Perl module provides an agent object which is used to handle requests for a part of the agent's OID tree. The agent object's constructor has options for running the agent as a sub-agent of snmpd or a standalone agent. No arguments are necessary to create an embedded agent: use NetSNMP::agent (':all'); my $agent = new NetSNMP::agent(); The agent object has a register method which is used to register a callback function with a particular OID. The register function takes a name, OID, and pointer to the callback function. The following example will register a callback function named hello_handler with the SNMP Agent which will handle requests under the OID .1.3.6.1.4.1.8072.9999.9999 : $agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999", \&hello_handler); Note The OID .1.3.6.1.4.1.8072.9999.9999 ( NET-SNMP-MIB::netSnmpPlaypen ) is typically used for demonstration purposes only. If your organization does not already have a root OID, you can obtain one by contacting an ISO Name Registration Authority (ANSI in the United States). The handler function will be called with four parameters, HANDLER , REGISTRATION_INFO , REQUEST_INFO , and REQUESTS . The REQUESTS parameter contains a list of requests in the current call and should be iterated over and populated with data. The request objects in the list have get and set methods which allow for manipulating the OID and value of the request. For example, the following call will set the value of a request object to the string "hello world": $request->setValue(ASN_OCTET_STR, "hello world"); The handler function should respond to two types of SNMP requests: the GET request and the GETNEXT request. The type of request is determined by calling the getMode method on the request_info object passed as the third parameter to the handler function. If the request is a GET request, the caller will expect the handler to set the value of the request object, depending on the OID of the request. If the request is a GETNEXT request, the caller will also expect the handler to set the OID of the request to the next available OID in the tree.
This is illustrated in the following code example: my $request; my $string_value = "hello world"; my $integer_value = "8675309"; for($request = $requests; $request; $request = $request->next()) { my $oid = $request->getOID(); if ($request_info->getMode() == MODE_GET) { if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) { $request->setValue(ASN_OCTET_STR, $string_value); } elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) { $request->setValue(ASN_INTEGER, $integer_value); } } elsif ($request_info->getMode() == MODE_GETNEXT) { if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) { $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1"); $request->setValue(ASN_INTEGER, $integer_value); } elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) { $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0"); $request->setValue(ASN_OCTET_STR, $string_value); } } } When getMode returns MODE_GET , the handler analyzes the value of the getOID call on the request object. The value of the request is set to either string_value if the OID ends in ".1.0", or set to integer_value if the OID ends in ".1.1". If getMode returns MODE_GETNEXT , the handler determines whether the OID of the request is ".1.0", and then sets the OID and value for ".1.1". If the request is higher on the tree than ".1.0", the OID and value for ".1.0" are set. This in effect returns the next value in the tree so that a program like snmpwalk can traverse the tree without prior knowledge of the structure. The type of the variable is set using constants from NetSNMP::ASN . See the perldoc for NetSNMP::ASN for a full list of available constants. The entire code listing for this example Perl plug-in is as follows: #!/usr/bin/perl use NetSNMP::agent (':all'); use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER); sub hello_handler { my ($handler, $registration_info, $request_info, $requests) = @_; my $request; my $string_value = "hello world"; my $integer_value = "8675309"; for($request = $requests; $request; $request = $request->next()) { my $oid = $request->getOID(); if ($request_info->getMode() == MODE_GET) { if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) { $request->setValue(ASN_OCTET_STR, $string_value); } elsif ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.1")) { $request->setValue(ASN_INTEGER, $integer_value); } } elsif ($request_info->getMode() == MODE_GETNEXT) { if ($oid == new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) { $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.1"); $request->setValue(ASN_INTEGER, $integer_value); } elsif ($oid < new NetSNMP::OID(".1.3.6.1.4.1.8072.9999.9999.1.0")) { $request->setOID(".1.3.6.1.4.1.8072.9999.9999.1.0"); $request->setValue(ASN_OCTET_STR, $string_value); } } } } my $agent = new NetSNMP::agent(); $agent->register("hello_world", ".1.3.6.1.4.1.8072.9999.9999", \&hello_handler); To test the plug-in, copy the above program to /usr/share/snmp/hello_world.pl and add the following line to the /etc/snmp/snmpd.conf configuration file: The SNMP Agent Daemon will need to be restarted to load the new Perl plug-in. Once it has been restarted, an snmpwalk should return the new data: The snmpget command can also be used to exercise the other mode of the handler:
[ "#!/bin/sh NUMPIDS=`pgrep httpd | wc -l` exit USDNUMPIDS", "extend httpd_pids /bin/sh /usr/local/bin/check_apache.sh", "~]USD snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendCommand.\"httpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendArgs.\"httpd_pids\" = STRING: /usr/local/bin/check_apache.sh NET-SNMP-EXTEND-MIB::nsExtendInput.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendCacheTime.\"httpd_pids\" = INTEGER: 5 NET-SNMP-EXTEND-MIB::nsExtendExecType.\"httpd_pids\" = INTEGER: exec(1) NET-SNMP-EXTEND-MIB::nsExtendRunType.\"httpd_pids\" = INTEGER: run-on-read(1) NET-SNMP-EXTEND-MIB::nsExtendStorage.\"httpd_pids\" = INTEGER: permanent(4) NET-SNMP-EXTEND-MIB::nsExtendStatus.\"httpd_pids\" = INTEGER: active(1) NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendOutputFull.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendOutNumLines.\"httpd_pids\" = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"httpd_pids\".1 = STRING:", "#!/bin/sh PATTERN=USD1 NUMPIDS=`pgrep USDPATTERN | wc -l` echo \"There are USDNUMPIDS USDPATTERN processes.\" exit USDNUMPIDS", "extend httpd_pids /bin/sh /usr/local/bin/check_proc.sh httpd extend snmpd_pids /bin/sh /usr/local/bin/check_proc.sh snmpd", "~]USD snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 2 NET-SNMP-EXTEND-MIB::nsExtendCommand.\"httpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendCommand.\"snmpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendArgs.\"httpd_pids\" = STRING: /usr/local/bin/check_proc.sh httpd NET-SNMP-EXTEND-MIB::nsExtendArgs.\"snmpd_pids\" = STRING: /usr/local/bin/check_proc.sh snmpd NET-SNMP-EXTEND-MIB::nsExtendInput.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendInput.\"snmpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 NET-SNMP-EXTEND-MIB::nsExtendResult.\"snmpd_pids\" = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"httpd_pids\".1 = STRING: There are 8 httpd processes. 
NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"snmpd_pids\".1 = STRING: There are 1 snmpd processes.", "~]USD snmpget localhost 'NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\"' UCD-SNMP-MIB::memAvailReal.0 NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 799664 kB", "use NetSNMP::agent (':all'); my USDagent = new NetSNMP::agent();", "USDagent->register(\"hello_world\", \".1.3.6.1.4.1.8072.9999.9999\", \\&hello_handler);", "USDrequest->setValue(ASN_OCTET_STR, \"hello world\");", "my USDrequest; my USDstring_value = \"hello world\"; my USDinteger_value = \"8675309\"; for(USDrequest = USDrequests; USDrequest; USDrequest = USDrequest->next()) { my USDoid = USDrequest->getOID(); if (USDrequest_info->getMode() == MODE_GET) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } elsif (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.1\")) { USDrequest->setValue(ASN_INTEGER, USDinteger_value); } } elsif (USDrequest_info->getMode() == MODE_GETNEXT) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.1\"); USDrequest->setValue(ASN_INTEGER, USDinteger_value); } elsif (USDoid < new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.0\"); USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } } }", "#!/usr/bin/perl use NetSNMP::agent (':all'); use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER); sub hello_handler { my (USDhandler, USDregistration_info, USDrequest_info, USDrequests) = @_; my USDrequest; my USDstring_value = \"hello world\"; my USDinteger_value = \"8675309\"; for(USDrequest = USDrequests; USDrequest; USDrequest = USDrequest->next()) { my USDoid = USDrequest->getOID(); if (USDrequest_info->getMode() == MODE_GET) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } elsif (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.1\")) { USDrequest->setValue(ASN_INTEGER, USDinteger_value); } } elsif (USDrequest_info->getMode() == MODE_GETNEXT) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.1\"); USDrequest->setValue(ASN_INTEGER, USDinteger_value); } elsif (USDoid < new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.0\"); USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } } } } my USDagent = new NetSNMP::agent(); USDagent->register(\"hello_world\", \".1.3.6.1.4.1.8072.9999.9999\", \\&hello_handler);", "perl do \"/usr/share/snmp/hello_world.pl\"", "~]USD snmpwalk localhost NET-SNMP-MIB::netSnmpPlaypen NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: \"hello world\" NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309", "~]USD snmpget localhost NET-SNMP-MIB::netSnmpPlaypen.1.0 NET-SNMP-MIB::netSnmpPlaypen.1.1 NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: \"hello world\" NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-system_monitoring_tools-net-snmp-extending
Chapter 1. Red Hat High Availability Add-On Configuration and Management Overview
Chapter 1. Red Hat High Availability Add-On Configuration and Management Overview Red Hat High Availability Add-On allows you to connect a group of computers (called nodes or members ) to work together as a cluster. You can use Red Hat High Availability Add-On to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS2 file system or setting up service failover). Note For information on best practices for deploying and upgrading Red Hat Enterprise Linux clusters using the High Availability Add-On and Red Hat Global File System 2 (GFS2) see the article "Red Hat Enterprise Linux Cluster, High Availability, and GFS Deployment Best Practices" on Red Hat Customer Portal at https://access.redhat.com/site/articles/40051 . This chapter provides a summary of documentation features and updates that have been added to the Red Hat High Availability Add-On since the initial release of Red Hat Enterprise Linux 6, followed by an overview of configuring and managing the Red Hat High Availability Add-On. 1.1. New and Changed Features This section lists new and changed features of the Red Hat High Availability Add-On documentation that have been added since the initial release of Red Hat Enterprise Linux 6. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 6.1 Red Hat Enterprise Linux 6.1 includes the following documentation and feature updates and changes. As of the Red Hat Enterprise Linux 6.1 release and later, the Red Hat High Availability Add-On provides support for SNMP traps. For information on configuring SNMP traps with the Red Hat High Availability Add-On, see Chapter 11, SNMP Configuration with the Red Hat High Availability Add-On . As of the Red Hat Enterprise Linux 6.1 release and later, the Red Hat High Availability Add-On provides support for the ccs cluster configuration command. For information on the ccs command, see Chapter 6, Configuring Red Hat High Availability Add-On With the ccs Command and Chapter 7, Managing Red Hat High Availability Add-On With ccs . The documentation for configuring and managing Red Hat High Availability Add-On software using Conga has been updated to reflect updated Conga screens and feature support. For the Red Hat Enterprise Linux 6.1 release and later, using ricci requires a password the first time you propagate updated cluster configuration from any particular node. For information on ricci see Section 3.13, "Considerations for ricci " . You can now specify a Restart-Disable failure policy for a service, indicating that the system should attempt to restart the service in place if it fails, but if restarting the service fails the service will be disabled instead of being moved to another host in the cluster. This feature is documented in Section 4.10, "Adding a Cluster Service to the Cluster" and Appendix B, HA Resource Parameters . You can now configure an independent subtree as non-critical, indicating that if the resource fails then only that resource is disabled. For information on this feature see Section 4.10, "Adding a Cluster Service to the Cluster" and Section C.4, "Failure Recovery and Independent Subtrees" . This document now includes the new chapter Chapter 10, Diagnosing and Correcting Problems in a Cluster . In addition, small corrections and clarifications have been made throughout the document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 6.2 Red Hat Enterprise Linux 6.2 includes the following documentation and feature updates and changes. 
Red Hat Enterprise Linux now provides support for running Clustered Samba in an active/active configuration. For information on clustered Samba configuration, see Chapter 12, Clustered Samba Configuration . Any user able to authenticate on the system that is hosting luci can log in to luci . As of Red Hat Enterprise Linux 6.2, only the root user on the system that is running luci can access any of the luci components until an administrator (the root user or a user with administrator permission) sets permissions for that user. For information on setting luci permissions for users, see Section 4.3, "Controlling Access to luci" . The nodes in a cluster can communicate with each other using the UDP unicast transport mechanism. For information on configuring UDP unicast, see Section 3.12, "UDP Unicast Traffic" . You can now configure some aspects of luci 's behavior by means of the /etc/sysconfig/luci file. For example, you can configure the specific IP address at which luci is served. For information on configuring this address, see Table 3.2, "Enabled IP Port on a Computer That Runs luci " . For information on the /etc/sysconfig/luci file in general, see Section 3.4, "Configuring luci with /etc/sysconfig/luci " . The ccs command now includes the --lsfenceopts option, which prints a list of available fence devices, and the --lsfenceopts fence_type option, which prints the options for a particular fence type. For information on these options, see Section 6.6, "Listing Fence Devices and Fence Device Options" . The ccs command now includes the --lsserviceopts option, which prints a list of cluster services currently available for your cluster, and the --lsserviceopts service_type option, which prints a list of the options you can specify for a particular service type. For information on these options, see Section 6.11, "Listing Available Cluster Services and Resources" . The Red Hat Enterprise Linux 6.2 release provides support for the VMware (SOAP Interface) fence agent. For information on fence device parameters, see Appendix A, Fence Device Parameters . The Red Hat Enterprise Linux 6.2 release provides support for the RHEV-M REST API fence agent, against RHEV 3.0 and later. For information on fence device parameters, see Appendix A, Fence Device Parameters . As of the Red Hat Enterprise Linux 6.2 release, when you configure a virtual machine in a cluster with the ccs command you can use the --addvm option (rather than the addservice option). This ensures that the vm resource is defined directly under the rm configuration node in the cluster configuration file. For information on configuring virtual machine resources with the ccs command, see Section 6.12, "Virtual Machine Resources" . This document includes a new appendix, Appendix D, Modifying and Enforcing Cluster Service Resource Actions . This appendix describes how rgmanager monitors the status of cluster resources, and how to modify the status check interval. The appendix also describes the __enforce_timeouts service parameter, which indicates that a timeout for an operation should cause a service to fail. This document includes a new section, Section 3.3.3, "Configuring the iptables Firewall to Allow Cluster Components" . This section shows the filtering you can use to allow multicast traffic through the iptables firewall for the various cluster components. In addition, small corrections and clarifications have been made throughout the document. 1.1.3.
New and Changed Features for Red Hat Enterprise Linux 6.3 Red Hat Enterprise Linux 6.3 includes the following documentation and feature updates and changes. The Red Hat Enterprise Linux 6.3 release provides support for the condor resource agent. For information on HA resource parameters, see Appendix B, HA Resource Parameters . This document includes a new appendix, Appendix F, High Availability LVM (HA-LVM) . Information throughout this document clarifies which configuration changes require a cluster restart. For a summary of these changes, see Section 10.1, "Configuration Changes Do Not Take Effect" . The documentation now notes that there is an idle timeout for luci that logs you out after 15 minutes of inactivity. For information on starting luci , see Section 4.2, "Starting luci " . The fence_ipmilan fence device supports a privilege level parameter. For information on fence device parameters, see Appendix A, Fence Device Parameters . This document includes a new section, Section 3.14, "Configuring Virtual Machines in a Clustered Environment" . This document includes a new section, Section 5.6, "Backing Up and Restoring the luci Configuration" . This document includes a new section, Section 10.4, "Cluster Daemon crashes" . This document provides information on setting debug options in Section 6.14.4, "Logging" , Section 8.7, "Configuring Debug Options" , and Section 10.13, "Debug Logging for Distributed Lock Manager (DLM) Needs to be Enabled" . As of Red Hat Enterprise Linux 6.3, the root user or a user who has been granted luci administrator permissions can also use the luci interface to add users to the system, as described in Section 4.3, "Controlling Access to luci" . As of the Red Hat Enterprise Linux 6.3 release, the ccs command validates the configuration according to the cluster schema at /usr/share/cluster/cluster.rng on the node that you specify with the -h option. Previously the ccs command always used the cluster schema that was packaged with the ccs command itself, /usr/share/ccs/cluster.rng on the local system. For information on configuration validation, see Section 6.1.6, "Configuration Validation" . The tables describing the fence device parameters in Appendix A, Fence Device Parameters and the tables describing the HA resource parameters in Appendix B, HA Resource Parameters now include the names of those parameters as they appear in the cluster.conf file. In addition, small corrections and clarifications have been made throughout the document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 6.4 Red Hat Enterprise Linux 6.4 includes the following documentation and feature updates and changes. The Red Hat Enterprise Linux 6.4 release provides support for the Eaton Network Power Controller (SNMP Interface) fence agent, the HP BladeSystem fence agent, and the IBM iPDU fence agent. For information on fence device parameters, see Appendix A, Fence Device Parameters . Appendix B, HA Resource Parameters now provides a description of the NFS Server resource agent. As of Red Hat Enterprise Linux 6.4, the root user or a user who has been granted luci administrator permissions can also use the luci interface to delete users from the system. This is documented in Section 4.3, "Controlling Access to luci" . Appendix B, HA Resource Parameters provides a description of the new nfsrestart parameter for the Filesystem and GFS2 HA resources. This document includes a new section, Section 6.1.5, "Commands that Overwrite Settings" . 
Section 3.3, "Enabling IP Ports" now includes information on filtering the iptables firewall for igmp . The IPMI LAN fence agent now supports a parameter to configure the privilege level on the IPMI device, as documented in Appendix A, Fence Device Parameters . In addition to Ethernet bonding mode 1, bonding modes 0 and 2 are now supported for inter-node communication in a cluster. Troubleshooting advice in this document that suggests you ensure that you are using only supported bonding modes now notes this. VLAN-tagged network devices are now supported for cluster heartbeat communication. Troubleshooting advice indicating that this is not supported has been removed from this document. The Red Hat High Availability Add-On now supports the configuration of redundant ring protocol. For general information on using this feature and configuring the cluster.conf configuration file, see Section 8.6, "Configuring Redundant Ring Protocol" . For information on configuring redundant ring protocol with luci , see Section 4.5.4, "Configuring Redundant Ring Protocol" . For information on configuring redundant ring protocol with the ccs command, see Section 6.14.5, "Configuring Redundant Ring Protocol" . In addition, small corrections and clarifications have been made throughout the document. 1.1.5. New and Changed Features for Red Hat Enterprise Linux 6.5 Red Hat Enterprise Linux 6.5 includes the following documentation and feature updates and changes. This document includes a new section, Section 8.8, "Configuring nfsexport and nfsserver Resources" . The tables of fence device parameters in Appendix A, Fence Device Parameters have been updated to reflect small updates to the luci interface. In addition, many small corrections and clarifications have been made throughout the document. 1.1.6. New and Changed Features for Red Hat Enterprise Linux 6.6 Red Hat Enterprise Linux 6.6 includes the following documentation and feature updates and changes. The tables of fence device parameters in Appendix A, Fence Device Parameters have been updated to reflect small updates to the luci interface. The tables of resource agent parameters in Appendix B, HA Resource Parameters have been updated to reflect small updates to the luci interface. Table B.3, "Bind Mount ( bind-mount Resource) (Red Hat Enterprise Linux 6.6 and later)" documents the parameters for the Bind Mount resource agent. As of Red Hat Enterprise Linux 6.6 release, you can use the --noenable option of the ccs --startall command to prevent cluster services from being enabled, as documented in Section 7.2, "Starting and Stopping a Cluster" Table A.26, "Fence kdump" documents the parameters for the kdump fence agent. As of the Red Hat Enterprise Linux 6.6 release, you can sort the columns in a resource list on the luci display by clicking on the header for the sort category, as described in Section 4.9, "Configuring Global Cluster Resources" . In addition, many small corrections and clarifications have been made throughout the document. 1.1.7. New and Changed Features for Red Hat Enterprise Linux 6.7 Red Hat Enterprise Linux 6.7 includes the following documentation and feature updates and changes. This document now includes a new chapter, Chapter 2, Getting Started: Overview , which provides a summary procedure for setting up a basic Red Hat High Availability cluster. Appendix A, Fence Device Parameters now includes a table listing the parameters for the Emerson Network Power Switch (SNMP interface). 
Appendix A, Fence Device Parameters now includes a table listing the parameters for the fence_xvm fence agent, titled "Fence virt (Multicast Mode)". The table listing the parameters for the fence_virt fence agent is now titled "Fence virt (Serial/VMChannel Mode)". Both tables have been updated to reflect the luci display. The troubleshooting procedure described in Section 10.10, "Quorum Disk Does Not Appear as Cluster Member" has been updated. In addition, many small corrections and clarifications have been made throughout the document. 1.1.8. New and Changed Features for Red Hat Enterprise Linux 6.8 Red Hat Enterprise Linux 6.8 includes the following documentation and feature updates and changes. Appendix A, Fence Device Parameters now includes a table listing the parameters for the fence_mpath fence agent, titled "Multipath Persistent Reservation Fencing". The table listing the parameters for the fence_ipmilan , fence_idrac , fence_imm , fence_ilo3 , and fence_ilo4 fence agents has been updated to reflect the luci display. Section F.3, "Creating New Logical Volumes for an Existing Cluster" now provides a procedure for creating new logical volumes in an existing cluster when using HA-LVM. 1.1.9. New and Changed Features for Red Hat Enterprise Linux 6.9 Red Hat Enterprise Linux 6.9 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 6.9, after you have entered a node name on the luci Create New Cluster dialog box or the Add Existing Cluster screen, the fingerprint of the certificate of the ricci host is displayed for confirmation, as described in Section 4.4, "Creating a Cluster" and Section 5.1, "Adding an Existing Cluster to the luci Interface" . Similarly, the fingerprint of the certificate of the ricci host is displayed for confirmation when you add a new node to a running cluster, as described in Section 5.3.3, "Adding a Member to a Running Cluster" . The luci Service Groups display for a selected service group now includes a table showing the actions that have been configured for each resource in that service group. For information on resource actions, see Appendix D, Modifying and Enforcing Cluster Service Resource Actions .
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-overview-ca
5.3. Default Settings
5.3. Default Settings The default settings configure parameters that apply to all proxy subsections in a configuration ( frontend , backend , and listen ). A typical defaults section may look like the following: Note Any parameter configured in a proxy subsection ( frontend , backend , or listen ) takes precedence over the parameter value in defaults . mode specifies the protocol for the HAProxy instance. Using the http mode connects source requests to real servers based on HTTP, which is ideal for load balancing web servers. For other applications, use the tcp mode. log specifies the log address and syslog facilities to which log entries are written. The global value refers the HAProxy instance to whatever is specified in the log parameter in the global section. option httplog enables logging of various values of an HTTP session, including HTTP requests, session status, connection numbers, source address, and connection timers, among other values. option dontlognull disables logging of null connections, meaning that HAProxy will not log connections in which no data has been transferred. This is not recommended for environments such as web applications over the Internet, where null connections could indicate malicious activities such as port scanning for vulnerabilities. retries specifies the number of times HAProxy will retry a connection to a real server after the first connection attempt fails. The various timeout values specify the length of time of inactivity for a given request, connection, or response. These values are generally expressed in milliseconds (unless explicitly stated otherwise) but may be expressed in any other unit by suffixing the unit to the numeric value. Supported units are us (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). http-request 10s gives 10 seconds to wait for a complete HTTP request from a client. queue 1m sets one minute as the amount of time to wait before a connection is dropped and a client receives a 503 or "Service Unavailable" error. connect 10s specifies the number of seconds to wait for a successful connection to a server. client 1m specifies the amount of time a client can remain inactive (when it neither accepts nor sends data). server 1m specifies the amount of time a server is given to accept or send data before a timeout occurs.
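Because a proxy subsection overrides defaults , an individual backend can be given a longer server timeout without changing the global policy. The following is a minimal sketch; the backend name and server address are placeholders, not values from this guide:

    backend static_pool
        mode http
        # overrides "timeout server 1m" from the defaults section for this backend only
        timeout server 2m
        server static1 192.168.10.21:80 check

All other parameters from the defaults section, such as retries and the remaining timeouts, continue to apply to this backend unchanged.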
[ "defaults mode http log global option httplog option dontlognull retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-haproxy-setup-defaults
B.109. yaboot
B.109. yaboot B.109.1. RHBA-2010:0854 - yaboot bug fix update An updated yaboot package that fixes a bug is now available. The yaboot package is a boot loader for Open Firmware based PowerPC systems. It can be used to boot IBM eServer System p machines. Bug Fix BZ# 642694 Previously, yaboot netboot failed to operate in an environment where the gateway is not the same as the 'tftp' server, even though the 'tftp' server is on the same subnet. This issue was caused by yaboot's inability to check whether an IP address is valid. With this update, an IP address validity check has been added that resolves this issue. All users of yaboot are advised to upgrade to this updated package, which resolves this issue.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/yaboot
2.3. Recording Statistical History
2.3. Recording Statistical History The ETL service collects data into the statistical tables every minute. Data is stored for every minute of the past 24 hours, at a minimum, but can be stored for as long as 48 hours depending on the last time a deletion job was run. Minute-by-minute data more than two hours old is aggregated into hourly data and stored for two months. Hourly data more than two days old is aggregated into daily data and stored for five years. Hourly data and daily data can be found in the hourly and daily tables. Each statistical datum is kept in its respective aggregation level table: samples, hourly, and daily history. All history tables also contain a history_id column to uniquely identify rows. Tables reference the configuration version of a host in order to enable reports on statistics of an entity in relation to its past configuration.
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/data_warehouse_guide/recording_statistical_history
Chapter 23. Configuring the cluster-wide proxy
Chapter 23. Configuring the cluster-wide proxy Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. After you enable a cluster-wide egress proxy for your cluster on a supported platform, Red Hat Enterprise Linux CoreOS (RHCOS) populates the status.noProxy parameter with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your install-config.yaml file that exists on the supported platform. Note As a postinstallation task, you can change the networking.clusterNetwork[].cidr value, but not the networking.machineNetwork[].cidr and the networking.serviceNetwork[] values. For more information, see "Configuring the cluster network range". For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the status.noProxy parameter is also populated with the instance metadata endpoint, 169.254.169.254 . Example of values added to the status: segment of a Proxy object by RHCOS apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster # ... networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 networkType: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 # ... status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4 # ... 1 Specify IP address blocks from which pod IP addresses are allocated. The default value is 10.128.0.0/14 with a host prefix of /23 . 2 Specify the IP address blocks for machines. The default value is 10.0.0.0/16 . 3 Specify the IP address block for services. The default value is 172.30.0.0/16 . 4 You can find the URL of the internal API server by running the oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.etcdDiscoveryDomain}' command. Important If your installation type does not include setting the networking.machineNetwork[].cidr field, you must include the machine IP addresses manually in the .status.noProxy field to make sure that the traffic between nodes can bypass the proxy. 23.1. Prerequisites Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. The system-wide proxy affects system components only, not user workloads. If necessary, add sites to the spec.noProxy parameter of the Proxy object to bypass the proxy. 23.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger node reboot.
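Because the MCO reboots nodes to apply the proxy settings, it can be useful to watch the rollout after saving the configuration. The following is a supplementary sketch using standard oc commands, not a step from the procedure below:

    # list the machine config pools and check whether they are still updating
    $ oc get machineconfigpool
    # optionally watch until UPDATED reports True for all pools
    $ oc get machineconfigpool --watch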
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: $ oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: $ oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude from proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 23.3. Removing the cluster-wide proxy The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object.
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Use the oc edit command to modify the proxy: $ oc edit proxy/cluster Remove all spec fields from the Proxy object. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} Save the file to apply the changes. 23.4. Verifying the cluster-wide proxy configuration After the cluster-wide proxy configuration is deployed, you can verify that it is working as expected. Follow these steps to check the logs and validate the implementation. Prerequisites You have cluster administrator permissions. You have the OpenShift Container Platform oc CLI tool installed. Procedure Check the proxy configuration status using the oc command: $ oc get proxy/cluster -o yaml Verify the proxy fields in the output to ensure they match your configuration. Specifically, check the spec.httpProxy , spec.httpsProxy , spec.noProxy , and spec.trustedCA fields. Inspect the status of the Proxy object: $ oc get proxy/cluster -o jsonpath='{.status}' Example output { status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com } Check the logs of the Machine Config Operator (MCO) to ensure that the configuration changes were applied successfully: $ oc logs -n openshift-machine-config-operator $(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name) Look for messages that indicate the proxy settings were applied and the nodes were rebooted if necessary. Verify that system components are using the proxy by checking the logs of a component that makes external requests, such as the Cluster Version Operator (CVO): $ oc logs -n openshift-cluster-version $(oc get pods -n openshift-cluster-version -l k8s-app=cluster-version-operator -o name) Look for log entries that show that external requests have been routed through the proxy. Additional resources Configuring the cluster network range Understanding the CA Bundle certificate Proxy certificates How is the cluster-wide proxy setting applied to OpenShift Container Platform nodes?
[ "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}", "oc get proxy/cluster -o yaml", "oc get proxy/cluster -o jsonpath='{.status}'", "{ status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com }", "oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)", "oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=machine-config-operator -o name)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/enable-cluster-wide-proxy
2.7. Storage Domain Autorecovery in Red Hat Virtualization
2.7. Storage Domain Autorecovery in Red Hat Virtualization Hosts in a Red Hat Virtualization environment monitor storage domains in their data centers by reading metadata from each domain. A storage domain becomes inactive when all hosts in a data center report that they cannot access the storage domain. Rather than disconnecting an inactive storage domain, the Manager assumes that the storage domain has become inactive temporarily, because of a temporary network outage for example. Once every 5 minutes, the Manager attempts to re-activate any inactive storage domains. Administrator intervention may be required to remedy the cause of the storage connectivity interruption, but the Manager handles re-activating storage domains as connectivity is restored.
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/storage_domain_autorecovery_in_red_hat_enterprise_virtualization
Using Shenandoah garbage collector with Red Hat build of OpenJDK 17
Using Shenandoah garbage collector with Red Hat build of OpenJDK 17 Red Hat build of OpenJDK 17 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_17/index
Chapter 30. Adding the IdM CA service to an IdM server in a deployment without a CA
Chapter 30. Adding the IdM CA service to an IdM server in a deployment without a CA If you previously installed an Identity Management (IdM) domain without the certificate authority (CA) component, you can add the IdM CA service to the domain by using the ipa-ca-install command. Depending on your requirements, you can select one of the following options: Note For details on the supported CA configurations, see Planning your CA services . 30.1. Installing the first IdM CA as the root CA into an existing IdM domain If you previously installed Identity Management (IdM) without the certificate authority (CA) component, you can install the CA on an IdM server subsequently. Follow this procedure to install, on the idmserver server, an IdM CA that is not subordinate to any external root CA. Prerequisites You have root permissions on idmserver . The IdM server is installed on idmserver . Your IdM deployment has no CA installed. You know the IdM Directory Manager password. Procedure On idmserver , install the IdM Certificate Server CA: On each IdM host in the topology, run the ipa-certupdate utility to update the host with the information about the new certificate from the IdM LDAP. Important If you do not run ipa-certupdate after generating the IdM CA certificate, the certificate will not be distributed to the other IdM machines. 30.2. Installing the first IdM CA with an external CA as the root CA into an existing IdM domain If you previously installed Identity Management (IdM) without the certificate authority (CA) component, you can install the CA on an IdM server subsequently. Follow this procedure to install, on the idmserver server, an IdM CA that is subordinate to an external root CA, with zero or more intermediate CAs in between. Prerequisites You have root permissions on idmserver . The IdM server is installed on idmserver . Your IdM deployment has no CA installed. You know the IdM Directory Manager password. Procedure Start the installation: Wait until the command line informs you that a certificate signing request (CSR) has been saved. Submit the CSR to the external CA. Copy the issued certificate to the IdM server. Continue the installation by adding the certificates and full path to the external CA files to ipa-ca-install : On each IdM host in the topology, run the ipa-certupdate utility to update the host with the information about the new certificate from the IdM LDAP. Important Failing to run ipa-certupdate after generating the IdM CA certificate means that the certificate will not be distributed to the other IdM machines.
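Both procedures end with running ipa-certupdate on every IdM host. A small sketch of doing this from the CA server follows; the replica and client host names are placeholders, and any equivalent way of running the utility on each host works just as well:

    # run ipa-certupdate on every other IdM server, replica, and client in the topology
    for host in replica1.example.com client1.example.com; do
        ssh root@"$host" ipa-certupdate
    done
    # and run it on the local server as well
    ipa-certupdate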
[ "[root@idmserver ~] ipa-ca-install", "[root@idmserver ~] ipa-ca-install --external-ca", "ipa-ca-install --external-cert-file=/root/master.crt --external-cert-file=/root/ca.crt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/adding-the-idm-ca-service-to-an-idm-server-in-a-deployment-without-a-ca_installing-identity-management
3.7. Selecting an Installation Method
3.7. Selecting an Installation Method What type of installation method do you wish to use? The following installation methods are available: DVD If you have a DVD drive and the Red Hat Enterprise Linux DVD you can use this method. Refer to Section 8.3.1, "Installing from a DVD" , for DVD installation instructions. If you booted the installation from a piece of media other than the installation DVD, you can specify the DVD as the installation source with the linux askmethod or linux repo=cdrom: device :/ device boot option, or by selecting Local CD/DVD on the Installation Method menu (refer to Section 8.3, "Installation Method" ). Hard Drive If you have copied the Red Hat Enterprise Linux ISO images to a local hard drive, you can use this method. You need a boot CD-ROM (use the linux askmethod or linux repo=hd: device :/ path boot option, or select Hard drive on the Installation Method menu described in Section 8.3, "Installation Method" ). Refer to Section 8.3.2, "Installing from a Hard Drive" , for hard drive installation instructions. NFS If you are installing from an NFS server using ISO images or a mirror image of Red Hat Enterprise Linux, you can use this method. You need a boot CD-ROM (use the linux askmethod or linux repo=nfs: server :options :/ path boot option, or the NFS directory option on the Installation Method menu described in Section 8.3, "Installation Method" ). Refer to Section 8.3.4, "Installing via NFS" for network installation instructions. Note that NFS installations may also be performed in GUI mode. URL If you are installing directly from an HTTP or HTTPS (Web) server or an FTP server, use this method. You need a boot CD-ROM (use the linux askmethod , linux repo=ftp:// user : password @ host / path , linux repo=http:// host / path , or linux repo=https:// host / path boot option, or the URL option on the Installation Method menu described in Section 8.3, "Installation Method" ). Refer to Section 8.3.5, "Installing via FTP, HTTP, or HTTPS" , for FTP, HTTP, and HTTPS installation instructions. If you booted the distribution DVD and did not use the alternate installation source option askmethod , the next stage loads automatically from the DVD. Proceed to Section 8.2, "Language Selection" . Note If you boot from a Red Hat Enterprise Linux installation DVD, the installation program loads its next stage from that disc. This happens regardless of which installation method you choose, unless you eject the disc before you proceed. The installation program still downloads package data from the source you choose.
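As an illustration of the boot options listed above, a URL installation could be started from the boot prompt with a line such as the following; the host name and path are placeholders for your own mirror, not values from this guide:

    linux repo=http://mirror.example.com/rhel6/Server/x86_64/os/

The linux askmethod option instead brings up the Installation Method menu so that the same source can be selected interactively.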
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-installmethod-x86
Chapter 3. Verifying OpenShift Data Foundation deployment for Internal-attached devices mode
Chapter 3. Verifying OpenShift Data Foundation deployment for Internal-attached devices mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.1. Verifying the state of the pods Procedure Click Workloads → Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation cluster" . Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Table 3.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 3.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage → Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard under Overview tab, verify that both Storage Cluster and Data Resiliency have a green tick mark. In the Details card , verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage → Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 3.4. Verifying that the specific storage classes exist Procedure Click Storage → Storage Classes from the left pane of the OpenShift Web Console.
Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw
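If you prefer the command line over the web console, a rough equivalent of the checks above can be run with the OpenShift CLI; this is a sketch, assuming the default openshift-storage namespace:

# List the pods in the openshift-storage project and confirm they are Running or Completed
oc get pods -n openshift-storage

# Check the status reported by the storage cluster resource
oc get storagecluster -n openshift-storage

# Verify that the expected storage classes exist
oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'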
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_ibm_z/verifying_openshift_data_foundation_deployment_for_internal_attached_devices_mode
Integrating an Overcloud with an Existing Red Hat Ceph Cluster
Integrating an Overcloud with an Existing Red Hat Ceph Cluster Red Hat OpenStack Platform 16.0 Configuring an Overcloud to Use Stand-Alone Red Hat Ceph Storage OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/integrating_an_overcloud_with_an_existing_red_hat_ceph_cluster/index
2.4. PKI with Certificate System
2.4. PKI with Certificate System The Certificate System is comprised of subsystems which each contribute different functions of a public key infrastructure. A PKI environment can be customized to fit individual needs by implementing different features and functions for the subsystems. Note A conventional PKI environment provides the basic framework to manage certificates stored in software databases. This is a non-TMS environment, since it does not manage certificates on smart cards. A TMS environment manages the certificates on smart cards. At a minimum, a non-TMS requires only a CA, but a non-TMS environment can use OCSP responders and KRA instances as well. 2.4.1. Issuing Certificates As stated, the Certificate Manager is the heart of the Certificate System. It manages certificates at every stage, from requests through enrollment (issuing), renewal, and revocation. The Certificate System supports enrolling and issuing certificates and processing certificate requests from a variety of end entities, such as web browsers, servers, and virtual private network (VPN) clients. Issued certificates conform to X.509 version 3 standards. For more information, see the About Enrolling and Renewing Certificates section in the Red Hat Certificate System Administration Guide . 2.4.1.1. The Enrollment Process An end entity enrolls in the PKI environment by submitting an enrollment request through the end-entity interface. There can be many kinds of enrollment that use different enrollment methods or require different authentication methods. Different interfaces can also accept different types of Certificate Signing Requests (CSR). The Certificate Manager supports different ways to submit CSRs, such as using the graphical interface and command-line tools. 2.4.1.1.1. Enrollment Using the User Interface For each enrollment through the user interface, there is a separate enrollment page created that is specific to the type of enrollment, type of authentication, and the certificate profiles associated with the type of certificate. The forms associated with enrollment can be customized for both appearance and content. Alternatively, the enrollment process can be customized by creating certificate profiles for each enrollment type. Certificate profiles dynamically-generate forms which are customized by configuring the inputs associated with the certificate profile. Different interfaces can also accept different types of Certificate Signing Requests (CSR). When an end entity enrolls in a PKI by requesting a certificate, the following events can occur, depending on the configuration of the PKI and the subsystems installed: The end entity provides the information in one of the enrollment forms and submits a request. The information gathered from the end entity is customizable in the form depending on the information collected to store in the certificate or to authenticate against the authentication method associated with the form. The form creates a request that is then submitted to the Certificate Manager. The enrollment form triggers the creation of the public and private keys or for dual-key pairs for the request. The end entity provides authentication credentials before submitting the request, depending on the authentication type. This can be LDAP authentication, PIN-based authentication, or certificate-based authentication. The request is submitted either to an agent-approved enrollment process or an automated process. 
The agent-approved process, which involves no end-entity authentication, sends the request to the request queue in the agent services interface, where an agent must process the request. An agent can then modify parts of the request, change the status of the request, reject the request, or approve the request. Automatic notification can be set up so an email is sent to an agent any time a request appears in the queue. Also, an automated job can be set to send a list of the contents of the queue to agents on a preconfigured schedule. The automated process, which involves end-entity authentication, processes the certificate request as soon as the end entity successfully authenticates. The form collects information about the end entity from an LDAP directory when the form is submitted. For certificate profile-based enrollment, the defaults for the form can be used to collect the user LDAP ID and password. The certificate profile associated with the form determines aspects of the certificate that is issued. Depending on the certificate profile, the request is evaluated to determine if the request meets the constraints set, if the required information is provided, and the contents of the new certificate. The form can also request that the user export the private encryption key. If the KRA subsystem is set up with this CA, the end entity's key is requested, and an archival request is sent to the KRA. This process generally requires no interaction from the end entity. The certificate request is either rejected because it did not meet the certificate profile or authentication requirements, or a certificate is issued. The certificate is delivered to the end entity. In automated enrollment, the certificate is delivered to the user immediately. Since the enrollment is normally through an HTML page, the certificate is returned as a response on another HTML page. In agent-approved enrollment, the certificate can be retrieved by serial number or request ID in the end-entity interface. If the notification feature is set up, the link where the certificate can be obtained is sent to the end user. An automatic notice can be sent to the end entity when the certificate is issued or rejected. The new certificate is stored in the Certificate Manager's internal database. If publishing is set up for the Certificate Manager, the certificate is published to a file or an LDAP directory. The internal OCSP service checks the status of certificates in the internal database when a certificate status request is received. The end-entity interface has a search form for certificates that have been issued and for the CA certificate chain. By default, the user interface supports CSRs in the PKCS #10 and Certificate Request Message Format (CRMF) formats. 2.4.1.1.2. Enrollment Using the Command Line This section describes the general workflows when enrolling certificates using the command line. 2.4.1.1.2.1. Enrolling Using the pki Utility For details, see: The pki-cert (1) man page The Command-Line Interfaces section in the Red Hat Certificate System Administration Guide . 2.4.1.1.2.2. Enrolling with CMC To enroll a certificate with CMC, proceed as follows: Generate a PKCS #10 or CRMF certificate signing request (CSR) using a utility, such as PKCS10Client or CRMFPopClient . Note If key archival is enabled in the Key Recovery Agent (KRA), use the CRMFPopClient utility with the KRA's transport certificate in Privacy Enhanced Mail (PEM) format set in the kra.transport file. Use the CMCRequest utility to convert the CSR into a CMC request.
The CMCRequest utility uses a configuration file as input. This file contains, for example, the path to the CSR and the CSR's format. For further details and examples, see the CMCRequest (1) man page. Use the HttpClient utility to send the CMC request to the CA. HttpClient uses a configuration file with settings, such as the path to the CMC request file and the servlet. If the HttpClient command succeeds, the utility receives a PKCS #7 chain with CMC status controls from the CA. For details about what parameters the utility provides, enter the HttpClient command without any parameters. Use the CMCResponse utility to check the issuance result of the PKCS #7 file generated by HttpClient . If the request is successful, CMCResponse displays the certificate chain in a readable format. For further details, see the CMCResponse (1) man page. Import the new certificate into the application. For details, follow the instructions of the application to which you want to import the certificate. Note The certificate retrieved by HttpClient is in PKCS #7 format. If the application supports only Base64-encoded certificates, use the BtoA utility to convert the certificate. Additionally, certain applications require a header and footer for certificates in Privacy Enhanced Mail (PEM) format. If these are required, add them manually to the PEM file after you converted the certificate. 2.4.1.1.2.2.1. CMC Enrollment without POP In situations when Proof Of Possession (POP) is missing, the HttpClient utility receives an EncryptedPOP CMC status, which is displayed by the CMCResponse command. In this case, enter the CMCRequest command again with different parameters in the configuration file. For details, see the The CMC Enrollment Process section in the Red Hat Certificate System Administration Guide . 2.4.1.1.2.2.2. Signed CMC Requests CMC requests can either be signed by a user or a CA agent: If an agent signs the request, set the authentication method in the profile to CMCAuth . If a user signs the request, set the authentication method in the profile to CMCUserSignedAuth . For details, see the CMC Authentication Plug-ins section in the Red Hat Certificate System Administration Guide . 2.4.1.1.2.2.3. Unsigned CMC Requests When the CMCUserSignedAuth authentication plug-in is configured in the profile, you must use an unsigned CMC request in combination with the Shared Secret authentication mechanism. Note Unsigned CMC requests are also called self-signed CMC requests . For details, see the CMC Authentication Plug-ins section in the Red Hat Certificate System Administration Guide , and Section 14.8.3, "Enabling the CMC Shared Secret Feature" . 2.4.1.1.2.2.4. The Shared Secret Workflow Certificate System provides the Shared Secret authentication mechanism for CMC requests according to RFC 5272 . In order to protect the passphrase, an issuance protection certificate must be provided when using the CMCSharedToken command. The issuance protection certificate works similar to the KRA transport certificate. For further details, see the CMCSharedToken (1) man page and Section 14.8.3, "Enabling the CMC Shared Secret Feature" . Shared Secret Created by the End Entity User (Preferred) The following describes the workflow, if the user generates the shared secret: The end entity user obtains the issuance protection certificate from the CA administrator. The end entity user uses the CMCSharedToken utility to generate a shared secret token. 
Note The -p option sets the passphrase that is shared between the CA and the user, not the password of the token. The end entity user sends the encrypted shared token generated by the CMCSharedToken utility to the administrator. The administrator adds the shared token into the shrTok attribute in the user's LDAP entry. The end entity user uses the passphrase to set the witness.sharedSecret parameter in the configuration file passed to the CMCRequest utility. Shared Secret Created by the CA Administrator The following describes the workflow, if the CA administrator generates the shared secret for a user: The administrator uses the CMCSharedToken utility to generate a shared secret token for the user. Note The -p option sets the passphrase that is shared between the CA and the user, not the password of the token. The administrator adds the shared token into the shrTok attribute in the user's LDAP entry. The administrator shares the passphrase with the user. The end entity user uses the passphrase to set the witness.sharedSecret parameter in the configuration file passed to the CMCRequest utility. 2.4.1.1.2.2.5. Simple CMC Requests Certificate System allows simple CMC requests. However, this process does not support the same level of security requirements as full CMC requests and, therefore, must only be used in a secure environment. When using simple CMC requests, set the following in the HttpClient utility's configuration file: 2.4.1.2. Certificate Profiles The Certificate System uses certificate profiles to configure the content of the certificate, the constraints for issuing the certificate, the enrollment method used, and the input and output forms for that enrollment. A single certificate profile is associated with issuing a particular type of certificate. A set of certificate profiles is included for the most common certificate types; the profile settings can be modified. Certificate profiles are configured by an administrator, and then sent to the agent services page for agent approval. Once a certificate profile is approved, it is enabled for use. In case of a UI-enrollment, a dynamically-generated HTML form for the certificate profile is used in the end-entities page for certificate enrollment, which calls on the certificate profile. In case of a command line-based enrollment, the certificate profile is called upon to perform the same processing, such as authentication, authorization, input, output, defaults, and constraints. The server verifies that the defaults and constraints set in the certificate profile are met before acting on the request and uses the certificate profile to determine the content of the issued certificate. The Certificate Manager can issue certificates with any of the following characteristics, depending on the configuration in the profiles and the submitted certificate request: Certificates that are X.509 version 3-compliant Unicode support for the certificate subject name and issuer name Support for empty certificate subject names Support for customized subject name components Support for customized extensions By default, the certificate enrollment profiles are stored in <instance directory>/ca/profiles/ca with names in the format of <profile id> .cfg . LDAP-based profiles are possible with proper pkispawn configuration parameters. 2.4.1.3. Authentication for Certificate Enrollment Certificate System provides authentication options for certificate enrollment. 
These include agent-approved enrollment, in which an agent processes the request, and automated enrollment, in which an authentication method is used to authenticate the end entity and then the CA automatically issues a certificate. CMC enrollment is also supported, which automatically processes a request approved by an agent. 2.4.1.4. Cross-Pair Certificates It is possible to create a trusted relationship between two separate CAs by issuing and storing cross-signed certificates between these two CAs. By using cross-signed certificate pairs, certificates issued outside the organization's PKI can be trusted within the system. 2.4.2. Renewing Certificates When certificates reach their expiration date, they can either be allowed to lapse, or they can be renewed. Renewal regenerates a certificate request using the existing key pairs for that certificate, and then resubmits the request to Certificate Manager. The renewed certificate is identical to the original (since it was created from the same profile using the same key material) with one exception - it has a different, later expiration date. Renewal can make managing certificates and relationships between users and servers much smoother, because the renewed certificate functions precisely as the old one. For user certificates, renewal allows encrypted data to be accessed without any loss. 2.4.3. Publishing Certificates and CRLs Certificates can be published to files and an LDAP directory, and CRLs to files, an LDAP directory, and an OCSP responder. The publishing framework provides a robust set of tools to publish to all three places and to set rules to define with more detail which types of certificates or CRLs are published where. 2.4.4. Revoking Certificates and Checking Status End entities can request that their own certificates be revoked. When an end entity makes the request, the certificate has to be presented to the CA. If the certificate and the keys are available, the request is processed and sent to the Certificate Manager, and the certificate is revoked. The Certificate Manager marks the certificate as revoked in its database and adds it to any applicable CRLs. An agent can revoke any certificate issued by the Certificate Manager by searching for the certificate in the agent services interface and then marking it revoked. Once a certificate is revoked, it is marked revoked in the database and in the publishing directory, if the Certificate is set up for publishing. If the internal OCSP service has been configured, the service determines the status of certificates by looking them up in the internal database. Automated notifications can be set to send email messages to end entities when their certificates are revoked by enabling and configuring the certificate revoked notification message. 2.4.4.1. Revoking Certificates Users can revoke their certificates using: The end-entity pages. For details, see the Certificate Revocation Pages section in the Red Hat Certificate System Administration Guide . The CMCRequest utility on the command line. For details, see the Performing a CMC Revocation section in the Red Hat Certificate System Administration Guide . The pki utility on the command line. For details, see pki-cert (1) man page. 2.4.4.2. Certificate Status 2.4.4.2.1. CRLs The Certificate System can create certificate revocation lists (CRLs) from a configurable framework which allows user-defined issuing points so a CRL can be created for each issuing point. Delta CRLs can also be created for any issuing point that is defined. 
CRLs can be issued for each type of certificate, for a specific subset of a type of certificate, or for certificates generated according to a profile or list of profiles. The extensions used and the frequency and intervals when CRLs are published can all be configured. The Certificate Manager issues X.509-standard CRLs. A CRL can be automatically updated whenever a certificate is revoked or at specified intervals. 2.4.4.2.2. OCSP Services The Certificate System CA supports the Online Certificate Status Protocol (OCSP) as defined in PKIX standard RFC 2560 . The OCSP protocol enables OCSP-compliant applications to determine the state of a certificate, including the revocation status, without having to directly check a CRL published by a CA to the validation authority. The validation authority, which is also called an OCSP responder , checks for the application. A CA is set up to issue certificates that include the Authority Information Access extension, which identifies an OCSP responder that can be queried for the status of the certificate. The CA periodically publishes CRLs to an OCSP responder. The OCSP responder maintains the CRL it receives from the CA. An OCSP-compliant client sends requests containing all the information required to identify the certificate to the OCSP responder for verification. The applications determine the location of the OCSP responder from the value of the Authority Information Access extension in the certificate being validated. The OCSP responder determines if the request contains all the information required to process it. If it does not or if it is not enabled for the requested service, a rejection notice is sent. If it does have enough information, it processes the request and sends back a report stating the status of the certificate. 2.4.4.2.2.1. OCSP Response Signing Every response that the client receives, including a rejection notification, is digitally signed by the responder; the client is expected to verify the signature to ensure that the response came from the responder to which it submitted the request. The key the responder uses to sign the message depends on how the OCSP responder is deployed in a PKI setup. RFC 2560 recommends that the key used to sign the response belong to one of the following: The CA that issued the certificate whose status is being checked. A responder with a public key trusted by the client. Such a responder is called a trusted responder . A responder that holds a specially marked certificate issued to it directly by the CA that revokes the certificates and publishes the CRL. Possession of this certificate by a responder indicates that the CA has authorized the responder to issue OCSP responses for certificates revoked by the CA. Such a responder is called a CA-designated responder or a CA-authorized responder . The end-entities page of a Certificate Manager includes a form for manually requesting a certificate for the OCSP responder. The default enrollment form includes all the attributes that identify the certificate as an OCSP responder certificate. The required certificate extensions, such as OCSPNoCheck and Extended Key Usage, can be added to the certificate when the certificate request is submitted. 2.4.4.2.2.2. OCSP Responses The OCSP response that the client receives indicates the current status of the certificate as determined by the OCSP responder. The response could be any of the following: Good or Verified . Specifies a positive response to the status inquiry, meaning the certificate has not been revoked. 
It does not necessarily mean that the certificate was issued or that it is within the certificate's validity interval. Response extensions may be used to convey additional information on assertions made by the responder regarding the status of the certificate. Revoked . Specifies that the certificate has been revoked, either permanently or temporarily. Based on the status, the client decides whether to validate the certificate. Note The OCSP responder will never return a response of Unknown . The response will always be either Good or Revoked . 2.4.4.2.2.3. OCSP Services There are two ways to set up OCSP services: The OCSP built into the Certificate Manager The Online Certificate Status Manager subsystem In addition to the built-in OCSP service, the Certificate Manager can publish CRLs to an OCSP-compliant validation authority. CAs can be configured to publish CRLs to the Certificate System Online Certificate Status Manager. The Online Certificate Status Manager stores each Certificate Manager's CRL in its internal database and uses the appropriate CRL to verify the revocation status of a certificate when queried by an OCSP-compliant client. The Certificate Manager can generate and publish CRLs whenever a certificate is revoked and at specified intervals. Because the purpose of an OCSP responder is to facilitate immediate verification of certificates, the Certificate Manager should publish the CRL to the Online Certificate Status Manager every time a certificate is revoked. Publishing only at intervals means that the OCSP service is checking an outdated CRL. Note If the CRL is large, the Certificate Manager can take a considerable amount of time to publish the CRL. The Online Certificate Status Manager stores each Certificate Manager's CRL in its internal database and uses it as the CRL to verify certificates. The Online Certificate Status Manager can also use the CRL published to an LDAP directory, meaning the Certificate Manager does not have to update the CRLs directly to the Online Certificate Status Manager. 2.4.5. Archiving, Recovering, and Rotating Keys In the world of PKI, private key archival allows parties the possibility to recover the encrypted data in case the private key is lost. Private keys can be lost due to various reasons such as hardware failure, forgotten passwords, lost smartcards, incapacitated password holder, et caetera. Such archival and recovery feature is offered by the Key Recovery Authority (KRA) subsystem of RHCS. Only keys that are used exclusively for encrypting data should be archived; signing keys in particular should never be archived. Having two copies of a signing key makes it impossible to identify with certainty who used the key; a second archived copy could be used to impersonate the digital identity of the original key owner. 2.4.5.1. Archiving Keys There are two types of key archival mechanisms provided by KRA: Client-side key generation : With this mechanism, clients are to generate CSRs in CRMF format, and submit the requests to the CA (with proper KRA setup) for enrollment and key archival. See the Creating a CSR Using CRMFPopClient section in the Red Hat Certificate System Administration Guide . Server-side key generation : With this mechanism, the properly equipped certificate enrollment profiles would trigger the PKI keys to be generated on KRA and thereby optionally archived along with newly issued certificates. See the Generating CSRs Using Server-Side Key Generation section in the Red Hat Certificate System Administration Guide . 
The KRA automatically archives private encryption keys if archiving is configured. The KRA stores private encryption keys in a secure key repository; each key is encrypted and stored as a key record and is given a unique key identifier. The archived copy of the key remains wrapped with the KRA's storage key. It can be decrypted, or unwrapped, only by using the corresponding private key pair of the storage certificate. A combination of one or more key recovery (or KRA) agents' certificates authorizes the KRA to complete the key recovery to retrieve its private storage key and use it to decrypt/recover an archived private key. See Section 17.3.1, "Configuring Agent-Approved Key Recovery in the Command Line" . The KRA indexes stored keys by key number, owner name, and a hash of the public key, allowing for highly efficient searching. The key recovery agents have the privilege to insert, delete, and search for key records. When the key recovery agents search by the key ID, only the key that corresponds to that ID is returned. When the agents search by username, all stored keys belonging to that owner are returned. When the agents search by the public key in a certificate, only the corresponding private key is returned. When a Certificate Manager receives a certificate request that contains the key archival option, it automatically forwards the request to the KRA to archive the encryption key. The private key is encrypted by the transport key, and the KRA receives the encrypted copy and stores the key in its key repository. To archive the key, the KRA uses two special key pairs: A transport key pair and corresponding certificate. A storage key pair. Figure 2.2, "How the Key Archival Process Works in Client-Side Key Generation" illustrates how the key archival process occurs when an end entity requests a certificate in the case of client-side key generation. Figure 2.2. How the Key Archival Process Works in Client-Side Key Generation The client generates a CRMF request and submits it through the CA's enrollment portal. The client's private key is wrapped within the CRMF request and can only be unwrapped by the KRA. Detecting that it's a CRMF request with key archival option, CA forwards the request to KRA for private key archival. The KRA decrypts / unwraps the user private key, and after confirming that the private key corresponds to the public key, the KRA encrypts / wraps it again before storing it in its internal LDAP database. Once the private encryption key has been successfully stored, the KRA responds to CA confirming that the key has been successfully archived. The CA sends the request down its Enrollment Profile Framework for certificate information content creation as well as validation. When everything passes, it then issues the certificate and sends it back to the end entity in its response. 2.4.5.2. Recovering Keys The KRA supports agent-initiated key recovery . Agent-initiated recovery is when designated recovery agents use the key recovery form on the KRA agent services portal to process and approve key recovery requests. With the approval of a specified number of agents, an organization can recover keys when the key's owner is unavailable or when keys have been lost. Through the KRA agent services portal, key recovery agents can collectively authorize and retrieve private encryption keys and associated certificates into a PKCS #12 package, which can then be imported into the client. 
In key recovery authorization, one of the key recovery agents informs all required recovery agents about an impending key recovery. All recovery agents access the KRA key recovery portal. One of the agents initiates the key recovery process. The KRA returns a notification to the agent that includes a recovery authorization reference number identifying the particular key recovery request that the agent is required to authorize. Each agent uses the reference number and authorizes key recovery separately. KRA supports Asynchronous recovery , meaning that each step of the recovery process - the initial request and each subsequent approval or rejection - is stored in the KRA's internal LDAP database, under the key entry. The status data for the recovery process can be retrieved even if the original browser session is closed or the KRA is shut down. Agents can search for the key to recover, without using a reference number. This asynchronous recovery option is illustrated in Figure 2.3, "Asynchronous Recovery" . Figure 2.3. Asynchronous Recovery The KRA informs the agent who initiated the key recovery process of the status of the authorizations. When all of the authorizations are entered, the KRA checks the information. If the information presented is correct, it retrieves the requested key and returns it along with the corresponding certificate in the form of a PKCS #12 package to the agent who initiated the key recovery process. Warning The PKCS #12 package contains the encrypted private key. To minimize the risk of key compromise, the recovery agent must use a secure method to deliver the PKCS #12 package and password to the key recipient. The agent should use a strong password to encrypt the PKCS #12 package and set up an appropriate delivery mechanism. The key recovery agent scheme configures the KRA to recognize to which group the key recovery agents belong and specifies how many of these recovery agents are required to authorize a key recovery request before the archived key is restored. Important The above information refers to using a web browser, such as Firefox. However, functionality critical to KRA usage is no longer included in Firefox version 31.6 that was released on Red Hat Enterprise Linux 7 platforms. In such cases, it is necessary to use the pki utility to replicate this behavior. For more information, see the pki (1) and pki-key (1) man pages or run CRMFPopClient --help and man CMCRequest. Apart from storing asymmetric keys, KRA can also store symmetric keys or secrets similar to symmetric keys, such as volume encryption secrets, or even passwords and passphrases. The pki utility supports options that enable storing and retrieving these other types of secrets. 2.4.5.3. KRA Transport Key Rotation KRA transport rotation allows for seamless transition between CA and KRA subsystem instances using a current and a new transport key. This allows KRA transport keys to be periodically rotated for enhanced security by allowing both old and new transport keys to operate during the time of the transition; individual subsystem instances take turns being configured while other clones continue to serve with no downtime. In the KRA transport key rotation process, a new transport key pair is generated, a certificate request is submitted, and a new transport certificate is retrieved. The new transport key pair and certificate have to be included in the KRA configuration to provide support for the second transport key.
Once KRA supports two transport keys, administrators can start transitioning CAs to the new transport key. KRA support for the old transport key can be removed once all CAs are moved to the new transport key. To configure KRA transport key rotation: Generate a new KRA transport key and certificate Transfer the new transport key and certificate to KRA clones Update the CA configuration with the new KRA transport certificate Update the KRA configuration to use only the new transport key and certificate After this, the rotation of KRA transport certificates is complete, and all the affected CAs and KRAs use the new KRA certificate only. For more information on how to perform the above steps, see the procedures below. Generating the new KRA transport key and certificate Request the KRA transport certificate. Stop the KRA: OR (if using the nuxwdog watchdog ) Go to the KRA NSS database directory: Create a subdirectory and save all the NSS database files into it. For example: Create a new request by using the PKCS10Client utility. For example: Alternatively, use the certutil utility. For example: Submit the transport certificate request on the Manual Data Recovery Manager Transport Certificate Enrollment page of the CA End-Entity page. Wait for the agent approval of the submitted request to retrieve the certificate by checking the request status on the End-Entity retrieval page. Approve the KRA transport certificate through the CA Agent Services interface. Retrieve the KRA transport certificate. Go to the KRA NSS database directory: Wait for the agent approval of the submitted request to retrieve the certificate by checking the request status on the End-Entity retrieval page. Once the new KRA transport certificate is available, paste its Base64-encoded value into a text file, for example a file named cert-serial_number .txt . Do not include the header ( -----BEGIN CERTIFICATE----- ) or the footer ( -----END CERTIFICATE----- ). Import the KRA transport certificate. Go to the KRA NSS database directory: Import the transport certificate into the KRA NSS database: Update the KRA transport certificate configuration. Go to the KRA NSS database directory: Verify that the new KRA transport certificate is imported: Open the /var/lib/pki/pki-kra/kra/conf/CS.cfg file and add the following line: Propagating the new transport key and certificate to KRA clones Start the KRA: or (if using the nuxwdog watchdog ) Extract the new transport key and certificate for propagation to clones. Go to the KRA NSS database directory: Stop the KRA: or (if using the nuxwdog watchdog ) Verify that the new KRA transport certificate is present: Export the KRA new transport key and certificate: Verify the exported KRA transport key and certificate: Perform these steps on each KRA clone: Copy the transport.p12 file, including the transport key and certificate, to the KRA clone location. Go to the clone NSS database directory: Stop the KRA clone: or (if using the nuxwdog watchdog ) Check the content of the clone NSS database: Import the new transport key and certificate of the clone: Add the following line to the /var/lib/pki/pki-kra/kra/conf/CS.cfg file on the clone: Start the KRA clone: or (if using the nuxwdog watchdog ) Updating the CA configuration with the new KRA transport certificate Format the new KRA transport certificate for inclusion in the CA. Obtain the cert-serial_number .txt KRA transport certificate file created when retrieving the KRA transport certificate in the procedure. 
Convert the Base64-encoded certificate included in cert-serial_number .txt to a single-line file: Do the following for the CA and all its clones corresponding to the KRA above: Stop the CA: or (if using the nuxwdog watchdog ) In the /var/lib/pki/pki-ca/ca/conf/CS.cfg file, locate the certificate included in the following line: Replace that certificate with the one contained in cert-one-line-serial_number .txt . Start the CA: or (if using the nuxwdog watchdog ) Note While the CA and all its clones are being updated with the new KRA transport certificate, the CA instances that have completed the transition use the new KRA transport certificate, and the CA instances that have not yet been updated continue to use the old KRA transport certificate. Because the corresponding KRA and its clones have already been updated to use both transport certificates, no downtime occurs. Updating the KRA configuration to use only the new transport key and certificate For the KRA and each of its clones, do the following: Go to the KRA NSS database directory: Stop the KRA: or (if using the nuxwdog watchdog ) Verify that the new KRA transport certificate is imported: Open the /var/lib/pki/pki-kra/kra/conf/CS.cfg file, and look for the nickName value included in the following line: Replace the nickName value with the newNickName value included in the following line: As a result, the CS.cfg file includes this line: Remove the following line from /var/lib/pki/pki-kra/kra/conf/CS.cfg : Start the KRA: or (if using the nuxwdog watchdog )
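As an illustration of the CMC enrollment flow described in section 2.4.1.1.2.2, the following sketch shows the order in which the utilities are typically invoked. The configuration file names (cmc-request.cfg, cmc-submit.cfg), the NSS database path, the subject DN, and the response file name are hypothetical placeholders; the exact parameters each utility accepts are documented in its man page:

# 1. Generate a PKCS #10 CSR (CRMFPopClient can be used instead for CRMF requests)
PKCS10Client -p password -d ~/nssdb -o user.csr -n 'CN=jdoe,O=example.com'

# 2. Convert the CSR into a CMC request; cmc-request.cfg points at user.csr and names the output file
CMCRequest cmc-request.cfg

# 3. Submit the CMC request to the CA; cmc-submit.cfg sets the servlet and the CMC request file
HttpClient cmc-submit.cfg

# 4. Inspect the PKCS #7 response returned by the CA
CMCResponse -d ~/nssdb -i cmc-response.bin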
[ "servlet=/ca/ee/ca/profileSubmitCMCSimple?profileId=caECSimpleCMCUserCert", "pki-server stop pki-kra", "systemctl stop [email protected]", "cd /etc/pki/pki-kra/alias", "mkdir nss_db_backup cp *.db nss_db_backup", "PKCS10Client -p password -d '.' -o ' req.txt ' -n 'CN=KRA Transport 2 Certificate,O=example.com Security Domain'", "certutil -d . -R -k rsa -g 2048 -s 'CN=KRA Transport 2 Certificate,O=example.com Security Domain' -f password-file -a -o transport-certificate-request-file", "cd /etc/pki/pki-kra/alias", "cd /etc/pki/pki-kra/alias", "certutil -d . -A -n ' transportCert-serial_number cert-pki-kra KRA' -t 'u,u,u' -a -i cert-serial_number .txt", "cd /etc/pki/pki-kra/alias", "certutil -d . -L certutil -d . -L -n ' transportCert-serial_number cert-pki-kra KRA'", "kra.transportUnit.newNickName= transportCert-serial_number cert-pki-kra KRA", "pki-server start pki-kra", "systemctl start [email protected]", "cd /etc/pki/pki-kra/alias", "pki-server stop pki-kra", "systemctl stop [email protected]", "certutil -d . -L certutil -d . -L -n ' transportCert-serial_number cert-pki-kra KRA'", "pk12util -o transport.p12 -d . -n ' transportCert-serial_number cert-pki-kra KRA'", "pk12util -l transport.p12", "cd /etc/pki/pki-kra/alias", "pki-server stop pki-kra", "systemctl stop [email protected]", "certutil -d . -L", "pk12util -i transport.p12 -d .", "kra.transportUnit.newNickName= transportCert-serial_number cert-pki-kra KRA", "pki-server start pki-kra", "systemctl start [email protected]", "tr -d '\\n' < cert-serial_number .txt > cert-one-line-serial_number .txt", "pki-server stop pki-ca", "systemctl stop [email protected]", "ca.connector.KRA.transportCert= certificate", "pki-server start pki-ca", "systemctl start [email protected]", "cd /etc/pki/pki-kra/alias", "pki-server stop pki-kra", "systemctl stop [email protected]", "certutil -d . -L certutil -d . -L -n ' transportCert-serial_number cert-pki-kra KRA'", "kra.transportUnit. nickName= transportCert cert-pki-kra KRA", "kra.transportUnit. newNickName= transportCert-serial_number cert-pki-kra KRA", "kra.transportUnit. nickName= transportCert-serial_number cert-pki-kra KRA", "kra.transportUnit.newNickName= transportCert-serial_number cert-pki-kra KRA", "pki-server start pki-kra", "systemctl start [email protected]" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/managing-pki
Chapter 4. ResourceAccessReview [authorization.openshift.io/v1]
Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" verb string Verb is one of: get, list, watch, create, update, delete 4.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/resourceaccessreviews POST : create a ResourceAccessReview 4.2.1. /apis/authorization.openshift.io/v1/resourceaccessreviews Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a ResourceAccessReview Table 4.2. Body parameters Parameter Type Description body ResourceAccessReview schema Table 4.3. HTTP responses HTTP code Response body 200 - OK ResourceAccessReview schema 201 - Created ResourceAccessReview schema 202 - Accepted ResourceAccessReview schema 401 - Unauthorized Empty
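To make the endpoint above concrete, here is a sketch of submitting a ResourceAccessReview with the oc client; the namespace, verb, and resource values are illustrative only, and the server's response, which lists the authorized users and groups, is printed as YAML:

# Ask which users and groups are allowed to list pods in the "default" namespace
oc create -o yaml -f - <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: ResourceAccessReview
namespace: default
verb: list
resourceAPIGroup: ""
resourceAPIVersion: v1
resource: pods
resourceName: ""
path: ""
isNonResourceURL: false
EOF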
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authorization_apis/resourceaccessreview-authorization-openshift-io-v1
Chapter 5. Environment variables and model expression resolution
Chapter 5. Environment variables and model expression resolution 5.1. Prerequisites You have some basic knowledge of how to configure environment variables on an operating system. For configuring environment variables on the OpenShift Container Platform, you must meet the following prerequisites: You have already installed OpenShift and set up the OpenShift CLI ("oc"). For more information about the oc CLI, see Getting Started with the OpenShift CLI . You have deployed your application to OpenShift using a Helm chart. For more information about Helm charts, see Helm Charts for JBoss EAP . 5.2. Environment variables for resolving management model expressions To resolve management model expressions and to start your JBoss EAP 8.0 server on the OpenShift Container Platform, you can either add environment variables or set Java system properties in the management command-line interface (CLI). If you use both, JBoss EAP observes and uses the Java system property rather than the environment variable to resolve the management model expression. System property to environment variable mapping Imagine that you have this management expression: ${my.example-expr} . When your JBoss EAP server tries to resolve it, it checks for a system property named my.example-expr . If your server finds this property, it uses its value to resolve the expression. If it doesn't find this property, your server continues searching. Next, assuming that your server does not find the system property my.example-expr , it automatically changes my.example-expr to all uppercase letters and replaces all characters that aren't alphanumeric with underscores (_): MY_EXAMPLE_EXPR . JBoss EAP then checks for an environment variable with that name. If your server finds this variable, it uses its value to resolve the expression. If it doesn't find this variable, your server continues searching. Tip If your original expression starts with the prefix env. , JBoss EAP resolves the environment variable by removing the prefix, then looking for only the environment variable name. For example, for the expression env.example , JBoss EAP looks for an example environment variable. If none of these checks finds a property or variable to resolve your original expression, JBoss EAP looks for whether the expression has a default value. If it does, that default value resolves the expression. If not, then JBoss EAP can't resolve the expression. Example with two servers Suppose that, on one server, JBoss EAP defines this management resource: <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> . To run a second server with a different port offset, instead of editing the configuration file, do one of the following: Set the jboss.socket.binding.port-offset Java system property to resolve the value on the second server: ./standalone.sh -Djboss.socket.binding.port-offset=100 . Set the JBOSS_SOCKET_BINDING_PORT_OFFSET environment variable to resolve the value on the second server: JBOSS_SOCKET_BINDING_PORT_OFFSET=100 ./standalone.sh . 5.3. Configuring environment variables on the OpenShift Container Platform With JBoss EAP 8.0, you can configure environment variables to resolve management model expressions. You can also use environment variables to adapt the configuration of the JBoss EAP server you're running on OpenShift. Set environment variables and options on a resource that uses a pod template: Option Description -e, --env=<KEY>=<VAL> Set given key-value pairs of environment variables.
--overwrite Confirm update of existing environment variables. Note Kubernetes workload resources that use pod templates include the following: Deployment ReplicaSet StatefulSet DaemonSet Job CronJob After you configure your environment variables, the JBoss EAP management console should display them in the details for their related pods. Additional resources About the OpenShift CLI Red Hat JBoss Enterprise Application Platform Configuration Guide 5.4. Overriding management attributes with environment variables You know that you can use a Java system property or an environment variable to resolve a management attribute that's defined with an expression, but you can also modify other attributes, even if they don't use expressions. To more easily adapt your JBoss EAP server configuration to your server environment, you can use an environment variable to override the value of any management attribute, without ever having to edit your configuration file. This feature, which is available starting with the JBoss EAP version 8.0, is useful for the following reasons: JBoss EAP provides expressions for only its most common management attributes. Now, you can change the value of an attribute that has no defined expression. Some management attributes connect your JBoss EAP server with other services, such as a database, whose values you can't know in advance, or whose values you can't store in a configuration; for example, in database credentials. By using environment variables, you can defer the configuration of such attributes while your JBoss EAP server is running. Important This feature is enabled by default, starting with JBoss EAP version 8.0 OpenShift runtime image. To enable it on other platforms, you must set the WILDFLY_OVERRIDING_ENV_VARS environment variable to any value; for example, export WILDFLY_OVERRIDING_ENV_VARS=1 . Note You can't override management attributes whose type is LIST , OBJECT , or PROPERTY . Prerequisites You must have defined a management attribute that you now want to override. Procedure To override a management attribute with an environment variable, complete the following steps: Identify the path of the resource and attribute you want to change. For example, set the value of the proxy-address-forwarding attribute to true for the resource /subsystem=undertow/server=default-server/http-listener=default . Create the name of the environment variable to override this attribute by mapping the resource address and the management attribute, as follows: Remove the first slash ( / ) from the resource address: /subsystem=undertow/server=default-server/http-listener=default becomes subsystem=undertow/server=default-server/http-listener=default . Append two underscores (__) and the name of the attribute; for example: subsystem=undertow/server=default-server/http-listener=default__proxy-address-forwarding . Replace all non-alphanumeric characters with an underscore (_), and put the entire line of code in all capital letters: SUBSYSTEM_UNDERTOW_SERVER_DEFAULT_SERVER_HTTP_LISTENER_DEFAULT__PROXY_ADDRESS_FORWARDING . Set the environment value: SUBSYSTEM_UNDERTOW_SERVER_DEFAULT_SERVER_HTTP_LISTENER_DEFAULT__PROXY_ADDRESS_FORWARDING=true . Note These values are examples that you must replace with your actual configuration values.
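As a short illustration of the expression resolution and attribute override described above on the OpenShift Container Platform, the following sketch uses the generic oc set env syntax summarized in the command list that follows; the deployment name eap-app is a hypothetical placeholder:

# Resolve a management expression through an environment variable
oc set env deployment/eap-app JBOSS_SOCKET_BINDING_PORT_OFFSET=100

# Override a management attribute that has no expression, as described in section 5.4
oc set env deployment/eap-app SUBSYSTEM_UNDERTOW_SERVER_DEFAULT_SERVER_HTTP_LISTENER_DEFAULT__PROXY_ADDRESS_FORWARDING=true

# Confirm the environment variables that are set on the deployment
oc set env deployment/eap-app --list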
[ "oc set env <object-selection> KEY_1=VAL_1 ... KEY_N=VAL_N [<set-env-options>] [<common-options>]" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_on_openshift_container_platform/assembly_environment-variables-and-model-expression-resolution_default
Deploying Red Hat Enterprise Linux 7 on public cloud platforms
Deploying Red Hat Enterprise Linux 7 on public cloud platforms Red Hat Enterprise Linux 7 Creating custom Red Hat Enterprise Linux images and configuring a Red Hat High Availability cluster for public cloud platforms Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/index
Chapter 1. Ceph dashboard overview
Chapter 1. Ceph dashboard overview As a storage administrator, the Red Hat Ceph Storage Dashboard provides management and monitoring capabilities, allowing you to administer and configure the cluster, as well as visualize information and performance statistics related to it. The dashboard uses a web server hosted by the ceph-mgr daemon. The dashboard is accessible from a web browser and includes many useful management and monitoring features, for example, to configure manager modules and monitor the state of OSDs. 1.1. Prerequisites System administrator level experience. 1.2. Dashboard components The functionality of the dashboard is provided by multiple components. The Ansible automation application for deployment. The embedded dashboard ceph-mgr module. The embedded Prometheus ceph-mgr module. The Prometheus time-series database. The Prometheus node-exporter daemon, running on each node of the storage cluster. The Grafana platform to provide monitoring user interface and alerting. Additional Resources For more information, see the Ansible website For more information, see the Prometheus website . For more information, see the Grafana website . 1.3. Dashboard features The Ceph dashboard provides multiple features. Management features View cluster hierarchy : You can view the CRUSH map, for example, to determine which node a specific OSD ID is running on. This is helpful if there is an issue with an OSD. Configure manager modules : You can view and change parameters for ceph manager modules. View and filter logs : You can view event and audit cluster logs and filter them based on priority, keyword, date, or time range. Toggle dashboard components : You can enable and disable dashboard components so only the features you need are available. Manage OSD settings : You can set cluster-wide OSD flags using the dashboard. Viewing Alerts : The alerts page allows you to see details of current alerts. Quality of Service for images : You can set performance limits on images, for example limiting IOPS or read BPS burst rates. Monitoring features Username and password protection : You can access the dashboard only by providing a configurable user name and password. SSL and TLS support : All HTTP communication between the web browser and the dashboard is secured via SSL. A self-signed certificate can be created with a built-in command, but it is also possible to import custom certificates signed and issued by a Certificate Authority (CA). From Red Hat Ceph Storage 4.2, dashboard_protocol is set to https and Ansible generates the dashboard and grafana certificate. To plot data points and graphs, update the TLS handshake manually as: Alert manager API host - http://grafana_node:9093 Prometheus API host - http://grafana_node:9092 Grafana API Host - https://grafana_node:3000 Overall cluster health : Displays the overall cluster status, storage utilization (For example, number of objects, raw capacity, usage per pool), a list of pools and their status and usage statistics. Hosts : Provides a list of all hosts associated with the cluster along with the running services and the installed Ceph version. Performance counters : Displays detailed statistics for each running service. Monitors : Lists all Monitors, their quorum status and open sessions. Configuration Reference : Lists all available configuration options, their description and default values. Cluster logs : Display and filter the cluster's event and audit logs. 
View storage cluster capacity : You can view raw storage capacity of the Red Hat Ceph Storage cluster in the Capacity panels of the Ceph dashboard. Pools : Lists and manages all Ceph pools and their details. For example: applications, placement groups, replication size, EC profile, CRUSH ruleset, etc. OSDs : Lists and manages all OSDs, their status and usage statistics as well as detailed information like attributes (OSD map), metadata, performance counters and usage histograms for read/write operations. iSCSI : Lists all hosts that run the tcmu-runner service, displays all images and their performance characteristics, such as read and write operations or traffic. Images : Lists all RBD images and their properties such as size, objects, and features. Create, copy, modify and delete RBD images. Create, delete, and rollback snapshots of selected images, protect or unprotect these snapshots against modification. Copy or clone snapshots, flatten cloned images. Note The performance graph for I/O changes in the Overall Performance tab for a specific image shows values only after specifying the pool that includes that image by setting the rbd_stats_pool parameter in Cluster > Manager modules > Prometheus . Mirroring : Lists all active sync daemons and their status, pools and RBD images including their synchronization state. Filesystems : Lists all active Ceph file system (CephFS) clients and associated pools, including their usage statistics. Object Gateway (RGW) : Lists all active object gateways and their performance counters. Displays and manages (adds, edits, deletes) object gateway users and their details, for example quotas, as well as the users' buckets and their details, for example, owner or quotas. Additional Resources See Toggling dashboard components on or off in the Red Hat Ceph Storage Dashboard Guide for more information. 1.3.1. Toggling dashboard features on or off You can customize the Red Hat Ceph Storage dashboard components by enabling or disabling features on demand. All features are enabled by default. When disabling a feature, the web-interface elements become hidden and the associated REST API end-points reject any further requests for that feature. Enabling and disabling dashboard features can be done from the command-line interface or the web interface. Available features: Ceph Block Devices: Image management, rbd Mirroring, mirroring iSCSI gateway, iscsi Ceph Filesystem, cephfs Ceph Object Gateway, rgw Note By default, the Ceph Manager is collocated with the Ceph Monitor. Note You can disable multiple features at once. Important Once a feature is disabled, it can take up to 20 seconds to reflect the change in the web interface. Prerequisites Installation and configuration of the Red Hat Ceph Storage dashboard software. User access to the Ceph Manager node or the dashboard web interface. Procedure To toggle the dashboard features from the dashboard web interface: From the navigation bar on the dashboard page, navigate to Cluster , then Manager Modules , then click on Dashboard . This opens the Edit Manager module page. From the Edit Manager module page, you can enable or disable the dashboard features by checking or unchecking the selection box to the feature name. Once the selections have been made, click on the Update button at the bottom of the page. To toggle the dashboard features from the command-line interface: Log in to the Ceph Manager node. List the feature status: Disable a feature: This example disables the Ceph iSCSI gateway feature. 
Enable a feature: This example enables the Ceph Filesystem feature. 1.4. Dashboard architecture The dashboard architecture depends on the embedded Ceph Manager dashboard module and the other components listed in Section 1.2, which work together to provide the management and monitoring features described in this chapter.
[ "[user@mon ~]USD ceph dashboard feature status", "[user@mon ~]USD ceph dashboard feature disable iscsi", "[user@mon ~]USD ceph dashboard feature enable cephfs" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/dashboard_guide/ceph-dashboard-overview
Storage Guide
Storage Guide Red Hat OpenStack Platform 16.0 Understanding, using, and managing persistent storage in OpenStack OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/storage_guide/index
Chapter 3. Providing DHCP services
Chapter 3. Providing DHCP services The dynamic host configuration protocol (DHCP) is a network protocol that automatically assigns IP information to clients. You can set up the dhcpd service to provide a DHCP server and DHCP relay in your network. 3.1. The difference between static and dynamic IP addressing Static IP addressing When you assign a static IP address to a device, the address does not change over time unless you change it manually. Use static IP addressing if you want: To ensure network address consistency for servers such as DNS, and authentication servers. To use out-of-band management devices that work independently of other network infrastructure. Dynamic IP addressing When you configure a device to use a dynamic IP address, the address can change over time. For this reason, dynamic addresses are typically used for devices that connect to the network occasionally because the IP address can be different after rebooting the host. Dynamic IP addresses are more flexible, easier to set up, and administer. The Dynamic Host Control Protocol (DHCP) is a traditional method of dynamically assigning network configurations to hosts. Note There is no strict rule defining when to use static or dynamic IP addresses. It depends on user's needs, preferences, and the network environment. 3.2. DHCP transaction phases The DHCP works in four phases: Discovery, Offer, Request, Acknowledgement, also called the DORA process. DHCP uses this process to provide IP addresses to clients. Discovery The DHCP client sends a message to discover the DHCP server in the network. This message is broadcasted at the network and data link layer. Offer The DHCP server receives messages from the client and offers an IP address to the DHCP client. This message is unicast at the data link layer but broadcast at the network layer. Request The DHCP client requests the DHCP server for the offered IP address. This message is unicast at the data link layer but broadcast at the network layer. Acknowledgment The DHCP server sends an acknowledgment to the DHCP client. This message is unicast at the data link layer but broadcast at the network layer. It is the final message of the DHCP DORA process. 3.3. The differences when using dhcpd for DHCPv4 and DHCPv6 The dhcpd service supports providing both DHCPv4 and DHCPv6 on one server. However, you need a separate instance of dhcpd with separate configuration files to provide DHCP for each protocol. DHCPv4 Configuration file: /etc/dhcp/dhcpd.conf Systemd service name: dhcpd DHCPv6 Configuration file: /etc/dhcp/dhcpd6.conf Systemd service name: dhcpd6 3.4. The lease database of the dhcpd service A DHCP lease is the period for which the dhcpd service allocates a network address to a client. The dhcpd service stores the DHCP leases in the following databases: For DHCPv4: /var/lib/dhcpd/dhcpd.leases For DHCPv6: /var/lib/dhcpd/dhcpd6.leases Warning Manually updating the database files can corrupt the databases. The lease databases contain information about the allocated leases, such as the IP address assigned to a media access control (MAC) address or the time stamp when the lease expires. Note that all time stamps in the lease databases are in Coordinated Universal Time (UTC). 
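The lease databases are plain-text files, so you can inspect the current assignments directly on the DHCP server. A minimal sketch, assuming root access and a running DHCPv4 server; the addresses, timestamps, and host name shown are placeholders, and the exact set of statements in an entry can vary:
cat /var/lib/dhcpd/dhcpd.leases
lease 192.0.2.20 {
  starts 4 2022/03/03 09:12:57;
  ends 4 2022/03/03 21:12:57;
  binding state active;
  hardware ethernet 52:54:00:72:2f:6e;
  client-hostname "client1";
}
Note that the starts and ends timestamps are in UTC, as described above.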
The dhcpd service recreates the databases periodically: The service renames the existing files: /var/lib/dhcpd/dhcpd.leases to /var/lib/dhcpd/dhcpd.leases~ /var/lib/dhcpd/dhcpd6.leases to /var/lib/dhcpd/dhcpd6.leases~ The service writes all known leases to the newly created /var/lib/dhcpd/dhcpd.leases and /var/lib/dhcpd/dhcpd6.leases files. Additional resources dhcpd.leases(5) man page on your system Restoring a corrupt lease database 3.5. Comparison of DHCPv6 to radvd In an IPv6 network, only router advertisement messages provide information about an IPv6 default gateway. As a consequence, if you want to use DHCPv6 in subnets that require a default gateway setting, you must additionally configure a router advertisement service, such as Router Advertisement Daemon ( radvd ). The radvd service uses flags in router advertisement packets to announce the availability of a DHCPv6 server. The following table compares features of DHCPv6 and radvd : DHCPv6 radvd Provides information about the default gateway no yes Guarantees random addresses to protect privacy yes no Sends further network configuration options yes no Maps media access control (MAC) addresses to IPv6 addresses yes no 3.6. Configuring the radvd service for IPv6 routers The router advertisement daemon ( radvd ) sends router advertisement messages that are required for IPv6 stateless autoconfiguration. This enables users to automatically configure their addresses, settings, routes, and to choose a default router based on these advertisements. Note You can only set /64 prefixes in the radvd service. To use other prefixes, use DHCPv6. Prerequisites You are logged in as the root user. Procedure Install the radvd package: Edit the /etc/radvd.conf file, and add the following configuration: These settings configures radvd to send router advertisement messages on the enp1s0 device for the 2001:db8:0:1::/64 subnet. The AdvManagedFlag on setting defines that the client should receive the IP address from a DHCP server, and the AdvOtherConfigFlag parameter set to on defines that clients should receive non-address information from the DHCP server as well. Optional: Configure that radvd automatically starts when the system boots: Start the radvd service: Verficiation Display the content of router advertisement packages and the configured values radvd sends: Additional resources radvd.conf(5) man page on your system /usr/share/doc/radvd/radvd.conf.example file Can I use a prefix length other than 64 bits in IPv6 Router Advertisements? 3.7. Setting network interfaces for the DHCP servers By default, the dhcpd service processes requests only on network interfaces that have an IP address in the subnet defined in the configuration file of the service. For example, in the following scenario, dhcpd listens only on the enp0s1 network interface: You have only a subnet definition for the 192.0.2.0/24 network in the /etc/dhcp/dhcpd.conf file. The enp0s1 network interface is connected to the 192.0.2.0/24 subnet. The enp7s0 interface is connected to a different subnet. Only follow this procedure if the DHCP server contains multiple network interfaces connected to the same network but the service should listen only on specific interfaces. Depending on whether you want to provide DHCP for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. The dhcp-server package is installed. 
Procedure For IPv4 networks: Copy the /usr/lib/systemd/system/dhcpd.service file to the /etc/systemd/system/ directory: Do not edit the /usr/lib/systemd/system/dhcpd.service file. Future updates of the dhcp-server package can override the changes. Edit the /etc/systemd/system/dhcpd.service file, and append the names of the interface, that dhcpd should listen on to the command in the ExecStart parameter: This example configures that dhcpd listens only on the enp0s1 and enp7s0 interfaces. Reload the systemd manager configuration: Restart the dhcpd service: For IPv6 networks: Copy the /usr/lib/systemd/system/dhcpd6.service file to the /etc/systemd/system/ directory: Do not edit the /usr/lib/systemd/system/dhcpd6.service file. Future updates of the dhcp-server package can override the changes. Edit the /etc/systemd/system/dhcpd6.service file, and append the names of the interface, that dhcpd should listen on to the command in the ExecStart parameter: This example configures that dhcpd listens only on the enp0s1 and enp7s0 interfaces. Reload the systemd manager configuration: Restart the dhcpd6 service: 3.8. Setting up the DHCP service for subnets directly connected to the DHCP server Use the following procedure if the DHCP server is directly connected to the subnet for which the server should answer DHCP requests. This is the case if a network interface of the server has an IP address of this subnet assigned. Depending on whether you want to provide DHCP for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. The dhcp-server package is installed. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Optional: Add global parameters that dhcpd uses as default if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool. For each IPv4 subnet directly connected to an interface of the server, add a subnet declaration: This example adds a subnet declaration for the 192.0.2.0/24 network. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from this subnet: A free IPv4 address from the range defined in the range parameter IP of the DNS server for this subnet: 192.0.2.1 Default gateway for this subnet: 192.0.2.1 Broadcast address for this subnet: 192.0.2.255 The maximum lease time, after which clients in this subnet release the IP and send a new request to the server: 172800 seconds (2 days) Optional: Configure that dhcpd starts automatically when the system boots: Start the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Optional: Add global parameters that dhcpd uses as default if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool. 
For each IPv6 subnet directly connected to an interface of the server, add a subnet declaration: This example adds a subnet declaration for the 2001:db8:0:1::/64 network. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from this subnet: A free IPv6 address from the range defined in the range6 parameter. The IP of the DNS server for this subnet is 2001:db8:0:1::1 . The maximum lease time, after which clients in this subnet release the IP and send a new request to the server is 172800 seconds (2 days). Note that IPv6 requires uses router advertisement messages to identify the default gateway. Optional: Configure that dhcpd6 starts automatically when the system boots: Start the dhcpd6 service: Additional resources The dhcp-options(5) and dhcpd.conf(5) man pages on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file 3.9. Setting up the DHCP service for subnets that are not directly connected to the DHCP server Use the following procedure if the DHCP server is not directly connected to the subnet for which the server should answer DHCP requests. This is the case if a DHCP relay agent forwards requests to the DHCP server, because none of the DHCP server's interfaces is directly connected to the subnet the server should serve. Depending on whether you want to provide DHCP for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. The dhcp-server package is installed. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Optional: Add global parameters that dhcpd uses as default if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool. Add a shared-network declaration, such as the following, for IPv4 subnets that are not directly connected to an interface of the server: This example adds a shared network declaration, that contains a subnet declaration for both the 192.0.2.0/24 and 198.51.100.0/24 networks. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from one of these subnets: The IP of the DNS server for clients from both subnets is: 192.0.2.1 . A free IPv4 address from the range defined in the range parameter, depending on from which subnet the client sent the request. The default gateway is either 192.0.2.1 or 198.51.100.1 depending on from which subnet the client sent the request. Add a subnet declaration for the subnet the server is directly connected to and that is used to reach the remote subnets specified in shared-network above: Note If the server does not provide DHCP service to this subnet, the subnet declaration must be empty as shown in the example. Without a declaration for the directly connected subnet, dhcpd does not start. 
Optional: Configure that dhcpd starts automatically when the system boots: Start the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Optional: Add global parameters that dhcpd uses as default if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool. Add a shared-network declaration, such as the following, for IPv6 subnets that are not directly connected to an interface of the server: This example adds a shared network declaration that contains a subnet6 declaration for both the 2001:db8:0:1::1:0/120 and 2001:db8:0:1::2:0/120 networks. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from one of these subnets: The IP of the DNS server for clients from both subnets is 2001:db8:0:1::1:1 . A free IPv6 address from the range defined in the range6 parameter, depending on from which subnet the client sent the request. Note that IPv6 requires uses router advertisement messages to identify the default gateway. Add a subnet6 declaration for the subnet the server is directly connected to and that is used to reach the remote subnets specified in shared-network above: Note If the server does not provide DHCP service to this subnet, the subnet6 declaration must be empty as shown in the example. Without a declaration for the directly connected subnet, dhcpd does not start. Optional: Configure that dhcpd6 starts automatically when the system boots: Start the dhcpd6 service: Additional resources The dhcp-options(5) and dhcpd.conf(5) man pages on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file Setting up a DHCP relay agent 3.10. Assigning a static address to a host using DHCP Using a host declaration, you can configure the DHCP server to assign a fixed IP address to a media access control (MAC) address of a host. For example, use this method to always assign the same IP address to a server or network device. Depending on whether you want to configure fixed addresses for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites The dhcpd service is configured and running. You are logged in as the root user. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Add a host declaration: This example configures the DHCP server to always assign the 192.0.2.130 IP address to the host with the 52:54:00:72:2f:6e MAC address. The dhcpd service identifies systems by the MAC address specified in the fixed-address parameter, and not by the name in the host declaration. As a consequence, you can set this name to any string that does not match other host declarations. To configure the same system for multiple networks, use a different name, otherwise, dhcpd fails to start. Optional: Add further settings to the host declaration that are specific for this host. Restart the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Add a host declaration: This example configures the DHCP server to always assign the 2001:db8:0:1::20 IP address to the host with the 52:54:00:72:2f:6e MAC address. 
The dhcpd service identifies systems by the MAC address specified in the fixed-address6 parameter, and not by the name in the host declaration. As a consequence, you can set this name to any string, provided that it is unique to other host declarations. To configure the same system for multiple networks, use a different name because, otherwise, dhcpd fails to start. Optional: Add further settings to the host declaration that are specific for this host. Restart the dhcpd6 service: Additional resources dhcp-options(5) man page on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file 3.11. Using a group declaration to apply parameters to multiple hosts, subnets, and shared networks at the same time Using a group declaration, you can apply the same parameters to multiple hosts, subnets, and shared networks. Note that the procedure describes using a group declaration for hosts, but the steps are the same for subnets and shared networks. Depending on whether you want to configure a group for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites The dhcpd service is configured and running. You are logged in as the root user. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Add a group declaration: This group definition groups two host entries. The dhcpd service applies the value set in the option domain-name-servers parameter to both hosts in the group. Optional: Add further settings to the group declaration that are specific for these hosts. Restart the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Add a group declaration: This group definition groups two host entries. The dhcpd service applies the value set in the option dhcp6.domain-search parameter to both hosts in the group. Optional: Add further settings to the group declaration that are specific for these hosts. Restart the dhcpd6 service: Additional resources dhcp-options(5) man page on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file 3.12. Restoring a corrupt lease database If the DHCP server logs an error that is related to the lease database, such as Corrupt lease file - possible data loss! ,you can restore the lease database from the copy the dhcpd service created. Note that this copy might not reflect the latest status of the database. Warning If you remove the lease database instead of replacing it with a backup, you lose all information about the currently assigned leases. As a consequence, the DHCP server could assign leases to clients that have been previously assigned to other hosts and are not expired yet. This leads to IP conflicts. Depending on whether you want to restore the DHCPv4, DHCPv6, or both databases, see the procedure for: Restoring the DHCPv4 lease database Restoring the DHCPv6 lease database Prerequisites You are logged in as the root user. The lease database is corrupt. Procedure Restoring the DHCPv4 lease database: Stop the dhcpd service: Rename the corrupt lease database: Restore the copy of the lease database that the dhcp service created when it refreshed the lease database: Important If you have a more recent backup of the lease database, restore this backup instead. 
Start the dhcpd service: Restoring the DHCPv6 lease database: Stop the dhcpd6 service: Rename the corrupt lease database: Restore the copy of the lease database that the dhcp service created when it refreshed the lease database: Important If you have a more recent backup of the lease database, restore this backup instead. Start the dhcpd6 service: Additional resources The lease database of the dhcpd service 3.13. Setting up a DHCP relay agent The DHCP Relay Agent ( dhcrelay ) enables the relay of DHCP and BOOTP requests from a subnet with no DHCP server on it to one or more DHCP servers on other subnets. When a DHCP client requests information, the DHCP Relay Agent forwards the request to the list of DHCP servers specified. When a DHCP server returns a reply, the DHCP Relay Agent forwards this request to the client. Depending on whether you want to set up a DHCP relay for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. Procedure For IPv4 networks: Install the dhcp-relay package: Copy the /lib/systemd/system/dhcrelay.service file to the /etc/systemd/system/ directory: Do not edit the /usr/lib/systemd/system/dhcrelay.service file. Future updates of the dhcp-relay package can override the changes. Edit the /etc/systemd/system/dhcrelay.service file, and append the -i interface parameter, together with a list of IP addresses of DHCPv4 servers that are responsible for the subnet: With these additional parameters, dhcrelay listens for DHCPv4 requests on the enp1s0 interface and forwards them to the DHCP server with the IP 192.0.2.1 . Reload the systemd manager configuration: Optional: Configure that the dhcrelay service starts when the system boots: Start the dhcrelay service: For IPv6 networks: Install the dhcp-relay package: Copy the /lib/systemd/system/dhcrelay.service file to the /etc/systemd/system/ directory and name the file dhcrelay6.service : Do not edit the /usr/lib/systemd/system/dhcrelay.service file. Future updates of the dhcp-relay package can override the changes. Edit the /etc/systemd/system/dhcrelay6.service file, and append the -l receiving_interface and -u outgoing_interface parameters: With these additional parameters, dhcrelay listens for DHCPv6 requests on the enp1s0 interface and forwards them to the network connected to the enp7s0 interface. Reload the systemd manager configuration: Optional: Configure that the dhcrelay6 service starts when the system boots: Start the dhcrelay6 service: Additional resources dhcrelay(8) man page on your system
[ "yum install radvd", "interface enp1s0 { AdvSendAdvert on; AdvManagedFlag on; AdvOtherConfigFlag on; prefix 2001:db8:0:1::/64 { }; };", "systemctl enable radvd", "systemctl start radvd", "radvdump", "cp /usr/lib/systemd/system/dhcpd.service /etc/systemd/system/", "ExecStart=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid USDDHCPDARGS enp0s1 enp7s0", "systemctl daemon-reload", "systemctl restart dhcpd.service", "cp /usr/lib/systemd/system/dhcpd6.service /etc/systemd/system/", "ExecStart=/usr/sbin/dhcpd -f -6 -cf /etc/dhcp/dhcpd6.conf -user dhcpd -group dhcpd --no-pid USDDHCPDARGS enp0s1 enp7s0", "systemctl daemon-reload", "systemctl restart dhcpd6.service", "option domain-name \"example.com\"; default-lease-time 86400;", "authoritative;", "subnet 192.0.2.0 netmask 255.255.255.0 { range 192.0.2.20 192.0.2.100; option domain-name-servers 192.0.2.1; option routers 192.0.2.1; option broadcast-address 192.0.2.255; max-lease-time 172800; }", "systemctl enable dhcpd", "systemctl start dhcpd", "option dhcp6.domain-search \"example.com\"; default-lease-time 86400;", "authoritative;", "subnet6 2001:db8:0:1::/64 { range6 2001:db8:0:1::20 2001:db8:0:1::100; option dhcp6.name-servers 2001:db8:0:1::1; max-lease-time 172800; }", "systemctl enable dhcpd6", "systemctl start dhcpd6", "option domain-name \"example.com\"; default-lease-time 86400;", "authoritative;", "shared-network example { option domain-name-servers 192.0.2.1; subnet 192.0.2.0 netmask 255.255.255.0 { range 192.0.2.20 192.0.2.100; option routers 192.0.2.1; } subnet 198.51.100.0 netmask 255.255.255.0 { range 198.51.100.20 198.51.100.100; option routers 198.51.100.1; } }", "subnet 203.0.113.0 netmask 255.255.255.0 { }", "systemctl enable dhcpd", "systemctl start dhcpd", "option dhcp6.domain-search \"example.com\"; default-lease-time 86400;", "authoritative;", "shared-network example { option domain-name-servers 2001:db8:0:1::1:1 subnet6 2001:db8:0:1::1:0/120 { range6 2001:db8:0:1::1:20 2001:db8:0:1::1:100 } subnet6 2001:db8:0:1::2:0/120 { range6 2001:db8:0:1::2:20 2001:db8:0:1::2:100 } }", "subnet6 2001:db8:0:1::50:0/120 { }", "systemctl enable dhcpd6", "systemctl start dhcpd6", "host server.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 192.0.2.130; }", "systemctl start dhcpd", "host server.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address6 2001:db8:0:1::200; }", "systemctl start dhcpd6", "group { option domain-name-servers 192.0.2.1; host server1.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 192.0.2.130; } host server2.example.com { hardware ethernet 52:54:00:1b:f3:cf; fixed-address 192.0.2.140; } }", "systemctl start dhcpd", "group { option dhcp6.domain-search \"example.com\"; host server1.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 2001:db8:0:1::200; } host server2.example.com { hardware ethernet 52:54:00:1b:f3:cf; fixed-address 2001:db8:0:1::ba3; } }", "systemctl start dhcpd6", "systemctl stop dhcpd", "mv /var/lib/dhcpd/dhcpd.leases /var/lib/dhcpd/dhcpd.leases.corrupt", "cp -p /var/lib/dhcpd/dhcpd.leases~ /var/lib/dhcpd/dhcpd.leases", "systemctl start dhcpd", "systemctl stop dhcpd6", "mv /var/lib/dhcpd/dhcpd6.leases /var/lib/dhcpd/dhcpd6.leases.corrupt", "cp -p /var/lib/dhcpd/dhcpd6.leases~ /var/lib/dhcpd/dhcpd6.leases", "systemctl start dhcpd6", "yum install dhcp-relay", "cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/", "ExecStart=/usr/sbin/dhcrelay -d --no-pid -i enp1s0 192.0.2.1", "systemctl daemon-reload", "systemctl 
enable dhcrelay.service", "systemctl start dhcrelay.service", "yum install dhcp-relay", "cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/dhcrelay6.service", "ExecStart=/usr/sbin/dhcrelay -d --no-pid -l enp1s0 -u enp7s0", "systemctl daemon-reload", "systemctl enable dhcrelay6.service", "systemctl start dhcrelay6.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_networking_infrastructure_services/providing-dhcp-services_networking-infrastructure-services
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/installing_and_using_red_hat_build_of_openjdk_17_on_rhel/proc-providing-feedback-on-redhat-documentation
Chapter 1. What is resource optimization for OpenShift?
Chapter 1. What is resource optimization for OpenShift? Resource optimization for OpenShift uses current and historical data from OpenShift to analyze usage and recommend actions. The service: Shows metrics for CPU and memory usage and analyzes them Compares defined container requests and limits Analyzes the historical usage patterns to return optimization recommendations Reports usage of applications and deployments Optimizes the size of your pods Manages costs The data that resource optimization for OpenShift provides can improve your resource allocation and help you save money on your OpenShift cluster deployment.
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_resource_optimization_for_openshift/what_is_resource_optimization_for_openshift
Chapter 8. Configuring your Logging deployment
Chapter 8. Configuring your Logging deployment 8.1. Configuring CPU and memory limits for logging components You can configure both the CPU and memory limits for each of the logging components as needed. 8.1.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: "fluentd" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed.
[ "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/configuring-your-logging-deployment
5.14. bind-dyndb-ldap
5.14. bind-dyndb-ldap 5.14.1. RHSA-2012:1139 - Important: bind-dyndb-ldap security update An updated bind-dyndb-ldap package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The dynamic LDAP back end is a plug-in for BIND that provides back-end capabilities to LDAP databases. It features support for dynamic updates and internal caching that help to reduce the load on LDAP servers. Security Fix CVE-2012-3429 A flaw was found in the way bind-dyndb-ldap performed the escaping of names from DNS requests for use in LDAP queries. A remote attacker able to send DNS queries to a named server that is configured to use bind-dyndb-ldap could use this flaw to cause named to exit unexpectedly with an assertion failure. Red Hat would like to thank Sigbjorn Lie of Atea Norway for reporting this issue. All bind-dyndb-ldap users should upgrade to this updated package, which contains a backported patch to correct this issue. For the update to take effect, the named service must be restarted. 5.14.2. RHBA-2012:0837 - bind-dyndb-ldap bug fix and enhancement update An updated bind-dyndb-ldap package which provides a number of bug fixes and enhancements is now available for Red Hat Enterprise Linux 6. The dynamic LDAP back end is a plug-in for BIND that provides back-end capabilities to LDAP databases. It features support for dynamic updates and internal caching that help to reduce the load on LDAP servers. Note The bind-dyndb-ldap package has been upgraded to upstream version 1.1.0b2 , which provides a number of bug fixes and enhancements over the version (BZ# 767486 ). Bug Fixes BZ# 751776 The bind-dyndb-ldap plug-in refused to load an entire zone when it contained an invalid Resource Record ( RR ) with the same Fully Qualified Domain Name ( FQDN ) as the zone name (for example an MX record). With this update, the code for parsing Resource Records has been improved. If an invalid RR is encountered, an error message " Failed to parse RR entry " is logged and the zone continues to load successfully. BZ# 767489 When the first connection to an LDAP server failed, the bind-dyndb-ldap plug-in did not try to connect again. Consequently, users had to execute the "rndc reload" command to make the plug-in work. With this update, the plug-in periodically retries to connect to an LDAP server. As a result, user intervention is no longer required and the plug-in works as expected. BZ# 767492 When the zone_refresh period timed out and a zone was removed from the LDAP server, the plug-in continued to serve the removed zone. With this update, the plug-in no longer serves zones which have been deleted from LDAP when the zone_refresh parameter is set. BZ# 789356 When the named daemon received the rndc reload command or a SIGHUP signal and the plug-in failed to connect to an LDAP server, the plug-in caused named to terminate unexpectedly when it received a query which belonged to a zone previously handled by the plug-in. This has been fixed, the plug-in no longer serves its zones when connection to LDAP fails during reload and no longer crashes in the scenario described. 
BZ# 796206 The plug-in terminated unexpectedly when named lost connection to an LDAP server for some time, then reconnected successfully, and some zones previously present had been removed from the LDAP server. The bug has been fixed and the plug-in no longer crashes in the scenario described. BZ# 805871 Certain string lengths were incorrectly set in the plug-in. Consequently, the Start of Authority ( SOA ) serial number and expiry time were incorrectly set for the forward zone during ipa-server installation. With this update, the code has been improved and the SOA serial number and expiry time are set as expected. BZ# 811074 When a Domain Name System ( DNS ) zone was managed by a bind-dyndb-ldap plugin and a sub-domain was delegated to another DNS server, the plug-in did not put A or AAAA glue records in the " additional section " of a DNS answer. Consequently, the delegated sub-domain was not accessible by other DNS servers. With this update, the plug-in has been fixed and now returns A or AAAA glue records of a delegated sub-domain in the " additional section " . As a result, delegated zones are correctly resolvable in the scenario described. BZ# 818933 Previously, the bind-dyndb-ldap plug-in did not escape non-ASCII characters in incoming DNS queries correctly. Consequently, the plug-in failed to send answers for queries which contained non-ASCII characters such as " , " . The plug-in has been fixed and now correctly returns answers for queries with non-ASCII characters. Enhancements BZ# 733371 The bind-dyndb-ldap plug-in now supports two new attributes, idnsAllowQuery and idnsAllowTransfer , which can be used to set ACLs for queries or transfers. Refer to /usr/share/doc/bind-dyndb-ldap/README for information on the attributes. BZ# 754433 The plug-in now supports the new zone attributes idnsForwarders and idnsForwardPolicy which can be used to configure forwarding. Refer to /usr/share/doc/bind-dyndb-ldap/README for a detailed description. BZ# 766233 The plug-in now supports zone transfers. BZ# 767494 The plug-in has a new option called sync_ptr that can be used to keep A and AAAA records and their PTR records synchronized. Refer to /usr/share/doc/bind-dyndb-ldap/README for a detailed description. BZ# 795406 It was not possible to store configuration for the plug-in in LDAP and configuration was only taken from the named.conf file. With this update, configuration information can be obtained from idnsConfigObject in LDAP. Note that options set in named.conf have lower priority than options set in LDAP. The priority will change in future updates. Refer to the README file for more details. Users of bind-dyndb-ldap package should upgrade to this updated package, which fixes these bugs and adds these enhancements.
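On Red Hat Enterprise Linux 6, the updated package is applied with yum and, as noted above, the named service must be restarted for the changes to take effect. A minimal sketch, assuming the system is subscribed to the appropriate update channel:
yum update bind-dyndb-ldap
service named restart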
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/bind-dyndb-ldap
18.12.5.2. DHCP Snooping
18.12.5.2. DHCP Snooping CTRL_IP_LEARNING= dhcp (DHCP snooping) provides additional anti-spoofing security, especially when combined with a filter allowing only trusted DHCP servers to assign IP addresses. To enable this, set the variable DHCPSERVER to the IP address of a valid DHCP server and provide filters that use this variable to filter incoming DHCP responses. When DHCP snooping is enabled and the DHCP lease expires, the guest virtual machine will no longer be able to use the IP address until it acquires a new, valid lease from a DHCP server. If the guest virtual machine is migrated, it must get a new valid DHCP lease to use an IP address (for example, by bringing the VM interface down and up again). Note Automatic DHCP detection listens to the DHCP traffic the guest virtual machine exchanges with the DHCP server of the infrastructure. To avoid denial-of-service attacks on libvirt, the evaluation of those packets is rate-limited, meaning that a guest virtual machine sending an excessive number of DHCP packets per second on an interface will not have all of those packets evaluated and thus filters may not get adapted. Normal DHCP client behavior is assumed to send a low number of DHCP packets per second. Further, it is important to setup appropriate filters on all guest virtual machines in the infrastructure to avoid them being able to send DHCP packets. Therefore guest virtual machines must either be prevented from sending UDP and TCP traffic from port 67 to port 68 or the DHCPSERVER variable should be used on all guest virtual machines to restrict DHCP server messages to only be allowed to originate from trusted DHCP servers. At the same time anti-spoofing prevention must be enabled on all guest virtual machines in the subnet. Example 18.6. Activating IPs for DHCP snooping The following XML provides an example for the activation of IP address learning using the DHCP snooping method:
[ "<interface type='bridge'> <source bridge='virbr0'/> <filterref filter='clean-traffic'> <parameter name='CTRL_IP_LEARNING' value='dhcp'/> </filterref> </interface>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-dhcp-snooping
8.9. Verifying a Configuration
8.9. Verifying a Configuration Once you have created your cluster configuration file, verify that it is running correctly by performing the following steps: At each node, restart the cluster software. That action ensures that any configuration additions that are checked only at startup time are included in the running configuration. You can restart the cluster software by running service cman restart . For example: Run service clvmd start , if CLVM is being used to create clustered volumes. For example: Run service gfs2 start , if you are using Red Hat GFS2. For example: Run service rgmanager start , if you using high-availability (HA) services. For example: At any cluster node, run cman_tool nodes to verify that the nodes are functioning as members in the cluster (signified as "M" in the status column, "Sts"). For example: At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays status of the cluster nodes. For example: If the cluster is running as expected, you are done with creating a configuration file. You can manage the cluster with command-line tools described in Chapter 9, Managing Red Hat High Availability Add-On With Command Line Tools .
[ "service cman restart Stopping cluster: Leaving fence domain... [ OK ] Stopping gfs_controld... [ OK ] Stopping dlm_controld... [ OK ] Stopping fenced... [ OK ] Stopping cman... [ OK ] Waiting for corosync to shutdown: [ OK ] Unloading kernel modules... [ OK ] Unmounting configfs... [ OK ] Starting cluster: Checking Network Manager... [ OK ] Global setup... [ OK ] Loading kernel modules... [ OK ] Mounting configfs... [ OK ] Starting cman... [ OK ] Waiting for quorum... [ OK ] Starting fenced... [ OK ] Starting dlm_controld... [ OK ] Starting gfs_controld... [ OK ] Unfencing self... [ OK ] Joining fence domain... [ OK ]", "service clvmd start Activating VGs: [ OK ]", "service gfs2 start Mounting GFS2 filesystem (/mnt/gfsA): [ OK ] Mounting GFS2 filesystem (/mnt/gfsB): [ OK ]", "service rgmanager start Starting Cluster Service Manager: [ OK ]", "cman_tool nodes Node Sts Inc Joined Name 1 M 548 2010-09-28 10:52:21 node-01.example.com 2 M 548 2010-09-28 10:52:21 node-02.example.com 3 M 544 2010-09-28 10:52:21 node-03.example.com", "clustat Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ node-03.example.com 3 Online, rgmanager node-02.example.com 2 Online, rgmanager node-01.example.com 1 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- service:example_apache node-01.example.com started service:example_apache2 (none) disabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-verify-config-cli-CA
Chapter 5. Compliance Operator
Chapter 5. Compliance Operator 5.1. Compliance Operator overview The OpenShift Container Platform Compliance Operator assists users by automating the inspection of numerous technical implementations and compares those against certain aspects of industry standards, benchmarks, and baselines; the Compliance Operator is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. The Compliance Operator makes recommendations based on generally available information and practices regarding such standards and may assist with remediations, but actual compliance is your responsibility. You are required to work with an authorized auditor to achieve compliance with a standard. For the latest updates, see the Compliance Operator release notes . For more information on compliance support for all Red Hat products, see Product Compliance . Compliance Operator concepts Understanding the Compliance Operator Understanding the Custom Resource Definitions Compliance Operator management Installing the Compliance Operator Updating the Compliance Operator Managing the Compliance Operator Uninstalling the Compliance Operator Compliance Operator scan management Supported compliance profiles Compliance Operator scans Tailoring the Compliance Operator Retrieving Compliance Operator raw results Managing Compliance Operator remediation Performing advanced Compliance Operator tasks Troubleshooting the Compliance Operator Using the oc-compliance plugin 5.2. Compliance Operator release notes The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. These release notes track the development of the Compliance Operator in the OpenShift Container Platform. For an overview of the Compliance Operator, see Understanding the Compliance Operator . To access the latest release, see Updating the Compliance Operator . For more information on compliance support for all Red Hat products, see Product Compliance . 5.2.1. OpenShift Compliance Operator 1.6.2 The following advisory is available for the OpenShift Compliance Operator 1.6.2: RHBA-2025:2659 - OpenShift Compliance Operator 1.6.2 update CVE-2024-45338 is resolved in the Compliance Operator 1.6.2 release. ( CVE-2024-45338 ) 5.2.2. OpenShift Compliance Operator 1.6.1 The following advisory is available for the OpenShift Compliance Operator 1.6.1: RHBA-2024:10367 - OpenShift Compliance Operator 1.6.1 update This update includes upgraded dependencies in underlying base images. 5.2.3. OpenShift Compliance Operator 1.6.0 The following advisory is available for the OpenShift Compliance Operator 1.6.0: RHBA-2024:6761 - OpenShift Compliance Operator 1.6.0 bug fix and enhancement update 5.2.3.1. New features and enhancements The Compliance Operator now contains supported profiles for Payment Card Industry Data Security Standard (PCI-DSS) version 4. For more information, see Supported compliance profiles . The Compliance Operator now contains supported profiles for Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) V2R1. For more information, see Supported compliance profiles . A must-gather extension is now available for the Compliance Operator installed on x86 , ppc64le , and s390x architectures. 
The must-gather tool provides crucial configuration details to Red Hat Customer Support and engineering. For more information, see Using the must-gather tool for the Compliance Operator . 5.2.3.2. Bug fixes Before this release, a misleading description in the ocp4-route-ip-whitelist rule resulted in misunderstanding, causing potential for misconfigurations. With this update, the rule is now more clearly defined. ( CMP-2485 ) Previously, the reporting of all of the ComplianceCheckResults for a DONE status ComplianceScan was incomplete. With this update, annotation has been added to report the number of total ComplianceCheckResults for a ComplianceScan with a DONE status. ( CMP-2615 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule description contained ambiguous guidelines, leading to confusion among users. With this update, the rule description and actionable steps are clarified. ( OCPBUGS-17828 ) Before this update, sysctl configurations caused certain auto remediations for RHCOS4 rules to fail scans in affected clusters. With this update, the correct sysctl settings are applied and RHCOS4 rules for FedRAMP High profiles pass scans correctly. ( OCPBUGS-19690 ) Before this update, an issue with a jq filter caused errors with the rhacs-operator-controller-manager deployment during compliance checks. With this update, the jq filter expression is updated and the rhacs-operator-controller-manager deployment is exempt from compliance checks pertaining to container resource limits, eliminating false positive results. ( OCPBUGS-19690 ) Before this update, rhcos4-high and rhcos4-moderate profiles checked values of an incorrectly titled configuration file. As a result, some scan checks could fail. With this update, the rhcos4 profiles now check the correct configuration file and scans pass correctly. ( OCPBUGS-31674 ) Previously, the accessokenInactivityTimeoutSeconds variable used in the oauthclient-inactivity-timeout rule was immutable, leading to a FAIL status when performing DISA STIG scans. With this update, proper enforcement of the accessTokenInactivityTimeoutSeconds variable operates correctly and a PASS status is now possible. ( OCPBUGS-32551 ) Before this update, some annotations for rules were not updated, displaying the incorrect control standards. With this update, annotations for rules are updated correctly, ensuring the correct control standards are displayed. ( OCPBUGS-34982 ) Previously, when upgrading to Compliance Operator 1.5.1, an incorrectly referenced secret in a ServiceMonitor configuration caused integration issues with the Prometheus Operator. With this update, the Compliance Operator will accurately reference the secret containing the token for ServiceMonitor metrics. ( OCPBUGS-39417 ) 5.2.4. OpenShift Compliance Operator 1.5.1 The following advisory is available for the OpenShift Compliance Operator 1.5.1: RHBA-2024:5956 - OpenShift Compliance Operator 1.5.1 bug fix and enhancement update 5.2.5. OpenShift Compliance Operator 1.5.0 The following advisory is available for the OpenShift Compliance Operator 1.5.0: RHBA-2024:3533 - OpenShift Compliance Operator 1.5.0 bug fix and enhancement update 5.2.5.1. New features and enhancements With this update, the Compliance Operator provides a unique profile ID for easier programmatic use. ( CMP-2450 ) With this release, the Compliance Operator is now tested and supported on the ROSA HCP environment. The Compliance Operator loads only Node profiles when running on ROSA HCP. 
This is because a Red Hat managed platform restricts access to the control plane, which makes Platform profiles irrelevant to the operator's function.( CMP-2581 ) 5.2.5.2. Bug fixes CVE-2024-2961 is resolved in the Compliance Operator 1.5.0 release. ( CVE-2024-2961 ) Previously, for ROSA HCP systems, profile listings were incorrect. This update allows the Compliance Operator to provide correct profile output. ( OCPBUGS-34535 ) With this release, namespaces can be excluded from the ocp4-configure-network-policies-namespaces check by setting the ocp4-var-network-policies-namespaces-exempt-regex variable in the tailored profile. ( CMP-2543 ) 5.2.6. OpenShift Compliance Operator 1.4.1 The following advisory is available for the OpenShift Compliance Operator 1.4.1: RHBA-2024:1830 - OpenShift Compliance Operator bug fix and enhancement update 5.2.6.1. New features and enhancements As of this release, the Compliance Operator now provides the CIS OpenShift 1.5.0 profile rules. ( CMP-2447 ) With this update, the Compliance Operator now provides OCP4 STIG ID and SRG with the profile rules. ( CMP-2401 ) With this update, obsolete rules being applied to s390x have been removed. ( CMP-2471 ) 5.2.6.2. Bug fixes Previously, for Red Hat Enterprise Linux CoreOS (RHCOS) systems using Red Hat Enterprise Linux (RHEL) 9, application of the ocp4-kubelet-enable-protect-kernel-sysctl-file-exist rule failed. This update replaces the rule with ocp4-kubelet-enable-protect-kernel-sysctl . Now, after auto remediation is applied, RHEL 9-based RHCOS systems will show PASS upon the application of this rule. ( OCPBUGS-13589 ) Previously, after applying compliance remediations using profile rhcos4-e8 , the nodes were no longer accessible using SSH to the core user account. With this update, nodes remain accessible through SSH using the `sshkey1 option. ( OCPBUGS-18331 ) Previously, the STIG profile was missing rules from CaC that fulfill requirements on the published STIG for OpenShift Container Platform. With this update, upon remediation, the cluster satisfies STIG requirements that can be remediated using Compliance Operator. ( OCPBUGS-26193 ) Previously, creating a ScanSettingBinding object with profiles of different types for multiple products bypassed a restriction against multiple products types in a binding. With this update, the product validation now allows multiple products regardless of the of profile types in the ScanSettingBinding object. ( OCPBUGS-26229 ) Previously, running the rhcos4-service-debug-shell-disabled rule showed as FAIL even after auto-remediation was applied. With this update, running the rhcos4-service-debug-shell-disabled rule now shows PASS after auto-remediation is applied. ( OCPBUGS-28242 ) With this update, instructions for the use of the rhcos4-banner-etc-issue rule are enhanced to provide more detail. ( OCPBUGS-28797 ) Previously the api_server_api_priority_flowschema_catch_all rule provided FAIL status on OpenShift Container Platform 4.16 clusters. With this update, the api_server_api_priority_flowschema_catch_all rule provides PASS status on OpenShift Container Platform 4.16 clusters. ( OCPBUGS-28918 ) Previously, when a profile was removed from a completed scan shown in a ScanSettingBinding (SSB) object, the Compliance Operator did not remove the old scan. Afterward, when launching a new SSB using the deleted profile, the Compliance Operator failed to update the result. With this release of the Compliance Operator, the new SSB now shows the new compliance check result. 
( OCPBUGS-29272 ) Previously, on ppc64le architecture, the metrics service was not created. With this update, when deploying the Compliance Operator v1.4.1 on ppc64le architecture, the metrics service is now created correctly. ( OCPBUGS-32797 ) Previously, on a HyperShift hosted cluster, a scan with the ocp4-pci-dss profile would run into an unrecoverable error due to a filter cannot iterate issue. With this release, the scan for the ocp4-pci-dss profile reaches the DONE status and returns either a Compliance or Non-Compliance test result. ( OCPBUGS-33067 ) 5.2.7. OpenShift Compliance Operator 1.4.0 The following advisory is available for the OpenShift Compliance Operator 1.4.0: RHBA-2023:7658 - OpenShift Compliance Operator bug fix and enhancement update 5.2.7.1. New features and enhancements With this update, clusters that use custom node pools outside the default worker and master node pools no longer need to supply additional variables to ensure the Compliance Operator aggregates the configuration file for that node pool. Users can now pause scan schedules by setting the ScanSetting.suspend attribute to True . This allows users to suspend a scan schedule and reactivate it without the need to delete and re-create the ScanSettingBinding . This simplifies pausing scan schedules during maintenance periods. ( CMP-2123 ) Compliance Operator now supports an optional version attribute on Profile custom resources. ( CMP-2125 ) Compliance Operator now supports profile names in ComplianceRules . ( CMP-2126 ) Compliance Operator compatibility with the improved cronjob API is available in this release. ( CMP-2310 ) 5.2.7.2. Bug fixes Previously, on a cluster with Windows nodes, some rules would FAIL after auto remediation was applied because the Windows nodes were not skipped by the compliance scan. With this release, Windows nodes are correctly skipped when scanning. ( OCPBUGS-7355 ) With this update, rprivate default mount propagation is now handled correctly for root volume mounts of pods that rely on multipathing. ( OCPBUGS-17494 ) Previously, the Compliance Operator would generate a remediation for coreos_vsyscall_kernel_argument without reconciling the rule even while applying the remediation. With release 1.4.0, the coreos_vsyscall_kernel_argument rule properly evaluates kernel arguments and generates an appropriate remediation. ( OCPBUGS-8041 ) Before this update, the rhcos4-audit-rules-login-events-faillock rule would fail even after auto-remediation had been applied. With this update, rhcos4-audit-rules-login-events-faillock failure locks are now applied correctly after auto-remediation. ( OCPBUGS-24594 ) Previously, upgrades from Compliance Operator 1.3.1 to Compliance Operator 1.4.0 would cause OVS rules scan results to go from PASS to NOT-APPLICABLE . With this update, OVS rules scan results now show PASS . ( OCPBUGS-25323 ) 5.2.8. OpenShift Compliance Operator 1.3.1 The following advisory is available for the OpenShift Compliance Operator 1.3.1: RHBA-2023:5669 - OpenShift Compliance Operator bug fix and enhancement update This update addresses a CVE in an underlying dependency. 5.2.8.1. New features and enhancements You can install and use the Compliance Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode .
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 5.2.8.2. Known issue On a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes are not skipped by the compliance scan. This differs from the expected results because the Windows nodes must be skipped when scanning. ( OCPBUGS-7355 ) 5.2.9. OpenShift Compliance Operator 1.3.0 The following advisory is available for the OpenShift Compliance Operator 1.3.0: RHBA-2023:5102 - OpenShift Compliance Operator enhancement update 5.2.9.1. New features and enhancements The Defense Information Systems Agency Security Technical Implementation Guide (DISA-STIG) for OpenShift Container Platform is now available from Compliance Operator 1.3.0. See Supported compliance profiles for additional information. Compliance Operator 1.3.0 now supports IBM Power(R) and IBM Z(R) for NIST 800-53 Moderate-Impact Baseline for OpenShift Container Platform platform and node profiles. 5.2.10. OpenShift Compliance Operator 1.2.0 The following advisory is available for the OpenShift Compliance Operator 1.2.0: RHBA-2023:4245 - OpenShift Compliance Operator enhancement update 5.2.10.1. New features and enhancements The CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile is now available for platform and node applications. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. Important Upgrading to Compliance Operator 1.2.0 will overwrite the CIS OpenShift Container Platform 4 Benchmark 1.1.0 profiles. If your OpenShift Container Platform environment contains existing cis and cis-node remediations, there might be some differences in scan results after upgrading to Compliance Operator 1.2.0. Additional clarity for auditing security context constraints (SCCs) is now available for the scc-limit-container-allowed-capabilities rule. 5.2.11. OpenShift Compliance Operator 1.1.0 The following advisory is available for the OpenShift Compliance Operator 1.1.0: RHBA-2023:3630 - OpenShift Compliance Operator bug fix and enhancement update 5.2.11.1. New features and enhancements A start and end timestamp is now available in the ComplianceScan custom resource definition (CRD) status. The Compliance Operator can now be deployed on hosted control planes using the OperatorHub by creating a Subscription file. For more information, see Installing the Compliance Operator on hosted control planes . 5.2.11.2. Bug fixes Before this update, some Compliance Operator rule instructions were not present. After this update, instructions are improved for the following rules: classification_banner oauth_login_template_set oauth_logout_url_set oauth_provider_selection_set ocp_allowed_registries ocp_allowed_registries_for_import ( OCPBUGS-10473 ) Before this update, check accuracy and rule instructions were unclear. 
After this update, the check accuracy and instructions are improved for the following sysctl rules: kubelet-enable-protect-kernel-sysctl kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxbytes kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxkeys kubelet-enable-protect-kernel-sysctl-kernel-panic kubelet-enable-protect-kernel-sysctl-kernel-panic-on-oops kubelet-enable-protect-kernel-sysctl-vm-overcommit-memory kubelet-enable-protect-kernel-sysctl-vm-panic-on-oom ( OCPBUGS-11334 ) Before this update, the ocp4-alert-receiver-configured rule did not include instructions. With this update, the ocp4-alert-receiver-configured rule now includes improved instructions. ( OCPBUGS-7307 ) Before this update, the rhcos4-sshd-set-loglevel-info rule would fail for the rhcos4-e8 profile. With this update, the remediation for the sshd-set-loglevel-info rule was updated to apply the correct configuration changes, allowing subsequent scans to pass after the remediation is applied. ( OCPBUGS-7816 ) Before this update, a new installation of OpenShift Container Platform with the latest Compliance Operator install failed on the scheduler-no-bind-address rule. With this update, the scheduler-no-bind-address rule has been disabled on newer versions of OpenShift Container Platform since the parameter was removed. ( OCPBUGS-8347 ) 5.2.12. OpenShift Compliance Operator 1.0.0 The following advisory is available for the OpenShift Compliance Operator 1.0.0: RHBA-2023:1682 - OpenShift Compliance Operator bug fix update 5.2.12.1. New features and enhancements The Compliance Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the Compliance Operator . 5.2.12.2. Bug fixes Before this update, the compliance_operator_compliance_scan_error_total metric had an ERROR label with a different value for each error message. With this update, the compliance_operator_compliance_scan_error_total metric does not increase in values. ( OCPBUGS-1803 ) Before this update, the ocp4-api-server-audit-log-maxsize rule would result in a FAIL state. With this update, the error message has been removed from the metric, decreasing the cardinality of the metric in line with best practices. ( OCPBUGS-7520 ) Before this update, the rhcos4-enable-fips-mode rule description was misleading that FIPS could be enabled after installation. With this update, the rhcos4-enable-fips-mode rule description clarifies that FIPS must be enabled at install time. ( OCPBUGS-8358 ) 5.2.13. OpenShift Compliance Operator 0.1.61 The following advisory is available for the OpenShift Compliance Operator 0.1.61: RHBA-2023:0557 - OpenShift Compliance Operator bug fix update 5.2.13.1. New features and enhancements The Compliance Operator now supports timeout configuration for Scanner Pods. The timeout is specified in the ScanSetting object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information. 5.2.13.2. Bug fixes Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates if a variable needs to be supplied using a TailoredProfile for a remediation. 
( OCPBUGS-3864 ) Before this update, the instructions for ocp4-kubelet-configure-tls-cipher-suites were incomplete, requiring users to refine the query manually. With this update, the query provided in ocp4-kubelet-configure-tls-cipher-suites returns the actual results to perform the audit steps. ( OCPBUGS-3017 ) Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. ( OCPBUGS-4445 ) Before this update, ComplianceCheckResult objects did not have correct descriptions. With this update, the Compliance Operator sources the ComplianceCheckResult information from the rule description. ( OCPBUGS-4615 ) Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. ( OCPBUGS-4621 ) Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. ( OCPBUGS-4338 ) Before this update, a regression caused a ScanSettingBinding that was using a TailoredProfile with a non-default MachineConfigPool to be marked as Failed . With this update, functionality is restored and a custom ScanSettingBinding using a TailoredProfile performs correctly. ( OCPBUGS-6827 ) Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values ( OCPBUGS-6708 ): ocp4-cis-kubelet-enable-streaming-connections ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available Before this update, the selinux_confinement_of_daemons rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the selinux_confinement_of_daemons rule is disabled. ( OCPBUGS-6968 ) 5.2.14. OpenShift Compliance Operator 0.1.59 The following advisory is available for the OpenShift Compliance Operator 0.1.59: RHBA-2022:8538 - OpenShift Compliance Operator bug fix update 5.2.14.1. New features and enhancements The Compliance Operator now supports Payment Card Industry Data Security Standard (PCI-DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. 5.2.14.2. Bug fixes Previously, the Compliance Operator did not support the Payment Card Industry Data Security Standard (PCI DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on different architectures such as ppc64le . Now, the Compliance Operator supports ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. ( OCPBUGS-3252 ) Previously, after the recent update to version 0.1.57, the rerunner service account (SA) was no longer owned by the cluster service version (CSV), which caused the SA to be removed during the Operator upgrade.
Now, the CSV owns the rerunner SA in 0.1.59, and upgrades from any version will not result in a missing SA. ( OCPBUGS-3452 ) 5.2.15. OpenShift Compliance Operator 0.1.57 The following advisory is available for the OpenShift Compliance Operator 0.1.57: RHBA-2022:6657 - OpenShift Compliance Operator bug fix update 5.2.15.1. New features and enhancements KubeletConfig checks changed from Node to Platform type. KubeletConfig checks the default configuration of the KubeletConfig . The configuration files are aggregated from all nodes into a single location per node pool. See Evaluating KubeletConfig rules against default configuration values . The ScanSetting Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the scanLimits attribute. For more information, see Increasing Compliance Operator resource limits . A PriorityClass object can now be set through ScanSetting . This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see Setting PriorityClass for ScanSetting scans . 5.2.15.2. Bug fixes Previously, the Compliance Operator hard-coded notifications to the default openshift-compliance namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default openshift-compliance namespaces. ( BZ#2060726 ) Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. ( BZ#2075041 ) Previously, the Compliance Operator reported the ocp4-kubelet-configure-event-creation rule in a FAIL state after applying an automatic remediation because the eventRecordQPS value was set higher than the default value. Now, the ocp4-kubelet-configure-event-creation rule remediation sets the default value, and the rule applies correctly. ( BZ#2082416 ) The ocp4-configure-network-policies rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase applicability of the ocp4-configure-network-policies rule for clusters using Calico CNIs. ( BZ#2091794 ) Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the debug=true option in the scan settings. This caused pods to be left on the cluster even after deleting the ScanSettingBinding . Now, pods are always deleted when a ScanSettingBinding is deleted.( BZ#2092913 ) Previously, the Compliance Operator used an older version of the operator-sdk command that caused alerts about deprecated functionality. Now, an updated version of the operator-sdk command is included and there are no more alerts for deprecated functionality. ( BZ#2098581 ) Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. ( BZ#2102511 ) Previously, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation did not properly describe success criteria. As a result, the requirements for RotateKubeletClientCertificate were unclear. 
Now, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation reports accurately regardless of the configuration present in the kubelet configuration file. ( BZ#2105153 ) Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. ( BZ#2105878 ) Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the api-check-pods processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. ( BZ#2117268 ) Previously, rules evaluating the modprobe configuration would fail even after applying remediations due to a mismatch in values for the modprobe configuration. Now, the same values are used for the modprobe configuration in checks and remediations, ensuring consistent results. ( BZ#2117747 ) 5.2.15.3. Deprecations Specifying Install into all namespaces in the cluster or setting the WATCH_NAMESPACES environment variable to "" no longer affects all namespaces. Any API resources installed in namespaces not specified at the time of Compliance Operator installation are no longer operational. API resources might require creation in the selected namespace, or the openshift-compliance namespace by default. This change improves the Compliance Operator's memory usage. 5.2.16. OpenShift Compliance Operator 0.1.53 The following advisory is available for the OpenShift Compliance Operator 0.1.53: RHBA-2022:5537 - OpenShift Compliance Operator bug fix update 5.2.16.1. Bug fixes Previously, the ocp4-kubelet-enable-streaming-connections rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting streamingConnectionIdleTimeout . ( BZ#2069891 ) Previously, group ownership for /etc/openvswitch/conf.db was incorrect on IBM Z(R) architectures, resulting in ocp4-cis-node-worker-file-groupowner-ovs-conf-db check failures. Now, the check is marked NOT-APPLICABLE on IBM Z(R) architecture systems. ( BZ#2072597 ) Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule reported in a FAIL state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is MANUAL , which is consistent with other checks that require human intervention. ( BZ#2077916 ) Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly: ocp4-cis-api-server-kubelet-client-cert ocp4-cis-api-server-kubelet-client-key ocp4-cis-kubelet-configure-tls-cert ocp4-cis-kubelet-configure-tls-key Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. ( BZ#2079813 ) Previously, the content_rule_oauth_or_oauthclient_inactivity_timeout rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the var_oauth_inactivity_timeout variable to set a valid timeout length.
( BZ#2081952 ) Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. ( BZ#2088202 ) Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied.( BZ#2094382 ) Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. ( BZ#2094854 ) 5.2.16.2. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.17. OpenShift Compliance Operator 0.1.52 The following advisory is available for the OpenShift Compliance Operator 0.1.52: RHBA-2022:4657 - OpenShift Compliance Operator bug fix update 5.2.17.1. New features and enhancements The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, See Supported compliance profiles . 5.2.17.2. Bug fixes Previously, the OpenScap container would crash due to a mount permission issue in a security environment where DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. ( BZ#2082151 ) Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL . Now, compliance rule ocp4-configure-network-policies is set to AUTOMATIC . ( BZ#2072431 ) Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. ( BZ#2075029 ) Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. ( BZ#2071854 ) Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. ( BZ#2082431 ) 5.2.17.3. Known issue When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods: USD oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis ( BZ#2092913 ) 5.2.18. OpenShift Compliance Operator 0.1.49 The following advisory is available for the OpenShift Compliance Operator 0.1.49: RHBA-2022:1148 - OpenShift Compliance Operator bug fix and enhancement update 5.2.18.1. 
New features and enhancements The Compliance Operator is now supported on the following architectures: IBM Power(R) IBM Z(R) IBM(R) LinuxONE 5.2.18.2. Bug fixes Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. ( BZ#1994609 ) Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, resulting in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. ( BZ#2002695 ) Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. ( BZ#2038909 ) Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. ( BZ#2049141 ) Previously, the ocp4-cluster-version-operator-verify-integrity always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of OpenShift Container Platform would be verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. ( BZ#2053602 ) Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule accurately passes with all admission controller plugins enabled. ( BZ#2058631 ) Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, scans schedule appropriately based on platform type and labels, and complete successfully. ( BZ#2056911 ) 5.2.19. OpenShift Compliance Operator 0.1.48 The following advisory is available for the OpenShift Compliance Operator 0.1.48: RHBA-2022:0416 - OpenShift Compliance Operator bug fix and enhancement update 5.2.19.1. Bug fixes Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None . This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform . ( BZ#2040282 ) Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state.
With this release, a KubeletConfig object is created by the remediation, regardless of whether there is a manually created MachineConfig object for KubeletConfig . As a result, KubeletConfig remediations now work as expected. ( BZ#2040401 ) 5.2.20. OpenShift Compliance Operator 0.1.47 The following advisory is available for the OpenShift Compliance Operator 0.1.47: RHBA-2022:0014 - OpenShift Compliance Operator bug fix and enhancement update 5.2.20.1. New features and enhancements The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS): ocp4-pci-dss ocp4-pci-dss-node Additional rules and remediations for FedRAMP moderate impact level are added to the OCP4-moderate, OCP4-moderate-node, and rhcos4-moderate profiles. Remediations for KubeletConfig are now available in node-level profiles. 5.2.20.2. Bug fixes Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6. If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules. Additionally, remediations are created only for rules that satisfy minimum version requirements. ( BZ#1965511 ) Previously, when rendering remediations, the Compliance Operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config , would not pass the regular expression check and therefore, were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. ( BZ#2033009 ) 5.2.21. OpenShift Compliance Operator 0.1.44 The following advisory is available for the OpenShift Compliance Operator 0.1.44: RHBA-2021:4530 - OpenShift Compliance Operator bug fix and enhancement update 5.2.21.1. New features and enhancements In this release, the strictNodeScan option is now added to the ComplianceScan , ComplianceSuite and ScanSetting CRs. This option defaults to true , which matches the previous behavior, where an error occurred if a scan was not able to be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint . This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments, as shown in the sketch that follows.
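A minimal sketch of this placement, assuming a hypothetical infra node label and taint; the rawResultStorage attribute names follow the ScanSetting example shown later in this document:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: rs-on-infra                        # hypothetical ScanSetting name
  namespace: openshift-compliance
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/infra: ""      # hypothetical label for the nodes that run the ResultServer pod
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra     # hypothetical taint carried by those nodes
    operator: Exists
roles:
- worker
scanTolerations:
- operator: Exists
schedule: 0 1 * * *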
The Compliance Operator can now remediate KubeletConfig objects. A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster and objects that cannot be fetched. Rule objects now contain two new attributes, checkType and description . These attributes allow you to determine if the rule pertains to a node check or platform check, and also allow you to review what the rule does. This enhancement removes the requirement that you have to extend an existing profile to create a tailored profile. This means the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR. In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods only tolerated the node-role.kubernetes.io/master taint , meaning that they ran either on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints. In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles: ocp4-nerc-cip ocp4-nerc-cip-node rhcos4-nerc-cip In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile. 5.2.21.2. Templating and variable use In this release, the remediation template now allows multi-value variables. With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as timeouts, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used. 5.2.21.3. Bug fixes Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash. Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview . If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes. The RBAC Role and Role Binding used for Prometheus metrics are changed to 'ClusterRole' and 'ClusterRoleBinding' to ensure that monitoring works without customization. Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes.
( BZ#1988259 ) Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs. Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value. 5.2.22. Release Notes for Compliance Operator 0.1.39 The following advisory is available for the OpenShift Compliance Operator 0.1.39: RHBA-2021:3214 - OpenShift Compliance Operator bug fix and enhancement update 5.2.22.1. New features and enhancements Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that is provided with PCI DSS profiles. Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile. 5.2.23. Additional resources Understanding the Compliance Operator 5.3. Compliance Operator support 5.3.1. Compliance Operator lifecycle The Compliance Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously of OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal. 5.3.2. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 5.3.3. Using the must-gather tool for the Compliance Operator Starting in Compliance Operator v1.6.0, you can collect data about the Compliance Operator resources by running the must-gather command with the Compliance Operator image. Note Consider using the must-gather tool when opening support cases or filing bug reports, as it provides additional details about the Operator configuration and logs. Procedure Run the following command to collect data about the Compliance Operator: USD oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name=="must-gather")].image}') 5.3.4. Additional resources About the must-gather tool Product Compliance 5.4. Compliance Operator concepts 5.4.1. Understanding the Compliance Operator The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. 
The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. Important The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only. 5.4.1.1. Compliance Operator profiles There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules. View the available profiles: USD oc get profile.compliance -n openshift-compliance Example output NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1 These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile's name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product. Run the following command to view the details of the rhcos4-e8 profile: USD oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8 Example 5.1. Example output apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. 
A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: "2022-10-19T12:06:49Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "43699" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight Run the following command to view the details of the rhcos4-audit-rules-login-events rule: USD oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events Example 5.2. Example output apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. 
If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: "2022-10-19T12:07:08Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "44819" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 5.4.1.1.1. Compliance Operator profile types There are two types of compliance profiles available: Platform and Node. Platform Platform scans target your OpenShift Container Platform cluster. Node Node scans target the nodes of the cluster. Important For compliance profiles that have Node and Platform applications, such as pci-dss compliance profiles, you must run both in your OpenShift Container Platform environment. 5.4.1.2. Additional resources Supported compliance profiles 5.4.2. Understanding the Custom Resource Definitions The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found. 5.4.2.1. CRDs workflow The CRD provides you the following workflow to complete the compliance scans: Define your compliance scan requirements Configure the compliance scan settings Process compliance requirements with compliance scans settings Monitor the compliance scans Check the compliance scan results 5.4.2.2. Defining the compliance scan requirements By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. 
You can also customize the default profiles by using a TailoredProfile object. 5.4.2.2.1. ProfileBundle object When you install the Compliance Operator, it includes ready-to-run ProfileBundle objects. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object. Example ProfileBundle object apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1 1 Indicates whether the Compliance Operator was able to parse the content files. Note When the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred. Troubleshooting When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays the PENDING state. As a workaround, you can move to a different image than the previous one. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state. 5.4.2.2.2. Profile object The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailoredProfile object. Note You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects. Example Profile object apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: "<version number>" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile> 1 Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan. 2 Specify either a Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 3 Specify the list of rules for the profile. Each rule corresponds to a single check. 5.4.2.2.3. Rule object The Rule objects that form the profiles are also exposed as objects. Use the Rule object to define your compliance check requirements and specify how it could be fixed.
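For example, because each Rule object carries a compliance.openshift.io/profile-bundle label that identifies its bundle, you can list all of the rules that belong to a given bundle. A quick listing, assuming the default openshift-compliance namespace and the ocp4 bundle: USD oc get rules -n openshift-compliance -l compliance.openshift.io/profile-bundle=ocp4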
Example Rule object apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule> 1 Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check. 2 Specify the XCCDF name of the rule, which is parsed directly from the datastream. 3 Specify the severity of the rule when it fails. Note The Rule object gets an appropriate label for an easy identification of the associated ProfileBundle object. The ProfileBundle also gets specified in the OwnerReferences of this object. 5.4.2.2.4. TailoredProfile object Use the TailoredProfile object to modify the default Profile object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile object creates a ConfigMap , which can be referenced by a ComplianceScan object. Tip You can use the TailoredProfile object by referencing it in a ScanSettingBinding object. For more information about ScanSettingBinding , see ScanSettingBinding object. Example TailoredProfile object apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4 1 This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list. 2 Specifies the XCCDF name of the tailored profile. 3 Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan . 4 Shows the state of the object such as READY , PENDING , and FAILURE . If the state of the object is ERROR , then the attribute status.errorMessage provides the reason for the failure. With the TailoredProfile object, it is possible to create a new Profile object using the TailoredProfile construct. To create a new Profile , set the following configuration parameters : an appropriate title extends value must be empty scan type annotation on the TailoredProfile object: compliance.openshift.io/product-type: Platform/Node Note If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. 
Adding the -node suffix to the name of the TailoredProfile object results in node scan type. 5.4.2.3. Configuring the compliance scan settings After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, occurrence of the scan, and location of the scan. To do so, Compliance Operator provides you with a ScanSetting object. 5.4.2.3.1. ScanSetting object Use the ScanSetting object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting objects: default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically. default-auto-apply - it runs a scan every day at 1AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both autoApplyRemediations and autoUpdateRemediations are set to true. Example ScanSetting object apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: "2022-10-18T20:21:00Z" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: "38840" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: "" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates. 3 Specify the number of stored scans in the raw result format. The default value is 3 . As the older results get rotated, the administrator must store the results elsewhere before the rotation happens. 4 Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi 6 Specify how often the scan should be run in cron format. Note To disable the rotation policy, set the value to 0 . 5 Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool . 5.4.2.4. Processing the compliance scan requirements with compliance scans settings When you have defined the compliance scan requirements and configured the settings to run the scans, then the Compliance Operator processes it using the ScanSettingBinding object. 5.4.2.4.1. ScanSettingBinding object Use the ScanSettingBinding object to specify your compliance requirements with reference to the Profile or TailoredProfile object. It is then linked to a ScanSetting object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite object based on the ScanSetting and ScanSettingBinding objects. 
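Before creating a binding, you can check which ScanSetting objects are available to reference. A quick check, assuming the default openshift-compliance namespace and the default ScanSetting objects described above: USD oc get scansettings -n openshift-compliance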
Example ScanSettingBinding object apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 1 Specify the details of the Profile or TailoredProfile object to scan your environment. 2 Specify the operational constraints, such as schedule and storage size. The creation of ScanSetting and ScanSettingBinding objects results in the compliance suite. To get the list of compliance suites, run the following command: USD oc get compliancesuites Important If you delete the ScanSettingBinding object, then the compliance suite is also deleted. 5.4.2.5. Tracking the compliance scans After the compliance suite is created, you can monitor the status of the deployed scans using the ComplianceSuite object. 5.4.2.5.1. ComplianceSuite object The ComplianceSuite object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result. For Node type scans, you should map the scan to the MachineConfigPool , since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool. Example ComplianceSuite object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name_of_the_suite> spec: autoApplyRemediations: false 1 schedule: "0 1 * * *" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" nodeSelector: node-role.kubernetes.io/worker: "" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT 1 Set to true to enable auto remediations. Set to false to disable auto remediations. 2 Specify how often the scan should be run in cron format. 3 Specify a list of scan specifications to run in the cluster. 4 Indicates the progress of the scans. 5 Indicates the overall verdict of the suite. In the background, the suite creates ComplianceScan objects based on the scans parameter. You can programmatically fetch the ComplianceSuite events. To get the events for the suite, run the following command: USD oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite> Important You might introduce errors if you manually define the ComplianceSuite object, because it contains XCCDF attributes. 5.4.2.5.2. Advanced ComplianceScan Object The Compliance Operator includes options for advanced users who need to debug or integrate with existing tooling. It is recommended that you do not create a ComplianceScan object directly; instead, manage it through a ComplianceSuite object. Example Advanced ComplianceScan object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name_of_the_compliance_scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
3 rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4 nodeSelector: 5 node-role.kubernetes.io/worker: "" status: phase: DONE 6 result: NON-COMPLIANT 7 1 Specify either Node or Platform . Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform. 2 Specify the XCCDF identifier of the profile that you want to run. 3 Specify the container image that encapsulates the profile files. 4 It is optional. Specify the scan to run a single rule. This rule has to be identified with the XCCDF ID, and has to belong to the specified profile. Note If you skip the rule parameter, then scan runs for all the available rules of the specified profile. 5 If you are on the OpenShift Container Platform and wants to generate a remediation, then nodeSelector label has to match the MachineConfigPool label. Note If you do not specify nodeSelector parameter or match the MachineConfig label, scan will still run, but it will not create remediation. 6 Indicates the current phase of the scan. 7 Indicates the verdict of the scan. Important If you delete a ComplianceSuite object, then all the associated scans get deleted. When the scan is complete, it generates the result as Custom Resources of the ComplianceCheckResult object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScans events. To generate events for the suite, run the following command: oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name_of_the_compliance_scan> 5.4.2.6. Viewing the compliance results When the compliance suite reaches the DONE phase, you can view the scan results and possible remediations. 5.4.2.6.1. ComplianceCheckResult object When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a ComplianceCheckResult object is created, which provides the state of the cluster for a specific rule. Example ComplianceCheckResult object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2 1 Describes the severity of the scan check. 2 Describes the result of the check. The possible values are: PASS: check was successful. FAIL: check was unsuccessful. INFO: check was successful and found something not severe enough to be considered an error. MANUAL: check cannot automatically assess the status and manual check is required. INCONSISTENT: different nodes report different results. ERROR: check run successfully, but could not complete. NOTAPPLICABLE: check did not run as it is not applicable. To get all the check results from a suite, run the following command: oc get compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite 5.4.2.6.2. 
ComplianceRemediation object For a specific check you can have a datastream specified fix. However, if a Kubernetes fix is available, then the Compliance Operator creates a ComplianceRemediation object. Example ComplianceRemediation object apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3 1 true indicates the remediation was applied. false indicates the remediation was not applied. 2 Includes the definition of the remediation. 3 Indicates remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them. To get all the remediations from a suite, run the following command: oc get complianceremediations \ -l compliance.openshift.io/suite=workers-compliancesuite To list all failing checks that can be remediated automatically, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation' To list all failing checks that can be remediated manually, run the following command: oc get compliancecheckresults \ -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation' 5.5. Compliance Operator management 5.5.1. Installing the Compliance Operator Before you can use the Compliance Operator, you must ensure it is deployed in the cluster. Important The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS Classic, and Microsoft Azure Red Hat OpenShift. For more information, see the Knowledgebase article Compliance Operator reports incorrect results on Managed Services . Important Before deploying the Compliance Operator, you are required to define persistent storage in your cluster to store the raw results output. For more information, see Persistant storage overview and Managing the default storage class . 5.5.1.1. Installing the Compliance Operator through the web console Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Compliance Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-compliance namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Compliance Operator is installed in the openshift-compliance namespace and its status is Succeeded . 
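You can also confirm the installation from the CLI by checking the cluster service version (the same check used in the CLI installation procedure later in this section):

USD oc get csv -n openshift-compliance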
If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-compliance project that are reporting issues. Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.2. Installing the Compliance Operator using the CLI Prerequisites You must have admin privileges. You must have a StorageClass resource configured. Procedure Define a Namespace object: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.17, the pod security label must be set to privileged at the namespace level. Create the Namespace object: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object: USD oc create -f subscription-object.yaml Note If you are setting the global scheduler feature and enable defaultNodeSelector , you must create the namespace manually and update the annotations of the openshift-compliance namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: "" . This removes the default node selector and prevents deployment failures. Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running: USD oc get deploy -n openshift-compliance 5.5.1.3. Installing the Compliance Operator on ROSA hosted control planes (HCP) As of the Compliance Operator 1.5.0 release, the Operator is tested against Red Hat OpenShift Service on AWS using Hosted control planes. Red Hat OpenShift Service on AWS Hosted control planes clusters have restricted access to the control plane, which is managed by Red Hat. By default, the Compliance Operator will schedule to nodes within the master node pool, which is not available in Red Hat OpenShift Service on AWS Hosted control planes installations. This requires you to configure the Subscription object in a way that allows the Operator to schedule on available node pools. This step is necessary for a successful installation on Red Hat OpenShift Service on AWS Hosted control planes clusters. Prerequisites You must have admin privileges. You must have a StorageClass resource configured. 
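Because the Compliance Operator must schedule on worker nodes in this environment, you can first confirm that worker nodes are present in your node pools (an optional check):

USD oc get nodes -l node-role.kubernetes.io/worker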
Procedure Define a Namespace object: Example namespace-object.yaml file apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.17, the pod security label must be set to privileged at the namespace level. Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" 1 1 Update the Operator deployment to deploy on worker nodes. Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify that the installation succeeded by running the following command to inspect the cluster service version (CSV) file: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by using the following command: USD oc get deploy -n openshift-compliance Important If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or has added requiredDropCapabilities , the Compliance Operator may not function properly due to permissions issues. You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator . 5.5.1.4. Installing the Compliance Operator on Hypershift hosted control planes The Compliance Operator can be installed in hosted control planes using the OperatorHub by creating a Subscription file. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You must have admin privileges. Procedure Define a Namespace object similar to the following: Example namespace-object.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance 1 In OpenShift Container Platform 4.17, the pod security label must be set to privileged at the namespace level. 
Create the Namespace object by running the following command: USD oc create -f namespace-object.yaml Define an OperatorGroup object: Example operator-group-object.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance Create the OperatorGroup object by running the following command: USD oc create -f operator-group-object.yaml Define a Subscription object: Example subscription-object.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: "stable" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: "" env: - name: PLATFORM value: "HyperShift" Create the Subscription object by running the following command: USD oc create -f subscription-object.yaml Verification Verify the installation succeeded by inspecting the CSV file by running the following command: USD oc get csv -n openshift-compliance Verify that the Compliance Operator is up and running by running the following command: USD oc get deploy -n openshift-compliance Additional resources Hosted control planes overview 5.5.1.5. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager in disconnected environments . 5.5.2. Updating the Compliance Operator As a cluster administrator, you can update the Compliance Operator on your OpenShift Container Platform cluster. Important Updating your OpenShift Container Platform cluster to version 4.14 might cause the Compliance Operator to not work as expected. This is due to an ongoing known issue. For more information, see OCPBUGS-18025 . 5.5.2.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 5.5.2.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). 
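If you prefer the CLI, you can also change the channel by patching the Subscription object directly. The following sketch assumes the compliance-operator-sub subscription created earlier; replace <new_channel> with the channel you want to track:

USD oc patch subscription compliance-operator-sub -n openshift-compliance --type merge -p '{"spec":{"channel":"<new_channel>"}}'

Otherwise, use the web console procedure that follows.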
Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Update channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 5.5.2.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed to Upgrade status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 5.5.3. Managing the Compliance Operator This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object. 5.5.3.1. ProfileBundle CR example The ProfileBundle object requires two pieces of information: the URL of a container image that contains the contentImage and the file that contains the compliance content. The contentFile parameter is relative to the root of the file system. You can define the built-in rhcos4 ProfileBundle object as shown in the following example: apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Location of the file containing the compliance content. 2 Content image location. Important The base image used for the content images must include coreutils . 5.5.3.2. Updating security content Security content is included as container images that the ProfileBundle objects refer to. 
To accurately track updates to ProfileBundles and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag: USD oc -n openshift-compliance get profilebundles rhcos4 -oyaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: "2022-10-19T12:06:30Z" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: "46741" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: "2022-10-19T12:07:51Z" message: Profile bundle successfully parsed reason: Valid status: "True" type: Ready dataStreamStatus: VALID 1 Security container image. Each ProfileBundle is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles. 5.5.3.3. Additional resources The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager in disconnected environments . 5.5.4. Uninstalling the Compliance Operator You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console or the CLI. 5.5.4.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the web console To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure To remove the Compliance Operator by using the OpenShift Container Platform web console: Go to the Operators Installed Operators Compliance Operator page. Click All instances . In All namespaces , click the Options menu and delete all ScanSettingBinding, ComplainceSuite, ComplianceScan, and ProfileBundle objects. Switch to the Administration Operators Installed Operators page. Click the Options menu on the Compliance Operator entry and select Uninstall Operator . Switch to the Home Projects page. Search for 'compliance'. Click the Options menu to the openshift-compliance project, and select Delete Project . Confirm the deletion by typing openshift-compliance in the dialog box, and click Delete . 5.5.4.2. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the CLI To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. The OpenShift Compliance Operator must be installed. Procedure Delete all objects in the namespace. 
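Before you delete anything, you can take a quick inventory of the objects that the following steps remove, using the same short resource names as the delete commands (an optional check):

USD oc get ssb,ss,suite,scan,profilebundle.compliance -n openshift-compliance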
Delete the ScanSettingBinding objects: USD oc delete ssb --all -n openshift-compliance Delete the ScanSetting objects: USD oc delete ss --all -n openshift-compliance Delete the ComplianceSuite objects: USD oc delete suite --all -n openshift-compliance Delete the ComplianceScan objects: USD oc delete scan --all -n openshift-compliance Delete the ProfileBundle objects: USD oc delete profilebundle.compliance --all -n openshift-compliance Delete the Subscription object: USD oc delete sub --all -n openshift-compliance Delete the CSV object: USD oc delete csv --all -n openshift-compliance Delete the project: USD oc delete project openshift-compliance Example output project.project.openshift.io "openshift-compliance" deleted Verification Confirm the namespace is deleted: USD oc get project/openshift-compliance Example output Error from server (NotFound): namespaces "openshift-compliance" not found 5.6. Compliance Operator scan management 5.6.1. Supported compliance profiles There are several profiles available as part of the Compliance Operator (CO) installation. While you can use the following profiles to assess gaps in a cluster, usage alone does not infer or guarantee compliance with a particular profile and is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. You are required to work with an authorized auditor to achieve compliance with a standard. For more information on compliance support for all Red Hat products, see Product Compliance . Important The Compliance Operator might report incorrect results on some managed platforms, such as OpenShift Dedicated and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418 . 5.6.1.1. Compliance profiles The Compliance Operator provides profiles to meet industry standard benchmarks. Note The following tables reflect the latest available profiles in the Compliance Operator. 5.6.1.1.1. CIS compliance profiles Table 5.1. Supported CIS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-cis [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [1] x86_64 ppc64le s390x ocp4-cis-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Platform CIS Benchmarks TM [4] x86_64 ppc64le s390x ocp4-cis-node [1] CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-4 [3] CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-cis-node-1-5 CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 Node [2] CIS Benchmarks TM [4] x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-cis and ocp4-cis-node profiles maintain the most up-to-date version of the CIS benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as CIS v1.4.0, use the ocp4-cis-1-4 and ocp4-cis-node-1-4 profiles. 
Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . CIS v1.4.0 is superceded by CIS v1.5.0. It is recommended to apply the latest profile to your environment. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark , where you can then register to download the benchmark. 5.6.1.1.2. Essential Eight compliance profiles Table 5.2. Supported Essential Eight compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Platform ACSC Hardening Linux Workstations and Servers x86_64 rhcos4-e8 Australian Cyber Security Centre (ACSC) Essential Eight Node ACSC Hardening Linux Workstations and Servers x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) 5.6.1.1.3. FedRAMP High compliance profiles Table 5.3. Supported FedRAMP High compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-high [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ocp4-high-node [1] NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-node-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 rhcos4-high [1] NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-high-rev-4 NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-high , ocp4-high-node and rhcos4-high profiles maintain the most up-to-date version of the FedRAMP High standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP high R4, use the ocp4-high-rev-4 and ocp4-high-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.4. FedRAMP Moderate compliance profiles Table 5.4. 
Supported FedRAMP Moderate compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x ocp4-moderate-node [1] NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-node-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level Node [2] NIST SP-800-53 Release Search x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level Platform NIST SP-800-53 Release Search x86_64 ppc64le s390x rhcos4-moderate [1] NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-moderate-rev-4 NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS Node NIST SP-800-53 Release Search x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-moderate , ocp4-moderate-node and rhcos4-moderate profiles maintain the most up-to-date version of the FedRAMP Moderate standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP Moderate R4, use the ocp4-moderate-rev-4 and ocp4-moderate-node-rev-4 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.5. NERC-CIP compliance profiles Table 5.5. Supported NERC-CIP compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Platform level Platform NERC CIP Standards x86_64 ocp4-nerc-cip-node North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Node level Node [1] NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-nerc-cip North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS Node NERC CIP Standards x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . 5.6.1.1.6. PCI-DSS compliance profiles Table 5.6. 
Supported PCI-DSS compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-pci-dss [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x ocp4-pci-dss-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Platform PCI Security Standards (R) Council Document Library x86_64 ocp4-pci-dss-node [1] PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-3-2 [3] PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 ppc64le s390x Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-pci-dss-node-4-0 PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 Node [2] PCI Security Standards (R) Council Document Library x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-pci-dss and ocp4-pci-dss-node profiles maintain the most up-to-date version of the PCI-DSS standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as PCI-DSS v3.2.1, use the ocp4-pci-dss-3-2 and ocp4-pci-dss-node-3-2 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . PCI-DSS v3.2.1 is superceded by PCI-DSS v4. It is recommended to apply the latest profile to your environment. 5.6.1.1.7. STIG compliance profiles Table 5.7. 
Supported STIG compliance profiles Profile Profile title Application Industry compliance benchmark Supported architectures Supported platforms ocp4-stig [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Platform DISA-STIG x86_64 ocp4-stig-node [1] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-node-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node [2] DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) ocp4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Platform DISA-STIG x86_64 ocp4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Platform DISA-STIG x86_64 rhcos4-stig Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v1r1 [3] Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 Node DISA-STIG [3] x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) rhcos4-stig-v2r1 Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 Node DISA-STIG x86_64 Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) The ocp4-stig , ocp4-stig-node and rhcos4-stig profiles maintain the most up-to-date version of the DISA-STIG benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as DISA-STIG V2R1, use the ocp4-stig-v2r1 and ocp4-stig-node-v2r1 profiles. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types . DISA-STIG V1R1 is superceded by DISA-STIG V2R1. It is recommended to apply the latest profile to your environment. 5.6.1.1.8. About extended compliance profiles Some compliance profiles have controls that require following industry best practices, resulting in some profiles extending others. Combining the Center for Internet Security (CIS) best practices with National Institute of Standards and Technology (NIST) security frameworks establishes a path to a secure and compliant environment. For example, the NIST High-Impact and Moderate-Impact profiles extend the CIS profile to achieve compliance. As a result, extended compliance profiles eliminate the need to run both profiles in a single cluster. Table 5.8. Profile extensions Profile Extends ocp4-pci-dss ocp4-cis ocp4-pci-dss-node ocp4-cis-node ocp4-high ocp4-cis ocp4-high-node ocp4-cis-node ocp4-moderate ocp4-cis ocp4-moderate-node ocp4-cis-node ocp4-nerc-cip ocp4-moderate ocp4-nerc-cip-node ocp4-moderate-node 5.6.1.2. Additional resources Compliance Operator profile types 5.6.2. 
Compliance Operator scans The ScanSetting and ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run: USD oc explain scansettings or USD oc explain scansettingbindings 5.6.2.1. Running compliance scans You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default . Note For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object. For more information about inconsistent scan results, see Compliance Operator shows INCONSISTENT scan result with worker node . Procedure Inspect the ScanSetting object by running the following command: USD oc describe scansettings default -n openshift-compliance Example output Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none> 1 The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans. 2 The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated. 3 The Compliance Operator will allocate one GB of storage for the scan results. 4 The scansetting.rawResultStorage.storageClassName field specifies the storageClassName value to use when creating the PersistentVolumeClaim object to store the raw results. The default value is null, which will attempt to use the default storage class configured in the cluster. If there is no default class specified, then you must set a default class. 5 6 If the scan setting uses any profiles that scan cluster nodes, scan these node roles. 7 The default scan setting object scans all the nodes. 8 The default scan setting object runs scans at 01:00 each day. 
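To see both built-in ScanSetting objects at a glance, you can list them (an optional check):

USD oc get scansettings -n openshift-compliance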
As an alternative to the default scan setting, you can use default-auto-apply , which has the following settings: Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none> 1 2 Setting autoUpdateRemediations and autoApplyRemediations flags to true allows you to easily create ScanSetting objects that auto-remediate without extra steps. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Create the ScanSettingBinding object by running: USD oc create -f <file-name>.yaml -n openshift-compliance At this point in the process, the ScanSettingBinding object is reconciled and based on the Binding and the Bound settings. The Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects. Follow the compliance scan progress by running: USD oc get compliancescan -w -n openshift-compliance The scans progress through the scanning phases and eventually reach the DONE phase when complete. In most cases, the result of the scan is NON-COMPLIANT . You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information. 5.6.2.2. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. 
This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.6.2.2.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.2.3. Scheduling the result server pod on a worker node The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod. This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes. Procedure Create a ScanSetting custom resource (CR) for the Compliance Operator: Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml : apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * 1 The Compliance Operator uses this node to store scan results in ARF format. 2 The result server pod tolerates all taints. To create the ScanSetting CR, run the following command: USD oc create -f rs-workers.yaml Verification To verify that the ScanSetting object is created, run the following command: USD oc get scansettings rs-on-workers -n openshift-compliance -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: "2021-11-19T19:36:36Z" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: "48305" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true 5.6.2.4. ScanSetting Custom Resource The ScanSetting Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. 
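For example, a ScanSetting that raises these limits might look like the following sketch. The scanLimits field name and its keys are assumed from the scan limits attribute described here and might differ in your Compliance Operator version; verify them against your installed CRD before use:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default-resource-adjusted
  namespace: openshift-compliance
scanLimits:
  cpu: 500m      # assumed key; CPU limit applied to the scanner pods
  memory: 1Gi    # assumed key; memory limit applied to the scanner pods
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: '0 1 * * *'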
The Compliance Operator will use defaults of 500Mi memory, 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector container. To set the memory limits of the Operator, modify the Subscription object if installed through OLM or the Operator deployment itself. To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits . Important Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are ended by the Out Of Memory (OOM) process. 5.6.2.5. Configuring the hosted control planes management cluster If you are hosting your own Hosted control plane or Hypershift environment and want to scan a Hosted Cluster from the management cluster, you will need to set the name and prefix namespace for the target Hosted Cluster. You can achieve this by creating a TailoredProfile . Important This procedure only applies to users managing their own hosted control planes environment. Note Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. Prerequisites The Compliance Operator is installed in the management cluster. Procedure Obtain the name and namespace of the hosted cluster to be scanned by running the following command: USD oc get hostedcluster -A Example output NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available In the management cluster, create a TailoredProfile extending the scan Profile and define the name and namespace of the Hosted Cluster to be scanned: Example management-tailoredprofile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3 1 Variable. Only ocp4-cis and ocp4-pci-dss profiles are supported in hosted control planes management clusters. 2 The value is the NAME from the output in the step. 3 The value is the NAMESPACE from the output in the step. Create the TailoredProfile : USD oc create -n openshift-compliance -f mgmt-tp.yaml 5.6.2.6. Applying resource requests and limits When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined. The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution. If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low values. 
If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir. The kubelet tracks tmpfs emptyDir volumes as container memory is used, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted. Important A container may not exceed its CPU limit for extended periods. Container run times do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator . 5.6.2.7. Scheduling Pods with container resource requests When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for the Pods. The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity nodes for each resource type. Although memory or CPU resource usage on nodes is very low, the scheduler might still refuse to place a Pod on a node if the capacity check fails to protect against a resource shortage on a node. For each container, you can specify the following resource limits and request: spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size> Although you can specify requests and limits for only individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a container resource request or limit is the sum of the resource requests or limits of that type for each container in the pod. Example container resource requests and limits apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: "64Mi" cpu: "250m" limits: 2 memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 The container is requesting 64 Mi of memory and 250 m CPU. 2 The container's limits are 128 Mi of memory and 500 m CPU. 5.6.3. Tailoring the Compliance Operator While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organizations' needs and requirements. The process of modifying a profile is called tailoring . The Compliance Operator provides the TailoredProfile object to help tailor profiles. 5.6.3.1. Creating a new tailored profile You can write a tailored profile from scratch by using the TailoredProfile object. Set an appropriate title and description and leave the extends field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate: Node scan: Scans the Operating System. 
Platform scan: Scans the OpenShift Container Platform configuration. Procedure Set the following annotation on the TailoredProfile object: Example new-profile.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster 1 Set Node or Platform accordingly. 2 The extends field is optional. 3 Use the description field to describe the function of the new TailoredProfile object. 4 Give your TailoredProfile object a title with the title field. Note Adding the -node suffix to the name field of the TailoredProfile object is similar to adding the Node product type annotation and generates an Operating System scan. 5.6.3.2. Using tailored profiles to extend existing ProfileBundles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map, which must contain a key called tailoring.xml and the value of this key is the tailoring contents. Procedure Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle : USD oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Browse the available variables in the same ProfileBundle : USD oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4 Create a tailored profile named nist-moderate-modified : Choose which rules you want to add to the nist-moderate-modified tailored profile. This example extends the rhcos4-moderate profile by disabling two rules and changing one value. Use the rationale value to describe why these changes were made: Example new-profile-node.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive Table 5.9. Attributes for spec variables Attribute Description extends Name of the Profile object upon which this TailoredProfile is built. title Human-readable title of the TailoredProfile . disableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled. manualRules A list of name and rationale pairs. When a manual rule is added, the check result status will always be manual and remediation will not be generated. This attribute is automatic and by default has no values when set as a manual rule. 
enableRules A list of name and rationale pairs. Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled. description Human-readable text describing the TailoredProfile . setValues A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting. Add the tailoredProfile.spec.manualRules attribute: Example tailoredProfile.spec.manualRules.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges Create the TailoredProfile object: USD oc create -n openshift-compliance -f new-profile-node.yaml 1 1 The TailoredProfile object is created in the default openshift-compliance namespace. Example output tailoredprofile.compliance.openshift.io/nist-moderate-modified created Define the ScanSettingBinding object to bind the new nist-moderate-modified tailored profile to the default ScanSetting object. Example new-scansettingbinding.yaml apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Create the ScanSettingBinding object: USD oc create -n openshift-compliance -f new-scansettingbinding.yaml Example output scansettingbinding.compliance.openshift.io/nist-moderate-modified created 5.6.4. Retrieving Compliance Operator raw results When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes. 5.6.4.1. Obtaining Compliance Operator raw results from a persistent volume Procedure The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF). Explore the ComplianceSuite object: USD oc get compliancesuites nist-moderate-modified \ -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage' Example output { "name": "ocp4-moderate", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-master", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-worker", "namespace": "openshift-compliance" } This shows the persistent volume claims where the raw results are accessible. 
Verify the raw data location by using the name and namespace of one of the results: USD oc get pvc -n openshift-compliance rhcos4-moderate-worker Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m Fetch the raw results by spawning a pod that mounts the volume and copying the results: USD oc create -n openshift-compliance -f pod.yaml Example pod.yaml apiVersion: "v1" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: ["sleep", "3000"] volumeMounts: - mountPath: "/workers-scan-results" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker After the pod is running, download the results: USD oc cp pv-extract:/workers-scan-results -n openshift-compliance . Important Spawning a pod that mounts the persistent volume will keep the claim as Bound . If the volume's storage class in use has permissions set to ReadWriteOnce , the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location. After the extraction is complete, the pod can be deleted: USD oc delete pod pv-extract -n openshift-compliance 5.6.5. Managing Compliance Operator result and remediation Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified. Important Full remediation for Federal Information Processing Standards (FIPS) compliance requires enabling FIPS mode for the cluster. To enable FIPS mode, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . FIPS mode is supported on the following architectures: x86_64 ppc64le s390x 5.6.5.1. Filters for compliance check results By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the steps after the results are generated. List checks that belong to a specific suite: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite List checks that belong to a specific scan: USD oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/scan=workers-scan Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check. 
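The status-based filters above can also be used without a status condition. For example, a sketch of listing every check that has an automated remediation available, regardless of whether it passed or failed, reuses the automated-remediation label on its own (this mirrors the selectors used in the surrounding examples, only dropping the check-status condition):

$ oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/automated-remediation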
List all failing checks that can be remediated automatically: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation' List all failing checks sorted by severity: USD oc get compliancecheckresults -n openshift-compliance \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high' Example output NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high List all failing checks that must be remediated manually: USD oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation' The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object. Table 5.10. ComplianceCheckResult Status ComplianceCheckResult Status Description PASS Compliance check ran to completion and passed. FAIL Compliance check ran to completion and failed. INFO Compliance check ran to completion and found something not severe enough to be considered an error. MANUAL Compliance check does not have a way to automatically assess the success or failure and must be checked manually. INCONSISTENT Compliance check reports different results from different sources, typically cluster nodes. ERROR Compliance check ran, but could not complete properly. NOT-APPLICABLE Compliance check did not run because it is not applicable or not selected. 5.6.5.2. Reviewing a remediation Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and the hardening trying to prevent, as well as other metadata like the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult . After first scan, check for remediations with the state MissingDependencies . Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects . 
This example is redacted to only show spec and status and omits metadata : spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text. To see exactly what the remediation does when applied, the MachineConfig object contents use the Ignition objects for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being create by this remediation ( /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf ) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file. Note The contents of the files are URL-encoded. Use the following Python script to view the contents: USD echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))" Example output net.ipv4.conf.all.accept_redirects=0 Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.3. Applying remediation when using customized machine config pools When you create a custom MachineConfigPool , add a label to the MachineConfigPool so that machineConfigPoolSelector present in the KubeletConfig can match the label with MachineConfigPool . Important Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation. Procedure List the nodes. USD oc get nodes -n openshift-compliance Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.30.3 Add a label to nodes. USD oc -n openshift-compliance \ label node ip-10-0-166-81.us-east-2.compute.internal \ node-role.kubernetes.io/<machine_config_pool_name>= Example output node/ip-10-0-166-81.us-east-2.compute.internal labeled Create custom MachineConfigPool CR. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: "" 1 The labels field defines label name to add for Machine config pool(MCP). Verify MCP created successfully. USD oc get mcp -w 5.6.5.4. Evaluating KubeletConfig rules against default configuration values OpenShift Container Platform infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks. To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool, then all configuration options that are consistent across nodes in the node pool are stored in a file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results. No additional configuration changes are required to use this feature with default master and worker node pools configurations. 5.6.5.5. Scanning custom node pools The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool. Procedure Add the example role to the ScanSetting object that will be stored in the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' Create a scan that uses the ScanSettingBinding CR: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default Verification The Platform KubeletConfig rules are checked through the Node/Proxy object. You can find those rules by running the following command: USD oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name' 5.6.5.6. Remediating KubeletConfig sub pools KubeletConfig remediation labels can be applied to MachineConfigPool sub-pools. Procedure Add a label to the sub-pool MachineConfigPool CR: USD oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>= 5.6.5.7. 
Applying a remediation The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true : USD oc -n openshift-compliance \ patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":true}}' --type=merge After the Compliance Operator processes the applied remediation, the status.ApplicationState attribute would change to Applied or to Error if incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-USDscan-name-USDsuite-name . That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine control daemon running on each node. Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-USDscan-name-USDsuite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true . The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object. Warning Applying remediations automatically should only be done with careful consideration. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.8. Remediating a platform check manually Checks for Platform scans typically have to be remediated manually by the administrator for two reasons: It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow. Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised. Procedure The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation. 
Inspect the rule with oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml . The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object that is checked, so you can modify it and remediate the issue: USD oc edit image.config.openshift.io/cluster Example output apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-09-10T10:12:54Z" generation: 2 name: cluster resourceVersion: "363096" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 Re-run the scan: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.9. Updating remediations When a new version of compliance content is used, it might deliver a new and different version of a remediation than the version that is currently applied. The Compliance Operator keeps the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that had been applied earlier, but was later updated, changes its status to Outdated . The outdated objects are labeled so that they can be searched for easily. The previously applied remediation contents are then stored in the spec.outdated attribute of a ComplianceRemediation object and the new updated contents are stored in the spec.current attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated attribute exists, it is used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes. Procedure Search for any outdated remediations: USD oc -n openshift-compliance get complianceremediations \ -l complianceoperator.openshift.io/outdated-remediation= Example output NAME STATE workers-scan-no-empty-passwords Outdated The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes. Apply the newer version of the remediation: USD oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \ --type json -p '[{"op":"remove", "path":"/spec/outdated"}]' The remediation state will switch from Outdated to Applied : USD oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords Example output NAME STATE workers-scan-no-empty-passwords Applied The nodes will apply the newer remediation version and reboot. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.10. Unapplying a remediation It might be required to unapply a remediation that was previously applied.
Procedure Set the apply flag to false : USD oc -n openshift-compliance \ patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":false}}' --type=merge The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation. Important All affected nodes with the remediation will be rebooted. Important The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. 5.6.5.11. Removing a KubeletConfig remediation KubeletConfig remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation. Procedure Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation: USD oc -n openshift-compliance get remediation \ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml Example output apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: "2022-01-05T19:52:27Z" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: "84820" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied 1 The scan name of the remediation. 2 The remediation that was added to the KubeletConfig objects. Note If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available , nodefs.available , nodefs.inodesFree , imagefs.available , and imagefs.inodesFree . If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly. 
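For illustration, a minimal KubeletConfig sketch with all five evictionHard parameters set together is shown below. The object name, the machine config pool label, and the threshold values are placeholders chosen for this sketch, not values taken from this procedure:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: eviction-thresholds-complete        # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""   # label of the target machine config pool
  kubeletConfig:
    evictionHard:
      # all five parameters are specified together, as required by the preceding note
      memory.available: "500Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "10%"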
Remove the remediation: Set apply to false for the remediation object: USD oc -n openshift-compliance patch \ complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \ -p '{"spec":{"apply":false}}' --type=merge Using the scan-name , find the KubeletConfig object that the remediation was applied to: USD oc -n openshift-compliance get kubeletconfig \ --selector compliance.openshift.io/scan-name=one-rule-tp-node-master Example output NAME AGE compliance-operator-kubelet-master 2m34s Manually remove the remediation, imagefs.available: 10% , from the KubeletConfig object: USD oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master Important All affected nodes with the remediation will be rebooted. Note You must also exclude the rule from any scheduled scans in your tailored profiles that auto-applies the remediation, otherwise, the remediation will be re-applied during the scheduled scan. 5.6.5.12. Inconsistent ComplianceScan The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool. Important It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical. If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult object where some of the nodes will report as INCONSISTENT . All ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check . Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation . If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= 5.6.5.13. Additional resources Modifying nodes . 5.6.6. Performing advanced Compliance Operator tasks The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling. 5.6.6.1. Using the ComplianceSuite and ComplianceScan objects directly While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly: Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information. Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool. 
Pointing the Scan to a bespoke config map with a tailoring file. For testing or development when the overhead of parsing profiles from bundles is not required. The following example shows a ComplianceSuite that scans the worker machines with only a single rule: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: "" The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects. To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite . 5.6.6.2. Setting PriorityClass for ScanSetting scans In large scale environments, the default PriorityClass object can be too low to guarantee Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the PriorityClass variable to ensure the Compliance Operator is always given priority in resource constrained situations. Procedure Set the PriorityClass variable: apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists 1 If the PriorityClass referenced in the ScanSetting cannot be found, the Operator will leave the PriorityClass empty, issue a warning, and continue scheduling scans without a PriorityClass . 5.6.6.3. Using raw tailored profiles While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenScap previously, you may have an existing XCCDF tailoring file and can reuse it. The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map which must contain a key called tailoring.xml and the value of this key is the tailoring contents. 
Procedure Create the ConfigMap object from a file: USD oc -n openshift-compliance \ create configmap nist-moderate-modified \ --from-file=tailoring.xml=/path/to/the/tailoringFile.xml Reference the tailoring file in a scan that belongs to a suite: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: "" 5.6.6.4. Performing a rescan Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan= option: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= A rescan generates four additional mc for rhcos-moderate profile: USD oc get mc Example output 75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub Important When the scan setting default-auto-apply label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs. 5.6.6.5. Setting custom storage size for results While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources. A related parameter is rawResultStorage.rotation which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3, setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100MB per a raw ARF scan report, you can calculate the right PV size for your environment. 5.6.6.5.1. Using custom result storage values Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName attribute. Important If your cluster does not specify a default storage class, this attribute must be set. 
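To check whether your cluster already defines a default storage class before relying on it, you can list the storage classes in the cluster; a default class, if one is set, carries the storageclass.kubernetes.io/is-default-class=true annotation and is marked as (default) in the output:

$ oc get storageclass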
Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results: Example ScanSetting CR apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' 5.6.6.6. Applying remediations generated by suite scans Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations . This allows the Operator to apply all of the created remediations. Procedure Apply the compliance.openshift.io/apply-remediations annotation by running: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations= 5.6.6.7. Automatically update remediations In some cases, a scan with newer content might mark remediations as OUTDATED . As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones. Procedure Apply the compliance.openshift.io/remove-outdated annotation: USD oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated= Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically. 5.6.6.8. Creating a custom SCC for the Compliance Operator In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector . Prerequisites You must have admin privileges. Procedure Define the SCC in a YAML file named restricted-adjusted-compliance.yaml : SecurityContextConstraints object definition allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret 1 The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group. 2 Service Account used by Compliance Operator Scanner pod. 
Create the SCC: USD oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml Example output securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created Verification Verify the SCC was created: USD oc get -n openshift-compliance scc restricted-adjusted-compliance Example output NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"] 5.6.6.9. Additional resources Managing security context constraints 5.6.7. Troubleshooting Compliance Operator scans This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips: The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command: USD oc get events -n openshift-compliance Or view events for an object like a scan using the command: USD oc describe -n openshift-compliance compliancescan/cis-compliance The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq : USD oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \ | jq -c 'select(.logger == "profilebundlectrl")' The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc , for example: USD date -d @1596184628.955853 --utc Many custom resources, most importantly ComplianceSuite and ScanSetting , allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule to find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, together with the debug option enabled, the scanner container logs in the scanner pod would show the raw OpenSCAP logs. 5.6.7.1. Anatomy of a scan The following sections outline the components and stages of Compliance Operator scans. 5.6.7.1.1. Compliance sources The compliance content is stored in Profile objects that are generated from a ProfileBundle object. The Compliance Operator creates a ProfileBundle object for the cluster and another for the cluster nodes. USD oc get -n openshift-compliance profilebundle.compliance USD oc get -n openshift-compliance profile.compliance The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle , you can find the deployment and view logs of the pods in a deployment: USD oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser USD oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4 USD oc logs -n openshift-compliance pods/<pod-name> USD oc describe -n openshift-compliance pod/<pod-name> -c profileparser 5.6.7.1.2. 
The ScanSetting and ScanSettingBinding objects lifecycle and debugging With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing # to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1 Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl . These objects have no status. Any issues are communicated in the form of events: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite . 5.6.7.1.3. ComplianceSuite custom resource lifecycle and debugging The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by the controller tagged with logger=suitectrl . This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done: USD oc get cronjobs Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m For the most important issues, events are emitted. View them with oc describe compliancesuites/<name> . The Suite objects also have a Status subresource that is updated when any of the Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller. 5.6.7.1.4. ComplianceScan custom resource lifecycle and debugging The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases: 5.6.7.1.4.1. Pending phase The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with an ERROR result; otherwise, it proceeds to the Launching phase. 5.6.7.1.4.2. Launching phase In this phase, several config maps are created that contain either the environment for the scanner pods or the script that the scanner pods will evaluate. List the config maps: USD oc -n openshift-compliance get cm \ -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script= These config maps will be used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying these config maps is the way to do it, as shown in the example that follows.
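To inspect what these config maps contain before changing anything, the same label selector can be reused with a describe command; the scan name below follows the example used throughout this section:

$ oc -n openshift-compliance describe cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=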
Afterwards, a persistent volume claim is created per scan to store the raw ARF results: USD oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server where the individual scanner pods upload the full ARF results to. Each server can run on a different node. The full ARF results might be very large and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS protocols. Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name: USD oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels Example output NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner + The scan then proceeds to the Running phase. 5.6.7.1.4.3. Running phase The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase: init container : There is one init container called content-container . It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod. scanner : This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the debug flag. logcollector : The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a ConfigMap. These result config maps are labeled with the scan name ( compliance.openshift.io/scan-name=rhcos4-e8-worker ): USD oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Example output Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version="1.0" encoding="UTF-8"?> ... 
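Based on the labels shown in the output above, a selector on the scan-result label offers one way to list every result config map uploaded by the logcollector containers; you can narrow the list further with the scan-name label shown above:

$ oc -n openshift-compliance get cm -l complianceoperator.openshift.io/scan-result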
Scanner pods for Platform scans are similar, except: There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine, and stores those API resources in a shared directory where the scanner container reads them from. The scanner container does not need to mount the host file system. When the scanner pods are done, the scans move on to the Aggregating phase. 5.6.7.1.4.4. Aggregating phase In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container. When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed label. The results of this phase are ComplianceCheckResult objects: USD oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium and ComplianceRemediation objects: USD oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker Example output NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase. 5.6.7.1.4.5. Done phase In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the scan instance would then recreate the deployment again. It is also possible to trigger a re-run of a scan in the Done phase by annotating it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true . The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over. 5.6.7.1.5. ComplianceRemediation controller lifecycle and debugging The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true : USD oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge The ComplianceRemediation controller ( logger=remediationctrl ) reconciles the modified object.
The result of the reconciliation is change of status of the remediation object that is reconciled, but also a change of the rendered per-suite MachineConfig object that contains all the applied remediations. The MachineConfig object always begins with 75- and is named after the scan and the suite: USD oc get mc | grep 75- Example output 75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s The remediations the mc currently consists of are listed in the machine config's annotations: USD oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements Example output Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod: The ComplianceRemediation controller's algorithm works like this: All currently applied remediations are read into an initial remediation set. If the reconciled remediation is supposed to be applied, it is added to the set. A MachineConfig object is rendered from the set and annotated with names of remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed. If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted). Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label - see the Machine Config Operator documentation for more details. The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it: USD oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan= The scan will run and finish. Check for the remediation to pass: USD oc -n openshift-compliance \ get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod Example output NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium 5.6.7.1.6. Useful labels Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name label. The workload identifier is labeled with the workload label. The Compliance Operator schedules the following workloads: scanner : Performs the compliance scan. resultserver : Stores the raw results for the compliance scan. aggregator : Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations). suitererunner : Will tag a suite to be re-run (when a schedule is set). profileparser : Parses a datastream and creates the appropriate profiles, rules and variables. When debugging and logs are required for a certain workload, run: USD oc logs -l workload=<workload_name> -c <container_name> 5.6.7.2. Increasing Compliance Operator resource limits In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits. To increase the default memory and CPU limits of scanner pods, see `ScanSetting` Custom resource . 
Procedure

To increase the Operator's memory limits to 500 Mi, create the following patch file named co-memlimit-patch.yaml :

spec:
  config:
    resources:
      limits:
        memory: 500Mi

Apply the patch file:

$ oc patch sub compliance-operator -n openshift-compliance --patch-file co-memlimit-patch.yaml --type=merge

5.6.7.3. Configuring Operator resource constraints

The resources field defines resource constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM).

Note

Resource constraints applied in this process overwrite the existing resource constraints.

Procedure

Inject a request of 0.25 cpu and 64 Mi of memory, and a limit of 0.5 cpu and 128 Mi of memory in each container by editing the Subscription object:

kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  package: package-name
  channel: stable
  config:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

5.6.7.4. Configuring ScanSetting resources

When using the Compliance Operator in a cluster that contains more than 500 MachineConfigs, the ocp4-pci-dss-api-checks-pod pod may pause in the init phase when performing a Platform scan.

Note

Resource constraints applied in this process overwrite the existing resource constraints.

Procedure

Confirm the ocp4-pci-dss-api-checks-pod pod is stuck in the Init:OOMKilled status:

$ oc get pod ocp4-pci-dss-api-checks-pod -w

Example output

NAME                          READY   STATUS           RESTARTS        AGE
ocp4-pci-dss-api-checks-pod   0/2     Init:1/2         8 (5m56s ago)   25m
ocp4-pci-dss-api-checks-pod   0/2     Init:OOMKilled   8 (6m19s ago)   26m

Edit the scanLimits attribute in the ScanSetting CR to increase the available memory for the ocp4-pci-dss-api-checks-pod pod:

timeout: 30m
strictNodeScan: true
metadata:
  name: default
  namespace: openshift-compliance
kind: ScanSetting
showNotApplicable: false
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/master: ''
  pvAccessModes:
    - ReadWriteOnce
  rotation: 3
  size: 1Gi
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
schedule: 0 1 * * *
roles:
  - master
  - worker
apiVersion: compliance.openshift.io/v1alpha1
maxRetryOnTimeout: 3
scanTolerations:
  - operator: Exists
scanLimits:
  memory: 1024Mi 1

1 The default setting is 500Mi .

Apply the ScanSetting CR to your cluster:

$ oc apply -f scansetting.yaml

5.6.7.5. Configuring ScanSetting timeout

The ScanSetting object has a timeout option that can be specified as a duration string, such as 1h30m . If the scan does not finish within the specified timeout, the scan is reattempted until the maxRetryOnTimeout limit is reached.

Procedure

To set a timeout and maxRetryOnTimeout in ScanSetting, modify an existing ScanSetting object:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  rotation: 3
  size: 1Gi
roles:
  - worker
  - master
scanTolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
schedule: '0 1 * * *'
timeout: '10m0s' 1
maxRetryOnTimeout: 3 2

1 The timeout variable is defined as a duration string, such as 1h30m . The default value is 30m . To disable the timeout, set the value to 0s .
2 The maxRetryOnTimeout variable defines how many times a retry is attempted. The default value is 3 .
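To confirm the values that are in effect after you apply the change, you can read them back from the ScanSetting object. A minimal sketch, assuming the default object shown above:

$ oc -n openshift-compliance get scansettings default -o jsonpath='{.timeout}{" "}{.maxRetryOnTimeout}{"\n"}'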
5.6.7.6. Getting support

If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can:

Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
Submit a support case to Red Hat Support.
Access other product documentation.

To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem.

If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.

5.6.8. Using the oc-compliance plugin

Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance plugin makes the process easier.

5.6.8.1. Installing the oc-compliance plugin

Procedure

Extract the oc-compliance image to get the oc-compliance binary:

$ podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/

Example output

W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.

You can now run oc-compliance .

5.6.8.2. Fetching raw results

When a compliance scan finishes, the results of the individual checks are listed in the resulting ComplianceCheckResult custom resource (CR). However, an administrator or auditor might require the complete details of the scan. The OpenSCAP tool creates an Advanced Recording Format (ARF) formatted file with the detailed results. This ARF file is too large to store in a config map or other standard Kubernetes resource, so a persistent volume (PV) is created to contain it.

Procedure

Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the oc-compliance plugin, you can use a single command:

$ oc compliance fetch-raw <object-type> <object-name> -o <output-path>

<object-type> can be either scansettingbinding , compliancescan , or compliancesuite , depending on which of these objects the scans were launched with.

<object-name> is the name of the binding, suite, or scan object to gather the ARF file for, and <output-path> is the local directory to place the results.

For example:

$ oc compliance fetch-raw scansettingbindings my-binding -o /tmp/

Example output

Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master
Fetching raw compliance results for scan 'ocp4-cis'.......
The raw compliance results are available in the following directory: /tmp/ocp4-cis
Fetching raw compliance results for scan 'ocp4-cis-node-worker'...........
The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker
Fetching raw compliance results for scan 'ocp4-cis-node-master'......
The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master

View the list of files in the directory:

$ ls /tmp/ocp4-cis-node-master/

Example output

ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2
ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2
ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2

Extract the results:

$ bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml

View the results:

$ ls resultsdir/worker-scan/

Example output

worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml
worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2
worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2

5.6.8.3. Re-running scans

Although it is possible to run scans as scheduled jobs, you often need to re-run a scan on demand, particularly after remediations are applied or when other changes to the cluster are made.

Procedure

Rerunning a scan with the Compliance Operator requires the use of an annotation on the scan object. However, with the oc-compliance plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for the ScanSettingBinding object named my-binding :

$ oc compliance rerun-now scansettingbindings my-binding

Example output

Rerunning scans from 'my-binding': ocp4-cis
Re-running scan 'openshift-compliance/ocp4-cis'

5.6.8.4. Using ScanSettingBinding custom resources

When using the ScanSetting and ScanSettingBinding custom resources (CRs) that the Compliance Operator provides, it is possible to run scans for multiple profiles while using a common set of scan options, such as schedule , machine roles , tolerations , and so on. While that is easier than working with multiple ComplianceSuite or ComplianceScan objects, it can confuse new users. The oc compliance bind subcommand helps you create a ScanSettingBinding CR.

Procedure

Run:

$ oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]

If you omit the -S flag, the default scan setting provided by the Compliance Operator is used.

The object type is the Kubernetes object type, which can be profile or tailoredprofile . More than one object can be provided.

The object name is the name of the Kubernetes resource, such as .metadata.name .

Add the --dry-run option to display the YAML file of the objects that are created.
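For instance, to preview the ScanSettingBinding that would be generated without creating anything, you can combine the options above. The binding name is a placeholder, and the default ScanSetting is assumed to exist:

$ oc compliance bind --dry-run -N my-binding -S default profile/ocp4-cis profile/ocp4-cis-node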
For example, given the following profiles and scan settings:

$ oc get profile.compliance -n openshift-compliance

Example output

NAME                       AGE     VERSION
ocp4-cis                   3h49m   1.5.0
ocp4-cis-1-4               3h49m   1.4.0
ocp4-cis-1-5               3h49m   1.5.0
ocp4-cis-node              3h49m   1.5.0
ocp4-cis-node-1-4          3h49m   1.4.0
ocp4-cis-node-1-5          3h49m   1.5.0
ocp4-e8                    3h49m
ocp4-high                  3h49m   Revision 4
ocp4-high-node             3h49m   Revision 4
ocp4-high-node-rev-4       3h49m   Revision 4
ocp4-high-rev-4            3h49m   Revision 4
ocp4-moderate              3h49m   Revision 4
ocp4-moderate-node         3h49m   Revision 4
ocp4-moderate-node-rev-4   3h49m   Revision 4
ocp4-moderate-rev-4        3h49m   Revision 4
ocp4-nerc-cip              3h49m
ocp4-nerc-cip-node         3h49m
ocp4-pci-dss               3h49m   3.2.1
ocp4-pci-dss-3-2           3h49m   3.2.1
ocp4-pci-dss-4-0           3h49m   4.0.0
ocp4-pci-dss-node          3h49m   3.2.1
ocp4-pci-dss-node-3-2      3h49m   3.2.1
ocp4-pci-dss-node-4-0      3h49m   4.0.0
ocp4-stig                  3h49m   V2R1
ocp4-stig-node             3h49m   V2R1
ocp4-stig-node-v1r1        3h49m   V1R1
ocp4-stig-node-v2r1        3h49m   V2R1
ocp4-stig-v1r1             3h49m   V1R1
ocp4-stig-v2r1             3h49m   V2R1
rhcos4-e8                  3h49m
rhcos4-high                3h49m   Revision 4
rhcos4-high-rev-4          3h49m   Revision 4
rhcos4-moderate            3h49m   Revision 4
rhcos4-moderate-rev-4      3h49m   Revision 4
rhcos4-nerc-cip            3h49m
rhcos4-stig                3h49m   V2R1
rhcos4-stig-v1r1           3h49m   V1R1
rhcos4-stig-v2r1           3h49m   V2R1

$ oc get scansettings -n openshift-compliance

Example output

NAME                 AGE
default              10m
default-auto-apply   10m

To apply the default settings to the ocp4-cis and ocp4-cis-node profiles, run:

$ oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node

Example output

Creating ScanSettingBinding my-binding

After the ScanSettingBinding CR is created, scans for both bound profiles begin with the related settings. Overall, this is the fastest way to begin scanning with the Compliance Operator.

5.6.8.5. Printing controls

Compliance standards are generally organized into a hierarchy as follows:

A benchmark is the top-level definition of a set of controls for a particular standard. For example, FedRAMP Moderate or Center for Internet Security (CIS) v.1.6.0.
A control describes a family of requirements that must be met in order to be in compliance with the benchmark. For example, FedRAMP AC-01 (access control policy and procedures).
A rule is a single check that is specific for the system being brought into compliance, and one or more of these rules map to a control.

The Compliance Operator handles the grouping of rules into a profile for a single benchmark. It can be difficult to determine which controls the set of rules in a profile satisfies.

Procedure

The oc compliance controls subcommand provides a report of the standards and controls that a given profile satisfies:

$ oc compliance controls profile ocp4-cis-node

Example output

+-----------+----------+
| FRAMEWORK | CONTROLS |
+-----------+----------+
| CIS-OCP   | 1.1.1    |
+           +----------+
|           | 1.1.10   |
+           +----------+
|           | 1.1.11   |
+           +----------+
...

5.6.8.6. Fetching compliance remediation details

The Compliance Operator provides remediation objects that are used to automate the changes required to make the cluster compliant. The fetch-fixes subcommand can help you understand exactly which configuration remediations are used. Use the fetch-fixes subcommand to extract the remediation objects from a profile, rule, or ComplianceRemediation object into a directory to inspect.
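The general invocation mirrors the fetch-raw subcommand shown earlier; the tokens below are placeholders:

$ oc compliance fetch-fixes <object-type> <object-name> -o <output-path>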
Procedure

View the remediations for a profile:

$ oc compliance fetch-fixes profile ocp4-cis -o /tmp

Example output

No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1
No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled'
No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup'
Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml
No fixes to persist for rule 'ocp4-api-server-audit-log-path'
No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa'
No fixes to persist for rule 'ocp4-api-server-auth-mode-node'
No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac'
No fixes to persist for rule 'ocp4-api-server-basic-auth'
No fixes to persist for rule 'ocp4-api-server-bind-address'
No fixes to persist for rule 'ocp4-api-server-client-ca'
Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml
Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml

1 The No fixes to persist warning is expected whenever there are rules in a profile that do not have a corresponding remediation, because either the rule cannot be remediated automatically or a remediation was not provided.

You can view a sample of the YAML file. The head command will show you the first 10 lines:

$ head /tmp/ocp4-api-server-audit-log-maxsize.yaml

Example output

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  maximumFileSizeMegabytes: 100

View the remediation from a ComplianceRemediation object created after a scan:

$ oc get complianceremediations -n openshift-compliance

Example output

NAME                                             STATE
ocp4-cis-api-server-encryption-provider-cipher   NotApplied
ocp4-cis-api-server-encryption-provider-config   NotApplied

$ oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp

Example output

Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml

You can view a sample of the YAML file. The head command will show you the first 10 lines:

$ head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml

Example output

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc

Warning

Use caution before applying remediations directly. Some remediations might not be applicable in bulk, such as the usbguard rules in the moderate profile. In these cases, allow the Compliance Operator to apply the rules because it addresses the dependencies and ensures that the cluster remains in a good state.

5.6.8.7. Viewing ComplianceCheckResult object details

When scans are finished running, ComplianceCheckResult objects are created for the individual scan rules. The view-result subcommand provides a human-readable output of the ComplianceCheckResult object details.

Procedure

Run:

$ oc compliance view-result ocp4-cis-scheduler-no-bind-address
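To find a check result name to pass to view-result, you can first list the results that a scan produced. A minimal sketch that reuses the scan-name label shown earlier in this section:

$ oc -n openshift-compliance get compliancecheckresults -l compliance.openshift.io/scan-name=ocp4-cis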
[ "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name==\"must-gather\")].image}')", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8", "apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: \"2022-10-19T12:06:49Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"43699\" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - 
rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight", "oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: \"2022-10-19T12:07:08Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"44819\" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 
severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1", "apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4", "compliance.openshift.io/product-type: Platform/Node", "apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: \"2022-10-18T20:21:00Z\" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: \"38840\" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable 
operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc get compliancesuites", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name_of_the_suite> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT", "oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name_of_the_compliance_scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7", "get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name_of_the_compliance_scan>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2", "get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 
filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3", "get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 
2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc -n openshift-compliance get profilebundles rhcos4 -oyaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc delete ssb --all -n openshift-compliance", "oc delete ss --all -n openshift-compliance", "oc delete suite --all -n openshift-compliance", "oc delete scan --all -n openshift-compliance", "oc delete profilebundle.compliance --all -n openshift-compliance", "oc delete sub --all -n openshift-compliance", "oc delete csv --all -n openshift-compliance", "oc delete project openshift-compliance", "project.project.openshift.io \"openshift-compliance\" deleted", "oc get project/openshift-compliance", "Error from server (NotFound): namespaces \"openshift-compliance\" not found", "oc explain scansettings", "oc explain scansettingbindings", "oc describe scansettings default -n openshift-compliance", "Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>", "Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: 
NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc create -f <file-name>.yaml -n openshift-compliance", "oc get compliancescan -w -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *", "oc create -f rs-workers.yaml", "oc get scansettings rs-on-workers -n openshift-compliance -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true", "oc get hostedcluster -A", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3", "oc create -n openshift-compliance -f mgmt-tp.yaml", "spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>", "apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" 
limits: 2 memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster", "oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges", "oc create -n openshift-compliance -f new-profile-node.yaml 1", "tailoredprofile.compliance.openshift.io/nist-moderate-modified created", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc create -n openshift-compliance -f new-scansettingbinding.yaml", "scansettingbinding.compliance.openshift.io/nist-moderate-modified created", "oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'", "{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }", "oc get pvc -n openshift-compliance rhcos4-moderate-worker", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m", "oc create -n openshift-compliance -f pod.yaml", "apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: 
\"/workers-scan-results\" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker", "oc cp pv-extract:/workers-scan-results -n openshift-compliance .", "oc delete pod pv-extract -n openshift-compliance", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'", "oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'", "NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'", "spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied", "echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"", "net.ipv4.conf.all.accept_redirects=0", "oc get nodes -n openshift-compliance", "NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.30.3", "oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=", "node/ip-10-0-166-81.us-east-2.compute.internal labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} 
nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"", "oc get mcp -w", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'", "oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=", "oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=", "NAME STATE workers-scan-no-empty-passwords Outdated", "oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":/spec/outdated}]'", "oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords", "NAME STATE workers-scan-no-empty-passwords Applied", "oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied", "oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master", "NAME AGE compliance-operator-kubelet-master 2m34s", "oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists", "oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc get mc", "75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true 
allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml", "securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created", "oc get -n openshift-compliance scc restricted-adjusted-compliance", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc get events -n openshift-compliance", "oc describe -n openshift-compliance compliancescan/cis-compliance", "oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'", "date -d @1596184628.955853 --utc", "oc get -n openshift-compliance profilebundle.compliance", "oc get -n openshift-compliance profile.compliance", "oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser", "oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4", "oc logs -n openshift-compliance pods/<pod-name>", "oc describe -n openshift-compliance pod/<pod-name> -c profileparser", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created", "oc get cronjobs", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m", "oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=", "oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels", "NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner", "oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod", "Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker 
complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>", "oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium", "oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc get mc | grep 75-", "75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s", "oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements", "Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod", "NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium", "oc logs -l workload=<workload_name> -c <container_name>", "spec: config: resources: limits: memory: 500Mi", "oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge", "kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "oc get pod ocp4-pci-dss-api-checks-pod -w", "NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m", "timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1", "oc apply -f scansetting.yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: 
rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2", "podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/", "W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.", "oc compliance fetch-raw <object-type> <object-name> -o <output-path>", "oc compliance fetch-raw scansettingbindings my-binding -o /tmp/", "Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master", "ls /tmp/ocp4-cis-node-master/", "ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2", "bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml", "ls resultsdir/worker-scan/", "worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2", "oc compliance rerun-now scansettingbindings my-binding", "Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'", "oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get scansettings -n openshift-compliance", "NAME AGE default 10m default-auto-apply 10m", "oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node", "Creating ScanSettingBinding my-binding", "oc compliance controls profile ocp4-cis-node", "+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 
1.1.10 | + +----------+ | | 1.1.11 | + +----------+", "oc compliance fetch-fixes profile ocp4-cis -o /tmp", "No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml", "head /tmp/ocp4-api-server-audit-log-maxsize.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100", "oc get complianceremediations -n openshift-compliance", "NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied", "oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp", "Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc", "oc compliance view-result ocp4-cis-scheduler-no-bind-address" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_and_compliance/compliance-operator
Installing and using the Migration Toolkit for Virtualization
Installing and using the Migration Toolkit for Virtualization Migration Toolkit for Virtualization 2.6 Migrating from VMware vSphere or Red Hat Virtualization to Red Hat OpenShift Virtualization Red Hat Modernization and Migration Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/installing_and_using_the_migration_toolkit_for_virtualization/index
Chapter 4. Using Jobs and DaemonSets
Chapter 4. Using Jobs and DaemonSets 4.1. Running background tasks on nodes automatically with daemon sets As an administrator, you can create and use daemon sets to run replicas of a pod on specific or all nodes in an OpenShift Container Platform cluster. A daemon set ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to the cluster. As nodes are removed from the cluster, those pods are removed through garbage collection. Deleting a daemon set will clean up the pods it created. You can use daemon sets to create shared storage, run a logging pod on every node in your cluster, or deploy a monitoring agent on every node. For security reasons, only cluster administrators can create daemon sets. For more information on daemon sets, see the Kubernetes documentation . Important Daemon set scheduling is incompatible with the project's default node selector. If you fail to disable it, the daemon set gets restricted by merging with the default node selector. This results in frequent pod recreates on the nodes that got unselected by the merged node selector, which in turn puts unwanted load on the cluster. 4.1.1. Scheduled by default scheduler A daemon set ensures that all eligible nodes run a copy of a pod. Normally, the node that a pod runs on is selected by the Kubernetes scheduler. However, daemon set pods were previously created and scheduled by the daemon set controller. That introduces the following issues: Inconsistent pod behavior: Normal pods waiting to be scheduled are created and in Pending state, but daemon set pods are not created in Pending state. This is confusing to the user. Pod preemption is handled by the default scheduler. When preemption is enabled, the daemon set controller will make scheduling decisions without considering pod priority and preemption. The ScheduleDaemonSetPods feature, enabled by default in OpenShift Container Platform, lets you schedule daemon sets using the default scheduler instead of the daemon set controller, by adding the NodeAffinity term to the daemon set pods, instead of the spec.nodeName term. The default scheduler is then used to bind the pod to the target host. If node affinity of the daemon set pod already exists, it is replaced. The daemon set controller only performs these operations when creating or modifying daemon set pods, and no changes are made to the spec.template of the daemon set. nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name In addition, a node.kubernetes.io/unschedulable:NoSchedule toleration is added automatically to daemon set pods. The default scheduler ignores unschedulable Nodes when scheduling daemon set pods. 4.1.2. Creating daemonsets When creating daemon sets, the nodeSelector field is used to indicate the nodes on which the daemon set should deploy replicas. Prerequisites Before you start using daemon sets, disable the default project-wide node selector in your namespace by setting the namespace annotation openshift.io/node-selector to an empty string: USD oc patch namespace myproject -p \ '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}' If you are creating a new project, overwrite the default node selector: `oc adm new-project <name> --node-selector=""`.
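Before creating the daemon set in the following procedure, it can help to confirm that the intended nodes actually carry the label that the daemon set's nodeSelector expects. A minimal sketch, assuming the role=worker label used by the example daemon set below; the node name placeholder is illustrative:
# label a node so that it matches the daemon set's nodeSelector (node name is illustrative)
oc label node <node_name> role=worker
# list the nodes that will receive a daemon set pod replica
oc get nodes -l role=worker
# confirm that the openshift.io/node-selector annotation on the namespace is now empty
oc get namespace myproject -o yaml
If the label query returns no nodes, the daemon set is still created, but it schedules no pods until a matching label is added to at least one node.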
Procedure To create a daemon set: Define the daemon set yaml file: apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 1 The label selector that determines which pods belong to the daemon set. 2 The pod template's label selector. Must match the label selector above. 3 The node selector that determines on which nodes pod replicas should be deployed. A matching label must be present on the node. Create the daemon set object: USD oc create -f daemonset.yaml To verify that the pods were created, and that each node has a pod replica: Find the daemonset pods: USD oc get pods Example output hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m View the pods to verify the pod has been placed onto the node: USD oc describe pod/hello-daemonset-cx6md|grep Node Example output Node: openshift-node01.hostname.com/10.14.20.134 USD oc describe pod/hello-daemonset-e3md9|grep Node Example output Node: openshift-node02.hostname.com/10.14.20.137 Important If you update a daemon set pod template, the existing pod replicas are not affected. If you delete a daemon set and then create a new daemon set with a different template but the same label selector, it recognizes any existing pod replicas as having matching labels and thus does not update them or create new replicas despite a mismatch in the pod template. If you change node labels, the daemon set adds pods to nodes that match the new labels and deletes pods from nodes that do not match the new labels. To update a daemon set, force new pod replicas to be created by deleting the old replicas or nodes. 4.2. Running tasks in pods using jobs A job executes a task in your OpenShift Container Platform cluster. A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job will clean up any pod replicas it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Sample Job specification apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 1 The pod replicas a job should run in parallel. 2 Successful pod completions are needed to mark a job completed. 3 The maximum duration the job can run. 4 The number of retries for a job. 5 The template for the pod the controller creates. 6 The restart policy of the pod. See the Kubernetes documentation for more information about jobs. 4.2.1. Understanding jobs and cron jobs A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job cleans up any pods it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. There are two possible resource types that allow creating run-once objects in OpenShift Container Platform: Job A regular job is a run-once object that creates a task and ensures the job finishes. 
There are three main types of task suitable to run as a job: Non-parallel jobs: A job that starts only one pod, unless the pod fails. The job is complete as soon as its pod terminates successfully. Parallel jobs with a fixed completion count: A job that starts multiple pods. The job represents the overall task and is complete when there is one successful pod for each value in the range 1 to the completions value. Parallel jobs with a work queue: A job with multiple parallel worker processes in a given pod. OpenShift Container Platform coordinates pods to determine what each should work on or use an external queue service. Each pod is independently capable of determining whether or not all peer pods are complete and that the entire job is done. When any pod from the job terminates with success, no new pods are created. When at least one pod has terminated with success and all pods are terminated, the job is successfully completed. When any pod has exited with success, no other pod should be doing any work for this task or writing any output. Pods should all be in the process of exiting. For more information about how to make use of the different types of job, see Job Patterns in the Kubernetes documentation. Cron job A job can be scheduled to run multiple times, using a cron job. A cron job builds on a regular job by allowing you to specify how the job should be run. Cron jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Cron jobs are useful for creating periodic and recurring tasks, like running backups or sending emails. Cron jobs can also schedule individual tasks for a specific time, such as if you want to schedule a job for a low activity period. A cron job creates a Job object based on the timezone configured on the control plane node that runs the cronjob controller. Warning A cron job creates a Job object approximately once per execution time of its schedule, but there are circumstances in which it fails to create a job or two jobs might be created. Therefore, jobs must be idempotent and you must configure history limits. 4.2.1.1. Understanding how to create jobs Both resource types require a job configuration that consists of the following key parts: A pod template, which describes the pod that OpenShift Container Platform creates. The parallelism parameter, which specifies how many pods running in parallel at any point in time should execute a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . The completions parameter, specifying how many successful pod completions are needed to finish a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . For parallel jobs with a fixed completion count, specify a value. For parallel jobs with a work queue, leave unset. When unset, defaults to the parallelism value. 4.2.1.2. Understanding how to set a maximum duration for jobs When defining a job, you can define its maximum duration by setting the activeDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when the first pod gets scheduled in the system, and defines how long a job can be active. It tracks the overall time of an execution. After reaching the specified timeout, the job is terminated by OpenShift Container Platform. 4.2.1.3.
Understanding how to set a job back off policy for pod failure A job can be considered failed after a set number of retries due to a logical error in configuration or other similar reasons. Failed pods associated with the job are recreated by the controller with an exponential back off delay ( 10s , 20s , 40s ...) capped at six minutes. The limit is reset if no new failed pods appear between controller checks. Use the spec.backoffLimit parameter to set the number of retries for a job. 4.2.1.4. Understanding how to configure a cron job to remove artifacts Cron jobs can leave behind artifact resources such as jobs or pods. As a user, it is important to configure history limits so that old jobs and their pods are properly cleaned up. There are two fields within the cron job's spec responsible for that: .spec.successfulJobsHistoryLimit . The number of successful finished jobs to retain (defaults to 3). .spec.failedJobsHistoryLimit . The number of failed finished jobs to retain (defaults to 1). Tip Delete cron jobs that you no longer need: USD oc delete cronjob/<cron_job_name> Doing this prevents them from generating unnecessary artifacts. You can suspend further executions by setting spec.suspend to true. All subsequent executions are suspended until you reset it to false . 4.2.1.5. Known limitations The job specification restart policy only applies to the pods , and not the job controller . However, the job controller is hard-coded to keep retrying jobs to completion. As such, restartPolicy: Never or --restart=Never results in the same behavior as restartPolicy: OnFailure or --restart=OnFailure . That is, when a job fails, it is restarted automatically until it succeeds (or is manually discarded). The policy only sets which subsystem performs the restart. With the Never policy, the job controller performs the restart. With each attempt, the job controller increments the number of failures in the job status and creates new pods. This means that with each failed attempt, the number of pods increases. With the OnFailure policy, kubelet performs the restart. Each attempt does not increment the number of failures in the job status. In addition, kubelet will retry failed jobs by starting pods on the same nodes. 4.2.2. Creating jobs You create a job in OpenShift Container Platform by creating a job object. Procedure To create a job: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 Optionally, specify how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, defaults to 1 . Optionally, specify how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, defaults to 1 . For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset, defaults to the parallelism value. Optionally, specify the maximum duration the job can run. Optionally, specify the number of retries for a job. This field defaults to six. Specify the template for the pod the controller creates. Specify the restart policy of the pod: Never . Do not restart the job. OnFailure . Restart the job only if it fails. Always . Always restart the job.
For details on how OpenShift Container Platform uses restart policy with failed containers, see the Example States in the Kubernetes documentation. Create the job: USD oc create -f <file-name>.yaml Note You can also create and launch a job from a single command using oc create job . The following command creates and launches a job similar to the one specified in the example: USD oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)' 4.2.3. Creating cron jobs You create a cron job in OpenShift Container Platform by creating a cron job object. Procedure To create a cron job: Create a YAML file similar to the following: apiVersion: batch/v1beta1 kind: CronJob metadata: name: pi spec: schedule: "*/1 * * * *" 1 concurrencyPolicy: "Replace" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: "cronjobpi" spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 9 1 Schedule for the job specified in cron format . In this example, the job will run every minute. 2 An optional concurrency policy, specifying how to treat concurrent jobs within a cron job. Only one of the following concurrency policies may be specified. If not specified, this defaults to allowing concurrent executions. Allow allows cron jobs to run concurrently. Forbid forbids concurrent runs, skipping the next run if the previous run has not finished yet. Replace cancels the currently running job and replaces it with a new one. 3 An optional deadline (in seconds) for starting the job if it misses its scheduled time for any reason. Missed job executions will be counted as failed ones. If not specified, there is no deadline. 4 An optional flag allowing the suspension of a cron job. If set to true , all subsequent executions will be suspended. 5 The number of successful finished jobs to retain (defaults to 3). 6 The number of failed finished jobs to retain (defaults to 1). 7 Job template. This is similar to the job example. 8 Sets a label for jobs spawned by this cron job. 9 The restart policy of the pod. This does not apply to the job controller. Note The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. Create the cron job: USD oc create -f <file-name>.yaml Note You can also create and launch a cron job from a single command using oc create cronjob . The following command creates and launches a cron job similar to the one specified in the example: USD oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)' With oc create cronjob , the --schedule option accepts schedules in cron format .
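The suspend flag described in the callouts above can also be toggled on an existing cron job without editing its YAML definition. A minimal sketch using oc patch, assuming the pi cron job from the example; this is one possible approach rather than the only one:
# pause future executions of the cron job
oc patch cronjob/pi -p '{"spec":{"suspend":true}}' --type=merge
# resume scheduling later
oc patch cronjob/pi -p '{"spec":{"suspend":false}}' --type=merge
# verify the change in the SUSPEND column
oc get cronjob/pi
Jobs that are already running are not affected; only new executions are skipped while suspend is true.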
[ "nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name", "oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'", "`oc adm new-project <name> --node-selector=\"\"`.", "apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10", "oc create -f daemonset.yaml", "oc get pods", "hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m", "oc describe pod/hello-daemonset-cx6md|grep Node", "Node: openshift-node01.hostname.com/10.14.20.134", "oc describe pod/hello-daemonset-e3md9|grep Node", "Node: openshift-node02.hostname.com/10.14.20.137", "apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6", "oc delete cronjob/<cron_job_name>", "apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6", "oc create -f <file-name>.yaml", "oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'", "apiVersion: batch/v1beta1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 concurrencyPolicy: \"Replace\" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 9", "oc create -f <file-name>.yaml", "oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/nodes/using-jobs-and-daemonsets
Specialized hardware and driver enablement
Specialized hardware and driver enablement OpenShift Container Platform 4.17 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc adm release info quay.io/openshift-release-dev/ocp-release:4.17.z-x86_64 --image-for=driver-toolkit", "oc adm release info quay.io/openshift-release-dev/ocp-release:4.17.z-aarch64 --image-for=driver-toolkit", "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b53883ca2bac5925857148c4a1abc300ced96c222498e3bc134fe7ce3a1dd404", "podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>", "oc new-project simple-kmod-demo", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo", "OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})", "DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)", "sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml", "oc create -f 0000-buildconfig.yaml", "apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: [\"modprobe\", \"-v\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] preStop: exec: 
command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc create -f 1000-drivercontainer.yaml", "oc get pod -n simple-kmod-demo", "NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s", "oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"", "oc create -f nfd-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd", "oc create -f nfd-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nfd-sub.yaml", "oc project openshift-nfd", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.17 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12", "{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> 
docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get nodefeaturediscovery nfd-instance -o yaml", "oc get pods -n <nfd_namespace>", "core: sleepInterval: 60s 1", "core: sources: - system - custom", "core: labelWhiteList: '^cpu-cpuid'", "core: noPublish: true 1", "sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]", "sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]", "sources: kernel: kconfigFile: \"/path/to/kconfig\"", "sources: kernel: configOpts: [NO_HZ, X86, DMI]", "sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]", "sources: pci: deviceLabelFields: [class, vendor, device]", "sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]", "sources: pci: deviceLabelFields: [class, vendor]", "source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}", "oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml", "apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 
30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3", "podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help", "nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key", "nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml", "nfd-topology-updater -no-publish", "nfd-topology-updater -oneshot -no-publish", "nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock", "nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443", "nfd-topology-updater -server-name-override=localhost", "nfd-topology-updater -sleep-interval=1h", "nfd-topology-updater -watch-namespace=rte", "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s", "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc apply -f kmm-security-constraint.yaml", "oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller -n openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s", "oc edit configmap -n \"USDnamespace\" kmm-operator-manager-config", "healthProbeBindAddress: :8081 job: gcDelay: 1h leaderElection: enabled: true resourceID: kmm.sigs.x-k8s.io webhook: disableHTTP2: true # 
CVE-2023-44487 port: 9443 metrics: enableAuthnAuthz: true disableHTTP2: true # CVE-2023-44487 bindAddress: 0.0.0.0:8443 secureServing: true worker: runAsUser: 0 seLinuxType: spc_t setFirmwareClassPath: /var/lib/firmware", "oc delete pod -n \"<namespace>\" -l app.kubernetes.io/component=kmm", "oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default", "spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b", "oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]", "spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModulesToRemove: [mod_a, mod_b]", "spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: \"some.registry/org/my-kmod:USD{KERNEL_FULL_VERSION}\" inTreeModulesToRemove: [<module_name>, <module_name>]", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" 8 build: buildArgs: 9 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 10 baseImageRegistryTLS: 11 insecure: false insecureSkipTLSVerify: false 12 dockerfileConfigMap: 13 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 14 keySecret: name: <key_secret> 15 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 16 insecure: false 17 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 18 devicePlugin: 19 container: image: some.registry/org/device-plugin:latest 20 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 21 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 22 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 23 imageRepoSecret: 24 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"", "ARG DTK_AUTO FROM USD{DTK_AUTO} as builder # Build steps # FROM ubi9/ubi ARG KERNEL_FULL_VERSION RUN dnf update && dnf install -y kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ Create the symbolic link RUN ln -s /lib/modules/USD{KERNEL_FULL_VERSION} /opt/lib/modules/USD{KERNEL_FULL_VERSION}/host RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "depmod -b /opt USD{KERNEL_FULL_VERSION}+`.", "apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko 
/opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8", "ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv", "oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>", "oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>", "cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64", "cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64", "apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>", "oc apply -f <yaml_filename>", "oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text", "oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d", "--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<module_name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image_name> 2 sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image_name> 3 keySecret: # a secret holding the private secureboot key with the key 'key' name: <private_key_secret_name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate_secret_name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64", "--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: <namespace> 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as 
builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: <namespace> 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: <final_driver_container_name> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private_key_secret_name> certSecret: name: <certificate_secret_name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'", "--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED 1 value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: 
all-managed-clusters spec: clusterSelector: 2 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm", "oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>-", "oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version>", "ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error)", "kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: \"\" paused: false maxUnavailable: 1", "metadata: labels: machineconfiguration.opensfhit.io/role: master", "metadata: labels: machineconfiguration.opensfhit.io/role: worker", "modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'", "FROM registry.redhat.io/ubi9/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi9/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1", "oc logs -fn openshift-kmm deployments/kmm-operator-controller", "oc logs -fn openshift-kmm deployments/kmm-operator-webhook-server", "oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller", "oc logs -fn openshift-kmm deployments/kmm-operator-hub-webhook-server", "oc describe modules.kmm.sigs.x-k8s.io kmm-ci-a [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BuildCreated 2m29s kmm Build created for kernel 6.6.2-201.fc39.x86_64 Normal BuildSucceeded 63s kmm Build job succeeded for kernel 6.6.2-201.fc39.x86_64 Normal SignCreated 64s (x2 over 64s) kmm Sign created for kernel 6.6.2-201.fc39.x86_64 Normal SignSucceeded 57s kmm Sign job succeeded for kernel 6.6.2-201.fc39.x86_64", "oc describe node my-node [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- [...] 
Normal ModuleLoaded 4m17s kmm Module default/kmm-ci-a loaded into the kernel Normal ModuleUnloaded 2s kmm Module default/kmm-ci-a unloaded from the kernel", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather", "oc logs -fn openshift-kmm deployments/kmm-operator-controller", "I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 
controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u", "oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller", "I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting 
server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 
11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/specialized_hardware_and_driver_enablement/index
17.4.3.4. Resource Management Options
17.4.3.4. Resource Management Options The xinetd daemon can add a basic level of protection against Denial of Service (DoS) attacks. Below is a list of directives which can aid in limiting the effectiveness of such attacks: per_source - Defines the maximum number of instances for a service per source IP address. It accepts only integers as an argument and can be used in both xinetd.conf and in the service-specific configuration files in the xinetd.d/ directory. cps - Defines the maximum number of connections per second. This directive takes two integer arguments separated by white space. The first is the maximum number of connections allowed to the service per second. The second is the number of seconds xinetd must wait before re-enabling the service. It accepts only integers as arguments and can be used in both xinetd.conf and in the service-specific configuration files in the xinetd.d/ directory. max_load - Defines the CPU usage threshold for a service. It accepts a floating-point number as an argument. There are more resource management options available for xinetd . Refer to the chapter titled Server Security in the Security Guide for more information, as well as the xinetd.conf man page.
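As a brief illustration, the three directives can be combined in a service file under the xinetd.d/ directory. The service name and the limit values below are only examples, and the usual attributes of a real service entry (such as socket_type, wait, user, and server) are omitted for brevity:

service example
{
        # Allow at most 10 simultaneous instances per source IP address
        per_source      = 10
        # Allow at most 25 connections per second; when exceeded,
        # disable the service for 30 seconds
        cps             = 25 30
        # Stop accepting requests when the CPU usage threshold of 3.5 is exceeded
        max_load        = 3.5
}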
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch17s04s03s04
Chapter 3. September 2024
Chapter 3. September 2024 3.1. Unattributed Storage for AWS Unattributed Storage is a type of project that gets created when cost management is unable to correlate a portion of the cloud cost to an OpenShift namespace. This storage type was originally only available for Azure, but now it is also available for AWS. For more information, see Unattributed Storage project for Azure and AWS .
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/whats_new_in_cost_management/september_2024
2.7. Active Directory Authentication (Non-Kerberos)
2.7. Active Directory Authentication (Non-Kerberos) See Example 2.2, "Example of JBoss EAP LDAP login module configuration" for a non-Kerberos Active Directory Authentication configuration example.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/active_directory_authentication_non-kerberos
Chapter 3. Using Fence Agents Remediation
Chapter 3. Using Fence Agents Remediation You can use the Fence Agents Remediation Operator to automatically remediate unhealthy nodes, similar to the Self Node Remediation Operator. FAR is designed to run an existing set of upstream fencing agents in environments with a traditional API endpoint, for example, IPMI, for power cycling cluster nodes, while their pods are quickly evicted based on the remediation strategy . 3.1. About the Fence Agents Remediation Operator The Fence Agents Remediation (FAR) Operator uses external tools to fence unhealthy nodes. These tools are a set of fence agents, where each fence agent can be used in a different environment to fence a node, using a traditional Application Programming Interface (API) call that reboots the node. By doing so, FAR can minimize downtime for stateful applications, restore compute capacity if transient failures occur, and increase the availability of workloads. FAR not only fences a node when it becomes unhealthy, but also tries to remediate the node from being unhealthy to healthy. It adds a taint to evict stateless pods, fences the node with a fence agent, and after a reboot, it completes the remediation with resource deletion to remove any remaining workloads (mostly stateful workloads). Adding the taint and deleting the workloads accelerates the workload rescheduling. The Operator watches for new or deleted custom resources (CRs) called FenceAgentsRemediation which trigger a fence agent to remediate a node, based on the CR's name. FAR uses the NodeHealthCheck controller to detect the health of a node in the cluster. When a node is identified as unhealthy, the NodeHealthCheck resource creates the FenceAgentsRemediation CR, based on the FenceAgentsRemediationTemplate CR, which then triggers the Fence Agents Remediation Operator. FAR uses a fence agent to fence a Kubernetes node. Generally, fencing is the process of taking an unresponsive or unhealthy computer into a safe state and isolating it. A fence agent is software that uses a management interface to perform fencing, mostly power-based fencing that enables power-cycling, resetting, or turning off the computer. An example fence agent is fence_ipmilan which is used for Intelligent Platform Management Interface (IPMI) environments. apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediation metadata: name: node-name 1 namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 2 1 The node-name should match the name of the unhealthy cluster node. 2 Specifies the remediation strategy for the nodes. For more information on the remediation strategies available, see the Understanding the Fence Agents Remediation Template configuration topic. The Operator includes a set of fence agents, also available in the Red Hat High Availability Add-On, which use a management interface, such as IPMI or an API, to provision or reboot a node for bare metal servers, virtual machines, and cloud platforms. 3.2. Installing the Fence Agents Remediation Operator by using the web console You can use the Red Hat OpenShift web console to install the Fence Agents Remediation Operator. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, navigate to Operators OperatorHub . Select the Fence Agents Remediation Operator, or FAR, from the list of available Operators, and then click Install . 
Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-workload-availability namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-workload-availability namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the log of the fence-agents-remediation-controller-manager pod for any reported issues. 3.3. Installing the Fence Agents Remediation Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Fence Agents Remediation Operator. You can install the Fence Agents Remediation Operator in your own namespace or in the openshift-workload-availability namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a Namespace custom resource (CR) for the Fence Agents Remediation Operator: Define the Namespace CR and save the YAML file, for example, workload-availability-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability To create the Namespace CR, run the following command: USD oc create -f workload-availability-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability To create the OperatorGroup CR, run the following command: USD oc create -f workload-availability-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, fence-agents-remediation-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: fence-agents-remediation-subscription namespace: openshift-workload-availability 1 spec: channel: stable name: fence-agents-remediation source: redhat-operators sourceNamespace: openshift-marketplace package: fence-agents-remediation 1 Specify the Namespace where you want to install the Fence Agents Remediation Operator, for example, the openshift-workload-availability outlined earlier in this procedure. You can install the Subscription CR for the Fence Agents Remediation Operator in the openshift-workload-availability namespace where there is already a matching OperatorGroup CR. To create the Subscription CR, run the following command: USD oc create -f fence-agents-remediation-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-workload-availability Example output NAME DISPLAY VERSION REPLACES PHASE fence-agents-remediation.v0.3.0 Fence Agents Remediation Operator 0.3.0 fence-agents-remediation.v0.2.1 Succeeded Verify that the Fence Agents Remediation Operator is up and running: USD oc get deployment -n openshift-workload-availability Example output NAME READY UP-TO-DATE AVAILABLE AGE fence-agents-remediation-controller-manager 2/2 2 2 110m 3.4. Configuring the Fence Agents Remediation Operator You can use the Fence Agents Remediation Operator to create the FenceAgentsRemediationTemplate Custom Resource (CR), which is used by the Node Health Check Operator (NHC). 
This CR defines the fence agent to be used in the cluster with all the required parameters for remediating the nodes. There may be many FenceAgentsRemediationTemplate CRs, at most one for each fence agent, and when NHC is being used it can choose the FenceAgentsRemediationTemplate as the remediationTemplate to be used for power-cycling the node. The FenceAgentsRemediationTemplate CR resembles the following YAML file: apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-template-fence-ipmilan namespace: openshift-workload-availability spec: template: spec: agent: fence_ipmilan 1 nodeparameters: 2 --ipport: master-0-0: '6230' master-0-1: '6231' master-0-2: '6232' worker-0-0: '6233' worker-0-1: '6234' worker-0-2: '6235' sharedparameters: 3 '--action': reboot '--ip': 192.168.123.1 '--lanplus': '' '--password': password '--username': admin retryCount: '5' 4 retryInterval: '5s' 5 timeout: '60s' 6 1 Displays the name of the fence agent to be executed, for example, fence_ipmilan . 2 Displays the node-specific parameters for executing the fence agent, for example, ipport . 3 Displays the cluster-wide parameters for executing the fence agent, for example, username . 4 Displays the number of times to retry the fence agent command in case of failure. The default number of attempts is 5. 5 Displays the interval between retries in seconds. The default is 5 seconds. 6 Displays the timeout for the fence agent command. The default is 60 seconds. For values of 60 seconds or greater, the timeout value is expressed in both minutes and seconds in the YAML file. 3.4.1. Understanding the Fence Agents Remediation Template configuration The Fence Agents Remediation Operator also creates the FenceAgentsRemediationTemplate Custom Resource Definition (CRD). This CRD defines the remediation strategy for the nodes that is aimed to recover workloads faster. The following remediation strategies are available: ResourceDeletion This remediation strategy removes the pods on the node. This strategy recovers workloads faster. OutOfServiceTaint This remediation strategy implicitly causes the removal of the pods and associated volume attachments on the node. It achieves this by placing the OutOfServiceTaint taint on the node. The OutOfServiceTaint strategy also represents a non-graceful node shutdown. A non-graceful node shutdown occurs when a node is shut down and not detected, instead of triggering an in-operating system shutdown. This strategy has been supported on technology preview since OpenShift Container Platform version 4.13, and on general availability since OpenShift Container Platform version 4.15. The FenceAgentsRemediationTemplate CR resembles the following YAML file: apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2 1 Specifies the type of remediation template based on the remediation strategy. Replace <remediation_object> with either resource or taint ; for example, fence-agents-remediation-resource-deletion-template . 2 Specifies the remediation strategy. The remediation strategy can either be ResourceDeletion or OutOfServiceTaint . 3.5. Troubleshooting the Fence Agents Remediation Operator 3.5.1. 
General troubleshooting Issue You want to troubleshoot issues with the Fence Agents Remediation Operator. Resolution Check the Operator logs. USD oc logs <fence-agents-remediation-controller-manager-name> -c manager -n <namespace-name> 3.5.2. Unsuccessful remediation Issue An unhealthy node was not remediated. Resolution Verify that the FenceAgentsRemediation CR was created by running the following command: USD oc get far -A If the NodeHealthCheck controller did not create the FenceAgentsRemediation CR when the node turned unhealthy, check the logs of the NodeHealthCheck controller. Additionally, ensure that the NodeHealthCheck CR includes the required specification to use the remediation template. If the FenceAgentsRemediation CR was created, ensure that its name matches the unhealthy node object. 3.5.3. Fence Agents Remediation Operator resources exist after uninstalling the Operator Issue The Fence Agents Remediation Operator resources, such as the remediation CR and the remediation template CR, exist after uninstalling the Operator. Resolution To remove the Fence Agents Remediation Operator resources, you can delete the resources by selecting the "Delete all operand instances for this operator" checkbox before uninstalling. This checkbox feature is only available in Red Hat OpenShift since version 4.13. For all versions of Red Hat OpenShift, you can delete the resources by running the following relevant command for each resource type: USD oc delete far <fence-agents-remediation> -n <namespace> USD oc delete fartemplate <fence-agents-remediation-template> -n <namespace> The remediation CR far must be created and deleted by the same entity, for example, NHC. If the remediation CR far is still present, it is deleted, together with the FAR operator. The remediation template CR fartemplate only exists if you use FAR with NHC. When the FAR operator is deleted using the web console, the remediation template CR fartemplate is also deleted. 3.6. Gathering data about the Fence Agents Remediation Operator To collect debugging information about the Fence Agents Remediation Operator, use the must-gather tool. For information about the must-gather image for the Fence Agents Remediation Operator, see Gathering data about specific features . 3.7. Agents supported by the Fence Agents Remediation Operator This section describes the agents currently supported by the Fence Agents Remediation Operator. Most of the supported agents can be grouped by the node's hardware proprietary and usage, as follows: BareMetal Virtualization Intel HP IBM VMware Cisco APC Dell Other Table 3.1. BareMetal - Using the Redfish management interface is recommended, unless it is not supported. Agent Description fence_redfish An I/O Fencing agent that can be used with Out-of-Band controllers that support Redfish APIs. fence_ipmilan [a] An I/O Fencing agent that can be used with machines controlled by IPMI . [a] This description also applies for the agents fence_ilo3 , fence_ilo4 , fence_ilo5 , fence_imm , fence_idrac , and fence_ipmilanplus . Table 3.2. Virtualization Agent Description fence_rhevm An I/O Fencing agent that can be used with RHEV-M REST API to fence virtual machines. fence_virt [a] An I/O Fencing agent that can be used with virtual machines. [a] This description also applies for the agent fence_xvm . Table 3.3. Intel Agent Description fence_amt_ws An I/O Fencing agent that can be used with Intel AMT (WS). 
fence_intelmodular An I/O Fencing agent that can be used with Intel Modular device (tested on Intel MFSYS25, should also work with MFSYS35). Table 3.4. HP - agents for the iLO management interface or BladeSystem. Agent Description fence_ilo [a] An I/O Fencing agent that can be used for HP servers with the Integrated Light Out ( iLO ) PCI card. fence_ilo_ssh [b] A fencing agent that can be used to connect to an iLO device. It logs into device via ssh and reboot a specified outlet. fence_ilo_moonshot An I/O Fencing agent that can be used with HP Moonshot iLO . fence_ilo_mp An I/O Fencing agent that can be used with HP iLO MP. fence_hpblade An I/O Fencing agent that can be used with HP BladeSystem and HP Integrity Superdome X. [a] This description also applies for the agent fence_ilo2 . [b] This description also applies for the agents fence_ilo3_ssh , fence_ilo4_ssh , and fence_ilo5_ssh . Table 3.5. IBM Agent Description fence_bladecenter An I/O Fencing agent that can be used with IBM Bladecenters with recent enough firmware that includes telnet support. fence_ibmblade An I/O Fencing agent that can be used with IBM BladeCenter chassis. fence_ipdu An I/O Fencing agent that can be used with the IBM iPDU network power switch. fence_rsa An I/O Fencing agent that can be used with the IBM RSA II management interface. Table 3.6. VMware Agent Description fence_vmware_rest An I/O Fencing agent that can be used with VMware API to fence virtual machines. fence_vmware_soap An I/O Fencing agent that can be used with the virtual machines managed by VMWare products that have SOAP API v4.1+. Table 3.7. Cisco Agent Description fence_cisco_mds An I/O Fencing agent that can be used with any Cisco MDS 9000 series with SNMP enabled device. fence_cisco_ucs An I/O Fencing agent that can be used with Cisco UCS to fence machines. Table 3.8. APC Agent Description fence_apc An I/O Fencing agent that can be used with the APC network power switch. fence_apc_snmp An I/O Fencing agent that can be used with the APC network power switch or Tripplite PDU devices. Table 3.9. Dell Agent Description fence_drac5 An I/O Fencing agent that can be used with the Dell Remote Access Card v5 or CMC (DRAC). Table 3.10. Other - agents for usage not listed in the tables. Agent Description fence_brocade An I/O Fencing agent that can be used with Brocade FC switches. fence_compute A resource that can be used to tell Nova that compute nodes are down and to reschedule flagged instances. fence_eaton_snmp An I/O Fencing agent that can be used with the Eaton network power switch. fence_emerson An I/O Fencing agent that can be used with MPX and MPH2 managed rack PDU. fence_eps An I/O Fencing agent that can be used with the ePowerSwitch 8M+ power switch to fence connected machines. fence_evacuate A resource that can be used to reschedule flagged instances. fence_heuristics_ping A resource that can be used with ping-heuristics to control execution of another fence agent on the same fencing level. fence_ifmib An I/O Fencing agent that can be used with any SNMP IF-MIB capable device. fence_kdump An I/O Fencing agent that can be used with the kdump crash recovery service. fence_mpath An I/O Fencing agent that can be used with SCSI-3 persistent reservations to control access multipath devices. fence_rsb An I/O Fencing agent that can be used with the Fujitsu-Siemens RSB management interface. fence_sbd An I/O Fencing agent that can be used in environments where sbd can be used (shared storage). 
fence_scsi An I/O Fencing agent that can be used with SCSI-3 persistent reservations to control access to shared storage devices. fence_wti An I/O Fencing agent that can be used with the WTI Network Power Switch (NPS). 3.8. Additional resources Using Operator Lifecycle Manager on restricted networks . Deleting Operators from a cluster
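As described above, FAR is normally triggered by the Node Health Check Operator, which must reference a FenceAgentsRemediationTemplate CR in its remediationTemplate field. The following NodeHealthCheck CR is only an illustrative sketch: the selector, minHealthy value, and unhealthy condition are assumptions and are not taken from this chapter, and the template name reuses the fence_ipmilan example from the configuration section:

apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nhc-far-example           # hypothetical name
spec:
  minHealthy: 51%                 # assumed value
  selector:
    matchExpressions:
      - key: node-role.kubernetes.io/worker
        operator: Exists
  remediationTemplate:
    apiVersion: fence-agents-remediation.medik8s.io/v1alpha1
    kind: FenceAgentsRemediationTemplate
    name: fence-agents-remediation-template-fence-ipmilan
    namespace: openshift-workload-availability
  unhealthyConditions:
    - type: Ready
      status: "False"
      duration: 300s              # assumed value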
[ "apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediation metadata: name: node-name 1 namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 2", "apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability", "oc create -f workload-availability-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability", "oc create -f workload-availability-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: fence-agents-remediation-subscription namespace: openshift-workload-availability 1 spec: channel: stable name: fence-agents-remediation source: redhat-operators sourceNamespace: openshift-marketplace package: fence-agents-remediation", "oc create -f fence-agents-remediation-subscription.yaml", "oc get csv -n openshift-workload-availability", "NAME DISPLAY VERSION REPLACES PHASE fence-agents-remediation.v0.3.0 Fence Agents Remediation Operator 0.3.0 fence-agents-remediation.v0.2.1 Succeeded", "oc get deployment -n openshift-workload-availability", "NAME READY UP-TO-DATE AVAILABLE AGE fence-agents-remediation-controller-manager 2/2 2 2 110m", "apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-template-fence-ipmilan namespace: openshift-workload-availability spec: template: spec: agent: fence_ipmilan 1 nodeparameters: 2 --ipport: master-0-0: '6230' master-0-1: '6231' master-0-2: '6232' worker-0-0: '6233' worker-0-1: '6234' worker-0-2: '6235' sharedparameters: 3 '--action': reboot '--ip': 192.168.123.1 '--lanplus': '' '--password': password '--username': admin retryCount: '5' 4 retryInterval: '5s' 5 timeout: '60s' 6", "apiVersion: fence-agents-remediation.medik8s.io/v1alpha1 kind: FenceAgentsRemediationTemplate metadata: name: fence-agents-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2", "oc logs <fence-agents-remediation-controller-manager-name> -c manager -n <namespace-name>", "oc get far -A", "oc delete far <fence-agents-remediation> -n <namespace>", "oc delete fartemplate <fence-agents-remediation-template> -n <namespace>" ]
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.3/html/remediation_fencing_and_maintenance/fence-agents-remediation-operator-remediate-nodes
4.127. libatasmart
4.127. libatasmart 4.127.1. RHBA-2012:0703 - libatasmart bug fix update Updated libatasmart packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libatasmart packages contain a small and lightweight parser library for ATA S.M.A.R.T. hard disk health monitoring. Bug Fix BZ# 824918 Due to libatasmart incorrectly calculating the number of bad sectors, certain tools, for example gnome-disk-utility, could erroneously report hard disks with Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T) as failing when logging in GNOME. This update corrects the bad sector calculation, which ensures that tools such as gnome-disk-utility do not report false positive warnings in this scenario. All users of libatasmart are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libatasmart
19.6. Setting Account Lockout Policies
19.6. Setting Account Lockout Policies A brute force attack occurs when a malefactor attempts to guess a password by simply slamming the server with multiple login attempts. An account lockout policy prevents brute force attacks by blocking an account from logging into the system after a certain number of login failures - even if the correct password is subsequently entered. Note A user account can be manually unlocked by an administrator using the ipa user-unlock command. Refer to Section 9.6, "Unlocking User Accounts After Password Failures" . 19.6.1. In the UI These attributes are available in the password policy form when a group-level password policy is created or when any password policy (including the global password policy) is edited. Click the Policy tab, and then click the Password Policies subtab. Click the name of the policy to edit. Set the account lockout attribute values. There are three parts to the account lockout policy: The number of failed login attempts before the account is locked ( Max Failures ). The time after a failed login attempt before the counter resets ( Failure reset interval ). Since mistakes do happen honestly, the count of failed attempts is not kept forever; it naturally lapses after a certain amount of time. This is in seconds. How long an account is locked after the max number of failures is reached ( Lockout duration ). This is in seconds. 19.6.2. In the CLI There are three parts to the account lockout policy: The number of failed login attempts before the account is locked ( --maxfail ). How long an account is locked after the max number of failures is reached ( --lockouttime ). This is in seconds. The time after a failed login attempt before the counter resets ( --failinterval ). Since mistakes do happen honestly, the count of failed attempts is not kept forever; it naturally lapses after a certain amount of time. This is in seconds. These account lockout attributes can all be set when a password policy is created with pwpolicy-add or added later using pwpolicy-mod . For example:
[ "[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa pwpolicy-mod examplegroup --maxfail=4 --lockouttime=600 --failinterval=30" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/setting_account-lockout_policies
Chapter 8. Memory
Chapter 8. Memory 8.1. Introduction This chapter covers memory optimization options for virtualized environments.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-Virtualization_Tuning_Optimization_Guide-Memory
Chapter 73. Cassandra CQL Component
Chapter 73. Cassandra CQL Component Available as of Camel version 2.15 Apache Cassandra is an open source NoSQL database designed to handle large amounts of data on commodity hardware. Like Amazon's DynamoDB, Cassandra has a peer-to-peer and master-less architecture to avoid a single point of failure and guarantee high availability. Like Google's BigTable, Cassandra data is structured using column families which can be accessed through the Thrift RPC API or a SQL-like API called CQL. This component aims at integrating Cassandra 2.0+ using the CQL3 API (not the Thrift API). It's based on the Cassandra Java Driver provided by DataStax. Maven users will need to add the following dependency to their pom.xml : pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cassandraql</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency> 73.1. URI format The endpoint can initiate the Cassandra connection or use an existing one. URI Description cql:localhost/keyspace Single host, default port, usual for testing cql:host1,host2/keyspace Multi host, default port cql:host1,host2:9042/keyspace Multi host, custom port cql:host1,host2 Default port and keyspace cql:bean:sessionRef Provided Session reference cql:bean:clusterRef/keyspace Provided Cluster reference To fine-tune the Cassandra connection (SSL options, pooling options, load balancing policy, retry policy, reconnection policy... ), create your own Cluster instance and give it to the Camel endpoint. 73.2. Cassandra Options The Cassandra CQL component has no options. The Cassandra CQL endpoint is configured using the URI syntax cql:beanRef:hosts:port/keyspace , with the following path and query parameters: 73.2.1. Path Parameters (4 parameters): Name Description Default Type beanRef beanRef is defined using bean:id String hosts Hostname(s) of the Cassandra server(s). Multiple hosts can be separated by comma. String port Port number of the Cassandra server(s) Integer keyspace Keyspace to use String 73.2.2. Query Parameters (29 parameters): Name Description Default Type cluster (common) To use the Cluster instance (you would normally not use this option) Cluster clusterName (common) Cluster name String consistencyLevel (common) Consistency level to use ConsistencyLevel cql (common) CQL query to perform. Can be overridden with the message header with key CamelCqlQuery. String loadBalancingPolicy (common) To use a specific LoadBalancingPolicy String password (common) Password for session authentication String prepareStatements (common) Whether to use PreparedStatements or regular Statements true boolean resultSetConversionStrategy (common) To use a custom class that implements logic for converting ResultSet into message body ALL, ONE, LIMIT_10, LIMIT_100... String session (common) To use the Session instance (you would normally not use this option) Session username (common) Username for session authentication String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occurred while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. 
false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 73.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.cql.enabled Enable cql component true Boolean camel.component.cql.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 73.4. Messages 73.4.1. 
Incoming Message The Camel Cassandra endpoint expects one or more simple objects ( Object or Object[] or Collection<Object> ) which will be bound to the CQL statement as query parameters. If the message body is null or empty, then the CQL query will be executed without binding parameters. Headers: CamelCqlQuery (optional, String or RegularStatement ): CQL query either as a plain String or built using the QueryBuilder . 73.4.2. Outgoing Message The Camel Cassandra endpoint produces one or many Cassandra Row objects depending on the resultSetConversionStrategy : List<Row> if resultSetConversionStrategy is ALL or LIMIT_[0-9]+ A single Row if resultSetConversionStrategy is ONE Anything else, if resultSetConversionStrategy is a custom implementation of the ResultSetConversionStrategy 73.5. Repositories Cassandra can be used to store message keys or messages for the idempotent and aggregation EIPs. Cassandra might not be the best tool for queuing use cases yet; read Cassandra anti-patterns: queues and queue-like datasets . It's advised to use LeveledCompaction and a small GC grace setting for these tables to allow tombstoned rows to be removed quickly. 73.6. Idempotent repository The NamedCassandraIdempotentRepository stores message keys in a Cassandra table like this: CAMEL_IDEMPOTENT.cql CREATE TABLE CAMEL_IDEMPOTENT ( NAME varchar, -- Repository name KEY varchar, -- Message key PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400; This repository implementation uses lightweight transactions (also known as Compare and Set) and requires Cassandra 2.0.7+. Alternatively, the CassandraIdempotentRepository does not have a NAME column and can be extended to use a different data model. Option Default Description table CAMEL_IDEMPOTENT Table name pkColumns NAME , KEY Primary key columns name Repository name, value used for NAME column ttl Key time to live writeConsistencyLevel Consistency level used to insert/delete key: ANY , ONE , TWO , QUORUM , LOCAL_QUORUM ... readConsistencyLevel Consistency level used to read/check key: ONE , TWO , QUORUM , LOCAL_QUORUM ... 73.7. Aggregation repository The NamedCassandraAggregationRepository stores exchanges by correlation key in a Cassandra table like this: CAMEL_AGGREGATION.cql CREATE TABLE CAMEL_AGGREGATION ( NAME varchar, -- Repository name KEY varchar, -- Correlation id EXCHANGE_ID varchar, -- Exchange id EXCHANGE blob, -- Serialized exchange PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400; Alternatively, the CassandraAggregationRepository does not have a NAME column and can be extended to use a different data model. Option Default Description table CAMEL_AGGREGATION Table name pkColumns NAME , KEY Primary key columns exchangeIdColumn EXCHANGE_ID Exchange Id column exchangeColumn EXCHANGE Exchange content column name Repository name, value used for NAME column ttl Exchange time to live writeConsistencyLevel Consistency level used to insert/delete exchange: ANY , ONE , TWO , QUORUM , LOCAL_QUORUM ... readConsistencyLevel Consistency level used to read/check exchange: ONE , TWO , QUORUM , LOCAL_QUORUM ... 73.8. 
Examples To insert something into a table you can use the following code: String CQL = "insert into camel_user(login, first_name, last_name) values (?, ?, ?)"; from("direct:input") .to("cql://localhost/camel_ks?cql=" + CQL); At this point you should be able to insert data by using a list as the message body: Arrays.asList("davsclaus", "Claus", "Ibsen") The same approach can be used for updating or querying the table.
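Because the cql option can be overridden per message with the CamelCqlQuery header, the same endpoint can also serve ad-hoc queries. The following route is a minimal sketch that reuses the keyspace from the example above; the select statement itself is only illustrative:

import org.apache.camel.builder.RouteBuilder;

public class CassandraQueryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // The CamelCqlQuery header overrides the cql option configured on the endpoint
        from("direct:findUser")
            .setHeader("CamelCqlQuery",
                constant("select login, first_name, last_name from camel_user where login = ?"))
            .to("cql://localhost/camel_ks");
    }
}

Sending Arrays.asList("davsclaus") to direct:findUser binds the single query parameter, and with the default resultSetConversionStrategy ( ALL ) the reply body is a List<Row> .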
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cassandraql</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency>", "cql:beanRef:hosts:port/keyspace", "CREATE TABLE CAMEL_IDEMPOTENT ( NAME varchar, -- Repository name KEY varchar, -- Message key PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400;", "CREATE TABLE CAMEL_AGGREGATION ( NAME varchar, -- Repository name KEY varchar, -- Correlation id EXCHANGE_ID varchar, -- Exchange id EXCHANGE blob, -- Serialized exchange PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400;", "String CQL = \"insert into camel_user(login, first_name, last_name) values (?, ?, ?)\"; from(\"direct:input\") .to(\"cql://localhost/camel_ks?cql=\" + CQL);", "Arrays.asList(\"davsclaus\", \"Claus\", \"Ibsen\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/cql-component
Images
Images Red Hat OpenShift Service on AWS 4 Red Hat OpenShift Service on AWS Images. Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/images/index
Appendix A. MSI-based installer properties
Appendix A. MSI-based installer properties The Red Hat build of OpenJDK for Windows MSI-based installer includes the JDK Files component and the following optional properties: Table A.1. Red Hat build of OpenJDK for Windows MSI-based installer properties Property Description Default value OpenJDK Runtime - Windows Registry The following registry keys are set HKLM\Software\JavaSoft\JDK\<version>, entries: JavaHome: <INSTALLDIR> RuntimeLib: <INSTALLDIR>\bin\server\jvm.dll HKLM\Software\JavaSoft\JDK, entries: CurrentVersion: <version> Yes OpenJDK Runtime - Path Variable Adds the Runtime to the Path variable so it is available from the command line. Yes OpenJDK Runtime - JAVA_HOME System Variable JAVA_HOME is used by some programs to find the Java runtime. No OpenJDK Runtime - REDHAT_JAVA_HOME System Variable REDHAT_JAVA_HOME can be used by some programs to find the Red Hat build of OpenJDK runtime. No OpenJDK Runtime - Jar Files Association This enables Jar files to be run from within Windows Explorer. No Mission Control - Files Contains files that are installed in the <installdir> \missioncontrol directory. No Mission Control - Path Variable Appends <installdir> \missioncontrol to the system PATH environment variable. No Revised on 2024-05-09 16:49:09 UTC
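For unattended deployments, the components in the table are typically driven through standard Windows Installer syntax. The following command line is only a sketch: the MSI file name and target directory are placeholders, and it assumes that the installation directory shown as <INSTALLDIR> in the table is exposed as a public INSTALLDIR property:

msiexec /i <openjdk-installer>.msi INSTALLDIR="C:\java\openjdk-11" /qn /l*v openjdk-install.log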
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/installing_and_using_red_hat_build_of_openjdk_11_for_windows/msi-based-installer-properties
Chapter 2. Deploying Red Hat build of OpenJDK application in containers
Chapter 2. Deploying Red Hat build of OpenJDK application in containers You can deploy Red Hat build of OpenJDK applications in containers and have them run when the container is loaded. Procedure Copy the application JAR file to the /deployments directory in the image. For example, the following shows a brief Dockerfile that adds an application called testubi.jar to the Red Hat build of OpenJDK 21 UBI8 image:
[ "FROM registry.access.redhat.com/ubi8/openjdk-17 COPY target/testubi.jar /deployments/testubi.jar" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/packaging_red_hat_build_of_openjdk_21_applications_in_containers/deploying-openjdk-apps-in-containers
Compiling your Red Hat build of Quarkus applications to native executables
Compiling your Red Hat build of Quarkus applications to native executables Red Hat build of Quarkus 3.15 Red Hat Customer Content Services
[ "<profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>", "./mvnw package -Dnative -Dquarkus.native.container-build=true", "./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "quarkus build --native -Dquarkus.native.container-build=true", "quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "./target/*-runner", "./mvnw package -Dnative", "quarkus build --native", "./target/*-runner", "FROM registry.access.redhat.com/ubi8/ubi-minimal:8.10 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT [\"./application\", \"-Dquarkus.http.host=0.0.0.0\"]", "registry.access.redhat.com/ubi8/ubi:8.10", "registry.access.redhat.com/ubi8/ubi-minimal:8.10", "./mvnw package -Dnative -Dquarkus.native.container-build=true", "./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .", "build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started .", "docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started .", "run -i --rm -p 8080:8080 quarkus-quickstart/getting-started .", "login -u <username_url>", "new-project <project_name>", "cat src/main/docker/Dockerfile.native | oc new-build --name <build_name> --strategy=docker --dockerfile -", "start-build <build_name> --from-dir .", "new-app <build_name>", "expose svc/ <build_name>", "quarkus.native.resources.includes = my/config/files/*", "quarkus.native.resources.includes = **/*.png,bar/**/*.txt", "quarkus.native.native-image-xmx= <maximum_memory>", "mvn package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.native-image-xmx=<maximum_memory>", "<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin>", "package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. }", "./mvnw verify -Dnative", "./mvnw verify -Dnative . GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable) ======================================================================================================================== [1/8] Initializing... 
(6.6s @ 0.22GB) Java version: 21.0.4+7-LTS, vendor version: Mandrel-23.1.4.0-1b1 Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red&#160;Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total . [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2024-09-27 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Quarkus 3.15.3.SP1-redhat-00002) started in 0.038s. Listening on: http://0.0.0.0:8081 2024-09-27 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2024-09-27 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, rest, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started ---", "./mvnw verify -Dnative -Dquarkus.test.wait-time= <duration>", "./mvnw test-compile failsafe:integration-test -Dnative", "./mvnw test-compile failsafe:integration-test -Dnative.image.path=<path>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html-single/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/index
Planning your deployment
Planning your deployment Red Hat OpenShift Data Foundation 4.16 Important considerations when deploying Red Hat OpenShift Data Foundation 4.16 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/index
Chapter 2. Configuring private connections
Chapter 2. Configuring private connections 2.1. Configuring private connections Private cluster access can be implemented to suit the needs of your Red Hat OpenShift Service on AWS (ROSA) environment. Procedure Access your ROSA AWS account and use one or more of the following methods to establish a private connection to your cluster: Configuring AWS VPC peering : Enable VPC peering to route network traffic between two private IP addresses. Configuring AWS VPN : Establish a Virtual Private Network to securely connect your private network to your Amazon Virtual Private Cloud. Configuring AWS Direct Connect : Configure AWS Direct Connect to establish a dedicated network connection between your private network and an AWS Direct Connect location. Configure a private cluster on ROSA . 2.2. Configuring AWS VPC peering This sample process configures an Amazon Web Services (AWS) VPC containing an Red Hat OpenShift Service on AWS cluster to peer with another AWS VPC network. For more information about creating an AWS VPC Peering connection or for other possible configurations, see the AWS VPC Peering guide. 2.2.1. VPC peering terms When setting up a VPC peering connection between two VPCs on two separate AWS accounts, the following terms are used: Red Hat OpenShift Service on AWS AWS Account The AWS account that contains the Red Hat OpenShift Service on AWS cluster. Red Hat OpenShift Service on AWS Cluster VPC The VPC that contains the Red Hat OpenShift Service on AWS cluster. Customer AWS Account Your non-Red Hat OpenShift Service on AWS AWS Account that you would like to peer with. Customer VPC The VPC in your AWS Account that you would like to peer with. Customer VPC Region The region where the customer's VPC resides. Note As of July 2018, AWS supports inter-region VPC peering between all commercial regions excluding China . 2.2.2. Initiating the VPC peer request You can send a VPC peering connection request from the Red Hat OpenShift Service on AWS AWS Account to the Customer AWS Account. Prerequisites Gather the following information about the Customer VPC required to initiate the peering request: Customer AWS account number Customer VPC ID Customer VPC Region Customer VPC CIDR Check the CIDR block used by the Red Hat OpenShift Service on AWS Cluster VPC. If it overlaps or matches the CIDR block for the Customer VPC, then peering between these two VPCs is not possible; see the Amazon VPC Unsupported VPC Peering Configurations documentation for details. If the CIDR blocks do not overlap, you can continue with the procedure. Procedure Log in to the Web Console for the Red Hat OpenShift Service on AWS AWS Account and navigate to the VPC Dashboard in the region where the cluster is being hosted. Go to the Peering Connections page and click the Create Peering Connection button. Verify the details of the account you are logged in to and the details of the account and VPC you are connecting to: Peering connection name tag : Set a descriptive name for the VPC Peering Connection. VPC (Requester) : Select the Red Hat OpenShift Service on AWS Cluster VPC ID from the dropdown *list. Account : Select Another account and provide the Customer AWS Account number *(without dashes). Region : If the Customer VPC Region differs from the current region, select Another Region and select the customer VPC Region from the dropdown list. VPC (Accepter) : Set the Customer VPC ID. Click Create Peering Connection . Confirm that the request enters a Pending state. 
If it enters a Failed state, confirm the details and repeat the process. 2.2.3. Accepting the VPC peer request After you create the VPC peering connection, you must accept the request in the Customer AWS Account. Prerequisites Initiate the VPC peer request. Procedure Log in to the AWS Web Console. Navigate to VPC Service . Go to Peering Connections . Click on Pending peering connection . Confirm the AWS Account and VPC ID that the request originated from. This should be from the Red Hat OpenShift Service on AWS AWS Account and Red Hat OpenShift Service on AWS Cluster VPC. Click Accept Request . 2.2.4. Configuring the routing tables After you accept the VPC peering request, both VPCs must configure their routes to communicate across the peering connection. Prerequisites Initiate and accept the VPC peer request. Procedure Log in to the AWS Web Console for the Red Hat OpenShift Service on AWS AWS Account. Navigate to the VPC Service , then Route Tables . Select the Route Table for the Red Hat OpenShift Service on AWS Cluster VPC. Note On some clusters, there may be more than one route table for a particular VPC. Select the private one that has a number of explicitly associated subnets. Select the Routes tab, then Edit . Enter the Customer VPC CIDR block in the Destination text box. Enter the Peering Connection ID in the Target text box. Click Save . You must complete the same process with the other VPC's CIDR block: Log in to the Customer AWS Web Console, then navigate to VPC Service and Route Tables . Select the Route Table for your VPC. Select the Routes tab, then Edit . Enter the Red Hat OpenShift Service on AWS Cluster VPC CIDR block in the Destination text box. Enter the Peering Connection ID in the Target text box. Click Save . The VPC peering connection is now complete. Follow the verification procedure to ensure connectivity across the peering connection is working. 2.2.5. Verifying and troubleshooting VPC peering After you set up a VPC peering connection, it is best to confirm it has been configured and is working correctly. Prerequisites Initiate and accept the VPC peer request. Configure the routing tables. Procedure In the AWS console, look at the route table for the cluster VPC that is peered. Ensure that the steps for configuring the routing tables were followed and that there is a route table entry pointing the VPC CIDR range destination to the peering connection target. If the routes look correct on both the Red Hat OpenShift Service on AWS Cluster VPC route table and Customer VPC route table, then the connection should be tested using the netcat method below. If the test calls are successful, then VPC peering is working correctly. To test network connectivity to an endpoint device, nc (or netcat ) is a helpful troubleshooting tool. It is included in the default image and provides quick and clear output if a connection can be established: Create a temporary pod using the busybox image, which cleans up after itself: $ oc run netcat-test \ --image=busybox -i -t \ --restart=Never --rm \ -- /bin/sh Check the connection using nc . Example successful connection results: / nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open sent 0, rcvd 0 Example failed connection results: / nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused sent 0, rcvd 0 Exit the container, which automatically deletes the Pod: / exit
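The console steps in this section can also be approximated with the AWS CLI. The following is a minimal sketch rather than part of the official procedure: it assumes the AWS CLI is configured with credentials for the appropriate account at each step, and all IDs, account numbers, and CIDR blocks are placeholders that you must replace with your own values.
# From the Red Hat OpenShift Service on AWS AWS Account: request the peering connection
$ aws ec2 create-vpc-peering-connection --vpc-id <rosa_cluster_vpc_id> --peer-vpc-id <customer_vpc_id> --peer-owner-id <customer_account_number> --peer-region <customer_vpc_region>
# From the Customer AWS Account: accept the pending request
$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id <peering_connection_id>
# In each VPC's private route table, add a route to the other VPC's CIDR block through the peering connection
$ aws ec2 create-route --route-table-id <route_table_id> --destination-cidr-block <remote_vpc_cidr> --vpc-peering-connection-id <peering_connection_id>
After the routes are in place, the same netcat test shown above can be used to confirm connectivity across the peering connection.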
2.3. Configuring AWS VPN This sample process configures an Amazon Web Services (AWS) Red Hat OpenShift Service on AWS cluster to use a customer's on-site hardware VPN device. Note AWS VPN does not currently provide a managed option to apply NAT to VPN traffic. See the AWS Knowledge Center for more details. Note Routing all traffic, for example 0.0.0.0/0 , through a private connection is not supported. This requires deleting the internet gateway, which disables SRE management traffic. For more information about connecting an AWS VPC to remote networks using a hardware VPN device, see the Amazon VPC VPN Connections documentation. 2.3.1. Creating a VPN connection You can configure an Amazon Web Services (AWS) Red Hat OpenShift Service on AWS cluster to use a customer's on-site hardware VPN device using the following procedures. Prerequisites Hardware VPN gateway device model and software version, for example Cisco ASA running version 8.3. See the Amazon VPC Network Administrator Guide to confirm whether your gateway device is supported by AWS. Public, static IP address for the VPN gateway device. BGP or static routing: if BGP, the ASN is required. If static routing, you must configure at least one static route. Optional: IP and Port/Protocol of a reachable service to test the VPN connection. 2.3.1.1. Configuring the VPN connection Procedure Log in to the Red Hat OpenShift Service on AWS AWS Account Dashboard, and navigate to the VPC Dashboard. Click on Your VPCs and identify the name and VPC ID for the VPC containing the Red Hat OpenShift Service on AWS cluster. From the VPC Dashboard, click Customer Gateway . Click Create Customer Gateway and give it a meaningful name. Select the routing method: Dynamic or Static . If Dynamic, enter the BGP ASN in the field that appears. Paste in the VPN gateway endpoint IP address. Click Create . If you do not already have a Virtual Private Gateway attached to the intended VPC: From the VPC Dashboard, click on Virtual Private Gateway . Click Create Virtual Private Gateway , give it a meaningful name, and click Create . Leave the default Amazon ASN. Select the newly created gateway, click Attach to VPC , and attach it to the cluster VPC you identified earlier. 2.3.1.2. Establishing the VPN Connection Procedure From the VPC dashboard, click on Site-to-Site VPN Connections . Click Create VPN Connection . Give it a meaningful name tag. Select the virtual private gateway created previously. For Customer Gateway, select Existing . Select the customer gateway device by name. If the VPN will use BGP, select Dynamic , otherwise select Static . Enter Static IP CIDRs. If there are multiple CIDRs, add each CIDR as Another Rule . Click Create . Wait for VPN status to change to Available , approximately 5 to 10 minutes. Select the VPN you just created and click Download Configuration . From the dropdown list, select the vendor, platform, and version of the customer gateway device, then click Download . The Generic vendor configuration is also available for retrieving information in a plain text format. Note After the VPN connection has been established, be sure to set up Route Propagation or the VPN may not function as expected. Note Note the VPC subnet information, which you must add to your configuration as the remote network. 2.3.1.3. Enabling VPN route propagation After you have set up the VPN connection, you must ensure that route propagation is enabled so that the necessary routes are added to the VPC's route table.
Procedure From the VPC Dashboard, click on Route Tables . Select the private Route table associated with the VPC that contains your Red Hat OpenShift Service on AWS cluster. Note On some clusters, there may be more than one route table for a particular VPC. Select the private one that has a number of explicitly associated subnets. Click on the Route Propagation tab. In the table that appears, you should see the virtual private gateway you created previously. Check the value in the Propagate column . If Propagate is set to No , click Edit route propagation , select the Propagate checkbox next to the virtual private gateway's name and click Save . After you configure your VPN tunnel and AWS detects it as Up , your static or BGP routes are automatically added to the route table. 2.3.2. Verifying the VPN connection After you have set up your side of the VPN tunnel, you can verify that the tunnel is up in the AWS console and that connectivity across the tunnel is working. Prerequisites Created a VPN connection. Procedure Verify the tunnel is up in AWS. From the VPC Dashboard, click on VPN Connections . Select the VPN connection you created previously and click the Tunnel Details tab. You should be able to see that at least one of the VPN tunnels is Up . Verify the connection. To test network connectivity to an endpoint device, nc (or netcat ) is a helpful troubleshooting tool. It is included in the default image and provides quick and clear output if a connection can be established: Create a temporary pod using the busybox image, which cleans up after itself: $ oc run netcat-test \ --image=busybox -i -t \ --restart=Never --rm \ -- /bin/sh Check the connection using nc . Example successful connection results: / nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open sent 0, rcvd 0 Example failed connection results: / nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused sent 0, rcvd 0 Exit the container, which automatically deletes the Pod: / exit 2.3.3. Troubleshooting the VPN connection Tunnel does not connect If the tunnel connection is still Down , there are several things you can verify: The AWS tunnel will not initiate a VPN connection. The connection attempt must be initiated from the Customer Gateway. Ensure that your source traffic is coming from the same IP as the configured customer gateway. AWS will silently drop all traffic to the gateway whose source IP address does not match. Ensure that your configuration matches values supported by AWS . This includes IKE versions, DH groups, IKE lifetime, and more. Recheck the route table for the VPC. Ensure that propagation is enabled and that there are entries in the route table that have the virtual private gateway you created earlier as a target. Confirm that you do not have any firewall rules that could be causing an interruption. Check if you are using a policy-based VPN as this can cause complications depending on how it is configured. Further troubleshooting steps can be found at the AWS Knowledge Center . Tunnel does not stay connected If the tunnel connection has trouble staying Up consistently, know that all AWS tunnel connections must be initiated from your gateway. AWS tunnels do not initiate tunneling . Red Hat recommends setting up an SLA Monitor (Cisco ASA) or some device on your side of the tunnel that constantly sends "interesting" traffic, for example ping , nc , or telnet , at any IP address configured within the VPC CIDR range.
It does not matter whether the connection is successful, just that the traffic is being directed at the tunnel. Secondary tunnel in Down state When a VPN tunnel is created, AWS creates an additional failover tunnel. Depending upon the gateway device, sometimes the secondary tunnel will be seen as in the Down state. The AWS Notification is as follows: 2.4. Configuring AWS Direct Connect This process describes accepting an AWS Direct Connect virtual interface with Red Hat OpenShift Service on AWS. For more information about AWS Direct Connect types and configuration, see the AWS Direct Connect components documentation. 2.4.1. AWS Direct Connect methods A Direct Connect connection requires a hosted Virtual Interface (VIF) connected to a Direct Connect Gateway (DXGateway), which is in turn associated to a Virtual Gateway (VGW) or a Transit Gateway in order to access a remote VPC in the same or another account. If you do not have an existing DXGateway, the typical process involves creating the hosted VIF, with the DXGateway and VGW being created in the Red Hat OpenShift Service on AWS AWS Account. If you have an existing DXGateway connected to one or more existing VGWs, the process involves the Red Hat OpenShift Service on AWS AWS Account sending an Association Proposal to the DXGateway owner. The DXGateway owner must ensure that the proposed CIDR will not conflict with any other VGWs they have associated. See the following AWS documentation for more details: Virtual Interfaces Direct Connect Gateways Associating a VGW across accounts Important When connecting to an existing DXGateway, you are responsible for the costs . There are two configuration options available: Method 1 Create the hosted VIF and then the DXGateway and VGW. Method 2 Request a connection via an existing Direct Connect Gateway that you own. 2.4.2. Creating the hosted Virtual Interface Prerequisites Gather Red Hat OpenShift Service on AWS AWS Account ID. 2.4.2.1. Determining the type of Direct Connect connection View the Direct Connect Virtual Interface details to determine the type of connection. Procedure Log in to the Red Hat OpenShift Service on AWS AWS Account Dashboard and select the correct region. Select Direct Connect from the Services menu. There will be one or more Virtual Interfaces waiting to be accepted, select one of them to view the Summary . View the Virtual Interface type: private or public. Record the Amazon side ASN value. If the Direct Connect Virtual Interface type is Private, a Virtual Private Gateway is created. If the Direct Connect Virtual Interface is Public, a Direct Connect Gateway is created. 2.4.2.2. Creating a Private Direct Connect A Private Direct Connect is created if the Direct Connect Virtual Interface type is Private. Procedure Log in to the Red Hat OpenShift Service on AWS AWS Account Dashboard and select the correct region. From the AWS region, select VPC from the Services menu. Select Virtual Private Gateways from VPN Connections . Click Create Virtual Private Gateway . Give the Virtual Private Gateway a suitable name. Select Custom ASN and enter the Amazon side ASN value gathered previously. Create the Virtual Private Gateway. Click the newly created Virtual Private Gateway and choose Attach to VPC from the Actions tab. Select the Red Hat OpenShift Service on AWS Cluster VPC from the list, and attach the Virtual Private Gateway to the VPC. From the Services menu, click Direct Connect . Choose one of the Direct Connect Virtual Interfaces from the list. 
Acknowledge the I understand that Direct Connect port charges apply once I click Accept Connection message, then choose Accept Connection . Choose to Accept the Virtual Private Gateway Connection and select the Virtual Private Gateway that was created in the previous steps. Select Accept to accept the connection. Repeat the steps if there is more than one Virtual Interface. 2.4.2.3. Creating a Public Direct Connect A Public Direct Connect is created if the Direct Connect Virtual Interface type is Public. Procedure Log in to the Red Hat OpenShift Service on AWS AWS Account Dashboard and select the correct region. From the Red Hat OpenShift Service on AWS AWS Account region, select Direct Connect from the Services menu. Select Direct Connect Gateways and Create Direct Connect Gateway . Give the Direct Connect Gateway a suitable name. In the Amazon side ASN , enter the Amazon side ASN value gathered previously. Create the Direct Connect Gateway. Select Direct Connect from the Services menu. Select one of the Direct Connect Virtual Interfaces from the list. Acknowledge the I understand that Direct Connect port charges apply once I click Accept Connection message, then choose Accept Connection . Choose to Accept the Direct Connect Gateway Connection and select the Direct Connect Gateway that was created in the previous steps. Click Accept to accept the connection. Repeat the steps if there is more than one Virtual Interface. 2.4.2.4. Verifying the Virtual Interfaces After the Direct Connect Virtual Interfaces have been accepted, wait a short period and view the status of the Interfaces. Procedure Log in to the Red Hat OpenShift Service on AWS AWS Account Dashboard and select the correct region. From the Red Hat OpenShift Service on AWS AWS Account region, select Direct Connect from the Services menu. Select one of the Direct Connect Virtual Interfaces from the list. Check the Interface State has become Available . Check the Interface BGP Status has become Up . Repeat this verification for any remaining Direct Connect Interfaces. After the Direct Connect Virtual Interfaces are available, you can log in to the Red Hat OpenShift Service on AWS AWS Account Dashboard and download the Direct Connect configuration file for configuration on your side. 2.4.3. Connecting to an existing Direct Connect Gateway Prerequisites Confirm the CIDR range of the Red Hat OpenShift Service on AWS VPC will not conflict with any other VGWs you have associated. Gather the following information: The Direct Connect Gateway ID. The AWS Account ID associated with the virtual interface. The BGP ASN assigned for the DXGateway. Optional: the Amazon default ASN may also be used. Procedure Log in to the Red Hat OpenShift Service on AWS AWS Account Dashboard and select the correct region. From the Red Hat OpenShift Service on AWS AWS Account region, select VPC from the Services menu. From VPN Connections , select Virtual Private Gateways . Select Create Virtual Private Gateway . Give the Virtual Private Gateway a suitable name. Click Custom ASN and enter the Amazon side ASN value gathered previously or use the Amazon Provided ASN. Create the Virtual Private Gateway. In the Navigation pane of the Red Hat OpenShift Service on AWS AWS Account Dashboard, choose Virtual private gateways and select the virtual private gateway. Choose View details . Choose Direct Connect gateway associations and click Associate Direct Connect gateway . Under Association account type , for Account owner, choose Another account .
For Direct Connect gateway owner , enter the ID of the AWS account that owns the Direct Connect gateway. Under Association settings , for Direct Connect gateway ID, enter the ID of the Direct Connect gateway. Under Association settings , for Virtual interface owner, enter the ID of the AWS account that owns the virtual interface for the association. Optional: Add prefixes to Allowed prefixes, separating them using commas. Choose Associate Direct Connect gateway . After the Association Proposal has been sent, it will be waiting for your acceptance. The final steps you must perform are available in the AWS Documentation . 2.4.4. Troubleshooting Direct Connect Further troubleshooting can be found in the Troubleshooting AWS Direct Connect documentation.
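If you prefer to script the virtual interface acceptance described in this section, the AWS CLI offers equivalent operations. The following sketch is illustrative only; the interface, gateway, and VPC IDs are placeholders, and you should confirm the exact options against the AWS CLI documentation for your version.
# List the Direct Connect Virtual Interfaces waiting to be accepted and record their type and Amazon side ASN
$ aws directconnect describe-virtual-interfaces
# For a private Virtual Interface, create and attach a Virtual Private Gateway, then accept the interface against it
$ aws ec2 create-vpn-gateway --type ipsec.1 --amazon-side-asn <amazon_side_asn>
$ aws ec2 attach-vpn-gateway --vpn-gateway-id <vgw_id> --vpc-id <rosa_cluster_vpc_id>
$ aws directconnect confirm-private-virtual-interface --virtual-interface-id <dxvif_id> --virtual-gateway-id <vgw_id>
# For a public Virtual Interface, accept it directly
$ aws directconnect confirm-public-virtual-interface --virtual-interface-id <dxvif_id>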
[ "oc run netcat-test --image=busybox -i -t --restart=Never --rm -- /bin/sh", "/ nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open sent 0, rcvd 0", "/ nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused sent 0, rcvd 0", "/ exit", "oc run netcat-test --image=busybox -i -t --restart=Never --rm -- /bin/sh", "/ nc -zvv 192.168.1.1 8080 10.181.3.180 (10.181.3.180:8080) open sent 0, rcvd 0", "/ nc -zvv 192.168.1.2 8080 nc: 10.181.3.180 (10.181.3.180:8081): Connection refused sent 0, rcvd 0", "/ exit", "You have new non-redundant VPN connections One or more of your vpn connections are not using both tunnels. This mode of operation is not highly available and we strongly recommend you configure your second tunnel. View your non-redundant VPN connections." ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cluster_administration/configuring-private-connections
Chapter 15. Managing using qdmanage
Chapter 15. Managing using qdmanage The qdmanage tool is a command-line tool for viewing and modifying the configuration of a running router at runtime. Note If you make a change to a router using qdmanage , the change takes effect immediately, but is lost if the router is stopped. If you want to make a permanent change to a router's configuration, you must edit the router's /etc/qpid-dispatch/qdrouterd.conf configuration file. You can use qdmanage with the following syntax: This specifies: One or more optional connection options to specify the router on which to perform the operation, or to supply security credentials if the router only accepts secure connections. If you do not specify any connection options, qdmanage connects to the router listening on localhost and the default AMQP port (5672). The operation to perform on the router. One or more optional options to specify a configuration entity on which to perform the operation or how to format the command output. When you enter a qdmanage command, it is executed as an AMQP management operation request, and then the response is returned as command output in JSON format. For example, the following command executes a query operation on a router, and then returns the response in JSON format: Additional resources For more information about qdmanage , see the qdmanage man page .
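For example, the following commands are illustrative sketches rather than part of the official procedure. The first shows a connection option used to reach a router that is not listening on localhost (the URL is a placeholder), and the second shows a create operation; the attribute names host , port , and role match the listener entity shown in the query output above, while the port number and entity name are illustrative:
$ qdmanage -b amqp://router.example.com:5672 query --type listener
$ qdmanage create --type listener host=0.0.0.0 port=15672 role=normal name=listener/metrics
Because qdmanage changes are not persisted, add the equivalent listener entry to the router's /etc/qpid-dispatch/qdrouterd.conf configuration file if the change must survive a router restart.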
[ "qdmanage [ <connection-options> ] <operation> [ <options> ]", "qdmanage query --type listener [ { \"stripAnnotations\": \"both\", \"addr\": \"127.0.0.1\", \"multiTenant\": false, \"requireSsl\": false, \"idleTimeoutSeconds\": 16, \"saslMechanisms\": \"ANONYMOUS\", \"maxFrameSize\": 16384, \"requireEncryption\": false, \"host\": \"0.0.0.0\", \"cost\": 1, \"role\": \"normal\", \"http\": false, \"maxSessions\": 32768, \"authenticatePeer\": false, \"type\": \"org.apache.qpid.dispatch.listener\", \"port\": \"amqp\", \"identity\": \"listener/0.0.0.0:amqp\", \"name\": \"listener/0.0.0.0:amqp\" } ]" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/managing-using-qdmanage-router-rhel
13.3. Installing in Text Mode
13.3. Installing in Text Mode Text mode installation offers an interactive, non-graphical interface for installing Red Hat Enterprise Linux. This can be useful on systems with no graphical capabilities; however, you should always consider the available alternatives before starting a text-based installation. Text mode is limited in the amount of choices you can make during the installation. Important Red Hat recommends that you install Red Hat Enterprise Linux using the graphical interface. If you are installing Red Hat Enterprise Linux on a system that lacks a graphical display, consider performing the installation over a VNC connection - see Chapter 25, Using VNC . The text mode installation program will prompt you to confirm the use of text mode if it detects that a VNC-based installation is possible. If your system has a graphical display, but graphical installation fails, try booting with the inst.xdriver=vesa option - see Chapter 23, Boot Options . Alternatively, consider a Kickstart installation. See Chapter 27, Kickstart Installations for more information. Figure 13.1. Text Mode Installation Installation in text mode follows a pattern similar to the graphical installation: There is no single fixed progression; you can configure many settings in any order you want using the main status screen. Screens which have already been configured, either automatically or by you, are marked as [x] , and screens which require your attention before the installation can begin are marked with [!] . Available commands are displayed below the list of available options. Note When related background tasks are being run, certain menu items can be temporarily unavailable or display the Processing... label. To refresh to the current status of text menu items, use the r option at the text mode prompt. At the bottom of the screen in text mode, a green bar is displayed showing five menu options. These options represent different screens in the tmux terminal multiplexer; by default you start in screen 1, and you can use keyboard shortcuts to switch to other screens which contain logs and an interactive command prompt. For information about available screens and shortcuts to switch to them, see Section 13.2.1, "Accessing Consoles" . Limits of interactive text mode installation include: The installer will always use the English language and the US English keyboard layout. You can configure your language and keyboard settings, but these settings will only apply to the installed system, not to the installation. You cannot configure any advanced storage methods (LVM, software RAID, FCoE, zFCP and iSCSI). It is not possible to configure custom partitioning; you must use one of the automatic partitioning settings. You also cannot configure where the boot loader will be installed. You cannot select any package add-ons to be installed; they must be added after the installation finishes using the Yum package manager. To start a text mode installation, boot the installation with the inst.text boot option used either at the boot command line in the boot menu, or in your PXE server configuration. See Chapter 12, Booting the Installation on IBM Power Systems for information about booting and using boot options.
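As an illustrative sketch only (not taken from this guide), a network boot menu entry might append the option to the kernel command line as follows; the file paths and repository URL are placeholders, and the exact menu syntax depends on your boot loader and PXE setup:
menuentry 'Install Red Hat Enterprise Linux 7 in text mode' {
  linux /rhel7/vmlinuz inst.repo=http://192.168.1.10/rhel7/ inst.text
  initrd /rhel7/initrd.img
}
For a one-off text mode installation, you can instead type inst.text at the boot command line in the boot menu.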
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-text-mode-ppc
Chapter 2. Configuring the Block Storage service (cinder)
Chapter 2. Configuring the Block Storage service (cinder) The Block Storage service (cinder) manages the administration, security, scheduling, and overall management of all volumes. Volumes are used as the primary form of persistent storage for Compute instances. For more information about volume backups, see the Block Storage Backup Guide . Important You must install host bus adapters (HBAs) on all Controller nodes and Compute nodes in any deployment that uses the Block Storage service (cinder) and a Fibre Channel (FC) back end. 2.1. Block Storage service back ends Red Hat OpenStack Platform (RHOSP) is deployed using director. Doing so helps ensure the correct configuration of each service, including the Block Storage service (cinder) and, by extension, its back end. Director also has several integrated back-end configurations. Red Hat OpenStack Platform supports Red Hat Ceph Storage and NFS as Block Storage (cinder) back ends. By default, the Block Storage service uses an LVM back end as a repository for volumes. While this back end is suitable for test environments, LVM is not supported in production environments. For instructions on how to deploy Red Hat Ceph Storage with RHOSP, see Deploying an Overcloud with Containerized Red Hat Ceph . For instructions on how to set up NFS storage in the overcloud, see Configuring NFS Storage in the Advanced Overcloud Customization Guide . You can also configure the Block Storage service to use supported third-party storage appliances. Director includes the necessary components for deploying different back-end solutions. For a complete list of supported back-end appliances and drivers, see Component, Plug-In, and Driver Support in Red Hat OpenStack Platform . All third-party back-end appliances and drivers have additional deployment guides. Review the appropriate deployment guide to determine if a back-end appliance or driver requires a plugin. For more information about deploying a third-party storage appliance plugin, see Deploying a vendor plugin in the Advanced Overcloud Customization guide. 2.2. High availability of the Block Storage volume service The Block Storage volume service ( cinder-volume ) is deployed on Controller nodes in active-passive mode. In this case, Pacemaker maintains the high availability (HA) of this service. In Distributed Compute Node (DCN) deployments, the Block Storage volume service is deployed on the central site in active-passive mode. In this case, Pacemaker maintains the HA of this service. Only deploy the Block Storage volume service on an edge site that requires storage. Because Pacemaker cannot be deployed on edge sites, the Block Storage volume service must be deployed in active-active mode to ensure the HA of this service. The dcn-storage.yaml heat template performs this configuration. However, you must maintain this service manually. For more information about maintaining the Block Storage volume service at edge sites that require storage, see Maintenance commands for the Block Storage volume service at edge sites . Important If you use multiple storage back ends at an edge site that requires storage, then all the back ends must support active-active mode. If you save data on a back end that does not support active-active mode, you risk losing your data. 2.2.1.
Maintenance commands for the Block Storage volume service at edge sites After deploying the Block Storage volume service ( cinder-volume ) in active-active mode at an edge site that requires storage, you can use the following commands to manage the clusters and their services. Note These commands require Block Storage (cinder) REST API microversion 3.17 or later. User goal Command See the service listing, including details such as cluster name, host, zone, status, state, disabled reason, and back end state. Note The default cluster name for the Red Hat Ceph Storage back end is tripleo@tripleo_ceph . cinder service-list See detailed and summary information about clusters as a whole as opposed to individual services. cinder cluster-list See detailed information about a specific cluster. cinder cluster-show <clustered_service> Replace <clustered_service> with the name of the clustered service. Enable a disabled service. cinder cluster-enable <clustered_service> Disable a clustered service. cinder cluster-disable <clustered_service> 2.2.2. Volume manage and unmanage The unmanage and manage mechanisms facilitate moving volumes from one service using version X to another service using version X+1. Both services remain running during this process. In API version 3.17 or later, you can see lists of volumes and snapshots that are available for management in Block Storage clusters. To see these lists, use the --cluster argument with cinder manageable-list or cinder snapshot-manageable-list . In API version 3.16 and later, the cinder manage command also accepts the optional --cluster argument so that you can add previously unmanaged volumes to a Block Storage cluster. 2.2.3. Volume migration on a clustered service With API version 3.16 and later, the cinder migrate and cinder-manage commands accept the --cluster argument to define the destination for active-active deployments. When you migrate a volume on a Block Storage clustered service, pass the optional --cluster argument and omit the host positional argument, because the arguments are mutually exclusive. 2.2.4. Initiating Block Storage service maintenance All Block Storage volume services perform their own maintenance when they start. In an environment with multiple volume services grouped in a cluster, you can clean up services that are not currently running. The command work-cleanup triggers server cleanups. The command returns: A list of the services that the command can clean. A list of the services that the command cannot clean because they are not currently running in the cluster. Note The work-cleanup command works only on servers running API version 3.24 or later. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud in Director Installation and Usage . Procedure Run the following command to verify whether all of the services for a cluster are running: Alternatively, run the cluster show command. If any services are not running, run the following command to identify those specific services: Run the following command to trigger the server cleanup: Note Filters, such as --cluster , --host , and --binary , define what the command cleans. You can filter on cluster name, host name, type of service, and resource type, including a specific resource. If you do not apply filtering, the command attempts to clean everything that can be cleaned. The following example filters by cluster name:
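The command listings for this procedure are not captured inline in this document. The following is a sketch of the likely sequence, assuming the default Red Hat Ceph Storage cluster name tripleo@tripleo_ceph mentioned earlier; it is illustrative rather than a substitute for the official command reference:
# Verify whether all of the services for a cluster are running
$ cinder --os-volume-api-version 3.17 cluster-list
# Identify any services that are not running
$ cinder --os-volume-api-version 3.17 service-list
# Trigger the server cleanup, filtered by cluster name
$ cinder --os-volume-api-version 3.24 work-cleanup --cluster tripleo@tripleo_ceph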
2.3. Group volume configuration with volume types With Red Hat OpenStack Platform you can create volume types so that you can apply associated settings to the volume type. You can apply settings during volume creation, see Section 3.1, "Creating Block Storage volumes" . You can also apply settings after you create a volume, see Section 4.5, "Block Storage volume retyping" . The following list shows some of the associated settings that you can apply to a volume type: The encryption of a volume. For more information, see Section 2.7.2, "Configuring Block Storage service volume encryption with the CLI" . The back end that a volume uses. For more information, see Section 2.10, "Specifying back ends for volume creation" and Section 4.8, "Migrating a volume between back ends with the CLI" . Quality-of-Service (QoS) Specs Settings are associated with volume types using key-value pairs called Extra Specs. When you specify a volume type during volume creation, the Block Storage scheduler applies these key-value pairs as settings. You can associate multiple key-value pairs to the same volume type. Volume types enable you to provide different users with storage tiers. By associating specific performance, resilience, and other settings as key-value pairs to a volume type, you can map tier-specific settings to different volume types. You can then apply tier settings when creating a volume by specifying the corresponding volume type. 2.3.1. Listing back-end driver capabilities Available and supported Extra Specs vary per back-end driver. Consult the driver documentation for a list of valid Extra Specs. Alternatively, you can query the Block Storage host directly to determine which well-defined standard Extra Specs are supported by its driver. Start by logging in (through the command line) to the node hosting the Block Storage service. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud in Director Installation and Usage . Procedure This command will return a list containing the host of each Block Storage service ( cinder-backup , cinder-scheduler , and cinder-volume ). For example: To display the driver capabilities (and, in turn, determine the supported Extra Specs) of a Block Storage service, run: Where VOLSVCHOST is the complete name of the cinder-volume 's host. For example: The Backend properties column shows a list of Extra Spec Keys that you can set, while the Value column provides information on valid corresponding values.
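The commands for this procedure are not shown inline in this document. A minimal sketch of the likely calls, run from the node hosting the Block Storage service, follows; the host name used with get-capabilities is a placeholder that you take from the service-list output:
# List the host of each Block Storage service (cinder-backup, cinder-scheduler, cinder-volume)
$ cinder service-list
# Display the driver capabilities, and therefore the supported Extra Specs, of a back end
$ cinder get-capabilities <volsvchost>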
2.3.2. Creating and configuring a volume type Create volume types so that you can apply associated settings to the volume type. Note If the Block Storage service (cinder) is configured to use multiple back ends, then a volume type must be created for each back end. Volume types enable you to provide different users with storage tiers. By associating specific performance, resilience, and other settings as key-value pairs to a volume type, you can map tier-specific settings to different volume types. You can then apply tier settings when creating a volume by specifying the corresponding volume type. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud in Director Installation and Usage . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools in Director Installation and Usage . Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Overcloud deployment output in Director Installation and Usage . Procedure As an admin user in the dashboard, select Admin > Volumes > Volume Types . Click Create Volume Type . Enter the volume type name in the Name field. Click Create Volume Type . The new type appears in the Volume Types table. Select the volume type's View Extra Specs action. Click Create and specify the Key and Value . The key-value pair must be valid; otherwise, specifying the volume type during volume creation will result in an error. For instance, to specify a back end for this volume type, add the volume_backend_name Key and set the Value to the name of the required back end. Click Create . The associated setting (key-value pair) now appears in the Extra Specs table. By default, all volume types are accessible to all OpenStack projects. If you need to create volume types with restricted access, you will need to do so through the CLI. For instructions, see Section 2.3.4, "Creating and configuring private volume types" . Note You can also associate a QoS Spec to the volume type. For more information, see Section 2.6.3, "Associating a Quality-of-Service specification with a volume type" . 2.3.3. Editing a volume type Edit a volume type in the Dashboard to modify the Extra Specs configuration of the volume type. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Overcloud deployment output . Procedure As an admin user in the dashboard, select Admin > Volumes > Volume Types . In the Volume Types table, select the volume type's View Extra Specs action. On the Extra Specs table of this page, you can: Add a new setting to the volume type. To do this, click Create and specify the key/value pair of the new setting you want to associate to the volume type. Edit an existing setting associated with the volume type by selecting the setting's Edit action. Delete existing settings associated with the volume type by selecting the extra specs' check box and clicking Delete Extra Specs on this screen and in the confirmation dialog. To delete a volume type, select its corresponding check boxes from the Volume Types table and click Delete Volume Types .
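The dashboard steps above can also be performed from the command line. The following sketch uses an illustrative volume type name (gold-tier) and assumes that the back end name you supply matches a configured volume_backend_name value:
$ openstack volume type create gold-tier
$ openstack volume type set --property volume_backend_name=<backend_name> gold-tier
$ openstack volume type show gold-tier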
2.3.4. Creating and configuring private volume types By default, all volume types are available to all projects. You can create a restricted volume type by marking it private . To do so, set the type's is-public flag to false . Private volume types are useful for restricting access to volumes with certain attributes. Typically, these are settings that should only be usable by specific projects; examples include new back ends or ultra-high performance configurations that are being tested. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud in Director Installation and Usage . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools in Director Installation and Usage . Procedure By default, private volume types are only accessible to their creators. However, admin users can find and view private volume types using the following command: This command lists both public and private volume types, and it also includes the name and ID of each one. You need the volume type's ID to provide access to it. Access to a private volume type is granted at the project level. To grant a project access to a private volume type, run: To view which projects have access to a private volume type, run: To remove a project from the access list of a private volume type, run: Note By default, only users with administrative privileges can create, view, or configure access for private volume types. 2.4. Creating and configuring an internal project for the Block Storage service (cinder) Some Block Storage features (for example, the Image-Volume cache) require the configuration of an internal tenant . The Block Storage service uses this tenant/project to manage block storage items that do not necessarily need to be exposed to normal users. Examples of such items are images cached for frequent volume cloning or temporary copies of volumes being migrated. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud in Director Installation and Usage . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools in Director Installation and Usage . Procedure To configure an internal project, first create a generic project and user, both named cinder-internal . To do so, log in to the Controller node and run: The procedure for adding Extra Config options creates an internal project. For more information, see Section 2.5, "Configuring the image-volume cache" . 2.5. Configuring the image-volume cache The Block Storage service features an optional Image-Volume cache which can be used when creating volumes from images. This cache is designed to improve the speed of volume creation from frequently-used images. For information on how to create volumes from images, see Section 3.1, "Creating Block Storage volumes" . When enabled, the Image-Volume cache stores a copy of an image the first time a volume is created from it. This stored image is cached locally to the Block Storage back end to help improve performance the next time the image is used to create a volume. The Image-Volume cache's limit can be set to a size (in GB), number of images, or both. The Image-Volume cache is supported by several back ends. If you are using a third-party back end, refer to its documentation for information on Image-Volume cache support. Note The Image-Volume cache requires that an internal tenant be configured for the Block Storage service. For instructions, see Section 2.4, "Creating and configuring an internal project for the Block Storage service (cinder)" . Prerequisites A successful undercloud installation.
For more information, see Installing director on the undercloud in Director Installation and Usage . Procedure To enable and configure the Image-Volume cache on a back end ( BACKEND ), add the values to an ExtraConfig section of an environment file on the undercloud. For example: 1 Replace BACKEND with the name of the target back end (specifically, its volume_backend_name value). 2 By default, the Image-Volume cache size is only limited by the back end. Change MAXSIZE to a number in GB. 3 You can also set a maximum number of images using MAXNUMBER . The Block Storage service database uses a time stamp to track when each cached image was last used to create an image. If either or both MAXSIZE and MAXNUMBER are set, the Block Storage service will delete cached images as needed to make way for new ones. Cached images with the oldest time stamp are deleted first whenever the Image-Volume cache limits are met. After you create the environment file in /home/stack/templates/ , log in as the stack user and deploy the configuration by running: Where ENV_FILE.yaml is the name of the file with the ExtraConfig settings added earlier. Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information about the openstack overcloud deploy command, see Deployment command in Director Installation and Usage . 2.6. Block Storage service (cinder) Quality-of-Service You can map multiple performance settings to a single Quality-of-Service specification (QOS Specs). Doing so allows you to provide performance tiers for different user types. Performance settings are mapped as key-value pairs to QOS Specs, similar to the way volume settings are associated to a volume type. However, QOS Specs are different from volume types in the following respects: QOS Specs are used to apply performance settings, which include limiting read/write operations to disks. Available and supported performance settings vary per storage driver. To determine which QOS Specs are supported by your back end, consult the documentation of your back end device's volume driver. Volume types are directly applied to volumes, whereas QOS Specs are not. Rather, QOS Specs are associated to volume types. During volume creation, specifying a volume type also applies the performance settings mapped to the volume type's associated QOS Specs. You can define performance limits for volumes on a per-volume basis using basic volume QOS values. The Block Storage service supports the following options: read_iops_sec write_iops_sec total_iops_sec read_bytes_sec write_bytes_sec total_bytes_sec read_iops_sec_max write_iops_sec_max total_iops_sec_max read_bytes_sec_max write_bytes_sec_max total_bytes_sec_max size_iops_sec 2.6.1. Creating and configuring a Quality-of-Service specification As an administrator, you can create and configure a QOS Spec through the QOS Specs table. You can associate more than one key/value pair to the same QOS Spec. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Overcloud deployment output . Procedure As an admin user in the dashboard, select Admin > Volumes > Volume Types . On the QOS Specs table, click Create QOS Spec . 
Enter a name for the QOS Spec . In the Consumer field, specify where the QOS policy should be enforced: Table 2.1. Consumer Types Type Description back-end QOS policy will be applied to the Block Storage back end. front-end QOS policy will be applied to Compute. both QOS policy will be applied to both Block Storage and Compute. Click Create . The new QOS Spec should now appear in the QOS Specs table. In the QOS Specs table, select the new spec's Manage Specs action. Click Create , and specify the Key and Value . The key-value pair must be valid; otherwise, specifying a volume type associated with this QOS Spec during volume creation will fail. For example, to set read limit IOPS to 500 , use the following Key/Value pair: Click Create . The associated setting (key-value pair) now appears in the Key-Value Pairs table. 2.6.2. Setting capacity-derived Quality-of-Service limits You can use volume types to implement capacity-derived Quality-of-Service (QoS) limits on volumes. This will allow you to set a deterministic IOPS throughput based on the size of provisioned volumes. Doing this simplifies how storage resources are provided to users, namely providing a user with pre-determined (and, ultimately, highly predictable) throughput rates based on the volume size they provision. In particular, the Block Storage service allows you to set how many IOPS to allocate to a volume based on the actual provisioned size. This throughput is set on an IOPS per GB basis through the following QoS keys: These keys allow you to set read, write, or total IOPS to scale with the size of provisioned volumes. For example, if the volume type uses read_iops_sec_per_gb=500 , then a provisioned 3GB volume would automatically have a read IOPS of 1500. Capacity-derived QoS limits are set per volume type, and configured like any normal QoS spec. In addition, these limits are supported by the underlying Block Storage service directly, and are not dependent on any particular driver. For more information about volume types, see Section 2.3, "Group volume configuration with volume types" and Section 2.3.2, "Creating and configuring a volume type" . For instructions on how to set QoS specs, see Section 2.6, "Block Storage service (cinder) Quality-of-Service" . Warning When you apply a volume type (or perform a volume re-type) with capacity-derived QoS limits to an attached volume, the limits will not be applied. The limits will only be applied once you detach the volume from its instance. See Section 4.5, "Block Storage volume retyping" for information about volume re-typing. 2.6.3. Associating a Quality-of-Service specification with a volume type As an administrator, you can associate a QOS Spec to an existing volume type using the Volume Types table. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Overcloud deployment output . Procedure As an administrator in the dashboard, select Admin > Volumes > Volume Types . In the Volume Types table, select the type's Manage QOS Spec Association action. Select a QOS Spec from the QOS Spec to be associated list. To disassociate a QOS specification from an existing volume type, select None . Click Associate . The selected QOS Spec now appears in the Associated QOS Spec column of the edited volume type.
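As a CLI sketch of the same workflow (the specification and volume type names are illustrative, and the property key is one of the capacity-derived keys described above):
$ openstack volume qos create --consumer back-end --property read_iops_sec_per_gb=500 capacity-derived-qos
$ openstack volume qos associate capacity-derived-qos gold-tier
$ openstack volume qos list
With this association in place, a 3GB volume created with the gold-tier volume type would receive a read IOPS limit of 1500, as described in Section 2.6.2.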
2.7. Block Storage service (cinder) volume encryption Volume encryption helps provide basic data protection in case the volume back-end is either compromised or outright stolen. Both Compute and Block Storage services are integrated to allow instances to read, access, and use encrypted volumes. You must deploy Barbican to take advantage of volume encryption. Important Volume encryption is not supported on file-based volumes (such as NFS). Volume encryption only supports LUKS1 and not LUKS2. Retyping an unencrypted volume to an encrypted volume of the same size is not supported, because encrypted volumes require additional space to store encryption data. For more information about encrypting unencrypted volumes, see Encrypting unencrypted volumes . Volume encryption is applied through volume type. See Section 2.7.2, "Configuring Block Storage service volume encryption with the CLI" for information on encrypted volume types. 2.7.1. Configuring Block Storage service volume encryption with the Dashboard To create encrypted volumes, you first need an encrypted volume type . Encrypting a volume type involves setting what provider class, cipher, and key size it should use. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud in Director Installation and Usage . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools in Director Installation and Usage . Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Overcloud deployment output in Director Installation and Usage . Procedure As an admin user in the dashboard, select Admin > Volumes > Volume Types . In the Actions column of the volume type to be encrypted, select Create Encryption to launch the Create Volume Type Encryption wizard. From there, configure the Provider , Control Location , Cipher , and Key Size settings of the volume type's encryption. The Description column describes each setting. Important The values listed below are the only supported options for Provider , Cipher , and Key Size . Enter luks for Provider . Enter aes-xts-plain64 for Cipher . Enter 256 for Key Size . Click Create Volume Type Encryption . Once you have an encrypted volume type, you can invoke it to automatically create encrypted volumes. For more information on creating a volume type, see Section 2.3.2, "Creating and configuring a volume type" . Specifically, select the encrypted volume type from the Type drop-down list in the Create Volume window. To configure an encrypted volume type through the CLI, see Section 2.7.2, "Configuring Block Storage service volume encryption with the CLI" . You can also re-configure the encryption settings of an encrypted volume type. Select Update Encryption from the Actions column of the volume type to launch the Update Volume Type Encryption wizard. In Project > Compute > Volumes , check the Encrypted column in the Volumes table to determine whether the volume is encrypted. If the volume is encrypted, click Yes in that column to view the encryption settings. 2.7.2. Configuring Block Storage service volume encryption with the CLI To create encrypted volumes, you first need an encrypted volume type . Encrypting a volume type involves setting what provider class, cipher, and key size it should use. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment.
For more information, see Creating a basic overcloud with CLI tools . Procedure Create a volume type: Configure the cipher, key size, control location, and provider settings: Create an encrypted volume: For more information, see the Manage secrets with the OpenStack Key Manager guide. 2.7.3. Automatic deletion of volume image encryption key The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image. Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key. The Block Storage service automatically adds two properties to a volume image: cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image. cinder_encryption_key_deletion_policy - The policy that tells the Image service to tell the Key Management service whether to delete the key associated with this image. Important The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values . When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy property to on_image_deletion . When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy equals on_image_deletion . Important Red Hat does not recommend manual manipulation of the cinder_encryption_key_id or cinder_encryption_key_deletion_policy properties. If you use the encryption key that is identified by the value of cinder_encryption_key_id for any other purpose, you risk data loss. 2.8. Deploying availability zones for Block Storage volume back ends An availability zone is a provider-specific method of grouping cloud instances and services. Director uses CinderXXXAvailabilityZone parameters (where XXX is associated with a specific back end) to configure different availability zones for Block Storage volume back ends. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . Procedure Add the following parameters to the environment file to create two availability zones: Replace XXX and YYY with supported back-end values, such as: Note Search the /usr/share/openstack-tripleo-heat-templates/deployment/cinder/ directory for the heat template associated with your back end for the correct back end value. The following example deploys two back ends where rbd is zone 1 and iSCSI is zone 2: Deploy the overcloud and include the updated environment file. 2.9. Block Storage service (cinder) consistency groups You can use the Block Storage (cinder) service to set consistency groups to group multiple volumes together as a single entity. This means that you can perform operations on multiple volumes at the same time instead of individually. You can use consistency groups to create snapshots for multiple volumes simultaneously. This also means that you can restore or clone those volumes simultaneously. A volume can be a member of multiple consistency groups. However, you cannot delete, retype, or migrate volumes after you add them to a consistency group. 2.9.1. 
Configuring Block Storage service consistency groups By default, Block Storage security policy disables the consistency groups API. You must enable this API before you use the feature. The related consistency group entries in the /etc/cinder/policy.json file of the node that hosts the Block Storage API service, openstack-cinder-api , list the default settings: You must change these settings in an environment file and then deploy them to the overcloud by using the openstack overcloud deploy command. Do not edit the JSON file directly because the changes are overwritten each time the overcloud is deployed. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . Procedure Edit an environment file and add a new entry to the parameter_defaults section. This ensures that the entries are updated in the containers and are retained whenever the environment is re-deployed by director with the openstack overcloud deploy command. Add a new section to an environment file using CinderApiPolicies to set the consistency group settings. The equivalent parameter_defaults section with the default settings from the JSON file appears in the following way: The value 'group:nobody' determines that no group can use this feature so it is effectively disabled. To enable it, change the group to another value. For increased security, set the permissions for both consistency group API and volume type management API to be identical. The volume type management API is set to "rule:admin_or_owner" by default in the same /etc/cinder/policy.json file : To make the consistency groups feature available to all users, set the API policy entries to allow users to create, use, and manage their own consistency groups. To do so, use rule:admin_or_owner : When you have created the environment file in /home/stack/templates/ , log in as the stack user and deploy the configuration: Replace <ENV_FILE.yaml> with the name of the file with the ExtraConfig settings you added. Important If you passed any extra environment files when you created the overcloud, pass them again here by using the -e option to avoid making undesired changes to the overcloud. For more information about the openstack overcloud deploy command, see Deployment Command in the Director Installation and Usage guide. 2.9.2. Creating Block Storage consistency groups with the Dashboard After you enable the consistency groups API, you can start creating consistency groups. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Overcloud deployment output . Procedure As an admin user in the dashboard, select Project > Compute > Volumes > Volume Consistency Groups . Click Create Consistency Group . In the Consistency Group Information tab of the wizard, enter a name and description for your consistency group. Then, specify its Availability Zone . You can also add volume types to your consistency group. When you create volumes within the consistency group, the Block Storage service will apply compatible settings from those volume types. To add a volume type, click its + button from the All available volume types list. Click Create Consistency Group . It appears in the Volume Consistency Groups table.
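After the consistency groups API is enabled, the same operations are available from the cinder client. The following is a sketch only; the group name, availability zone, volume type, and size are placeholders, and the API policy must already permit these calls for your user:
# Create a consistency group that accepts volumes of the given type
$ cinder consisgroup-create --name <cg_name> --availability-zone <az> <volume_type_name>
# List consistency groups and confirm that the new group appears
$ cinder consisgroup-list
# Create a volume inside the consistency group
$ cinder create --consisgroup-id <cg_id> --volume-type <volume_type_name> --name <volume_name> <size_in_gb>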
Managing Block Storage service consistency groups with the Dashboard Use the Red Hat OpenStack Platform (RHOSP) Dashboard to manage consistency groups for Block Storage volumes. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud in Director Installation and Usage . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools in Director Installation and Usage . Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Overcloud deployment output in Director Installation and Usage . Procedure Optional: You can change the name or description of a consistency group by selecting Edit Consistency Group from its Action column. To add or remove volumes from a consistency group directly, as an admin user in the dashboard, select Project > Compute > Volumes > Volume Consistency Groups . Find the consistency group you want to configure. In the Actions column of that consistency group, select Manage Volumes . This launches the Add/Remove Consistency Group Volumes wizard. To add a volume to the consistency group, click its + button from the All available volumes list. To remove a volume from the consistency group, click its - button from the Selected volumes list. Click Edit Consistency Group . 2.9.4. Creating and managing consistency group snapshots for the Block Storage service After you add volumes to a consistency group, you can now create snapshots from it. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . Procedure Log in as admin user from the command line on the node that hosts the openstack-cinder-api and enter: This configures the client to use version 2 of the openstack-cinder-api . List all available consistency groups and their respective IDs: Create snapshots using the consistency group: Replace: <CGSNAPNAME> with the name of the snapshot (optional). <DESCRIPTION> with a description of the snapshot (optional). <CGNAMEID> with the name or ID of the consistency group. Display a list of all available consistency group snapshots: 2.9.5. Cloning Block Storage service consistency groups You can also use consistency groups to create a whole batch of pre-configured volumes simultaneously. You can do this by cloning an existing consistency group or restoring a consistency group snapshot. Both processes use the same command. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Procedure To clone an existing consistency group: Replace: <CGNAMEID> is the name or ID of the consistency group you want to clone. <CGNAME> is the name of your consistency group (optional). <DESCRIPTION> is a description of your consistency group (optional). To create a consistency group from a consistency group snapshot: Replace <CGSNAPNAME> with the name or ID of the snapshot you are using to create the consistency group. 2.10. Specifying back ends for volume creation Whenever multiple Block Storage (cinder) back ends are configured, you must also create a volume type for each back end. You can then use the type to specify which back end to use for a created volume. For more information about volume types, see Section 2.3, "Group volume configuration with volume types" . 
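For example, the following command-line sketch illustrates this pattern. The volume type name fast-ssd, the back end name tripleo_iscsi, and the volume name data-volume are illustrative placeholders, not values taken from this guide, and the cinder type-key command is shown here as the usual way to associate a type with a back end; the volume_backend_name value must match the name configured for your back end:
cinder type-create fast-ssd
# Associate the volume type with a specific back end
cinder type-key fast-ssd set volume_backend_name=tripleo_iscsi
# Create a 10 GB volume on that back end by selecting the volume type
cinder create --volume-type fast-ssd --name data-volume 10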
To specify a back end when creating a volume, select its corresponding volume type from the Type list (see Section 3.1, "Creating Block Storage volumes" ). If you do not specify a back end during volume creation, the Block Storage service automatically chooses one for you. By default, the service chooses the back end with the most available free space. You can also configure the Block Storage service to choose randomly among all available back ends instead. For more information, see Section 3.5, "Allocating volumes to multiple back ends" . 2.11. Enabling LVM2 filtering on overcloud nodes If you use LVM2 (Logical Volume Management) volumes with certain Block Storage service (cinder) back ends, the volumes that you create inside Red Hat OpenStack Platform (RHOSP) guests might become visible on the overcloud nodes that host cinder-volume or nova-compute containers. In this case, the LVM2 tools on the host scan the LVM2 volumes that the OpenStack guest creates, which can result in one or more of the following problems on Compute or Controller nodes: LVM appears to see volume groups from guests LVM reports duplicate volume group names Volume detachments fail because LVM is accessing the storage Guests fail to boot due to problems with LVM The LVM on the guest machine is in a partial state due to a missing disk that actually exists Block Storage service (cinder) actions fail on devices that have LVM Block Storage service (cinder) snapshots fail to remove correctly Errors during live migration: /etc/multipath.conf does not exist To prevent this erroneous scanning, and to segregate guest LVM2 volumes from the host node, you can enable and configure a filter with the LVMFilterEnabled heat parameter when you deploy or update the overcloud. This filter is computed from the list of physical devices that host active LVM2 volumes. You can also allow and deny block devices explicitly with the LVMFilterAllowlist and LVMFilterDenylist parameters. You can apply this filtering globally, to specific node roles, or to specific devices. Prerequisites A successful undercloud installation. For more information, see Installing the undercloud . Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a new environment file, or modify an existing environment file. In this example, create a new file lvm2-filtering.yaml : Include the following parameter in the environment file: You can further customize the implementation of the LVM2 filter. For example, to enable filtering only on Compute nodes, use the following configuration: These parameters also support regular expression. To enable filtering only on Compute nodes, and ignore all devices that start with /dev/sd , use the following configuration: Run the openstack overcloud deploy command and include the environment file that contains the LVM2 filtering configuration, as well as any other environment files that are relevant to your overcloud deployment: 2.12. Multipath configuration Use multipath to configure multiple I/O paths between server nodes and storage arrays into a single device to create redundancy and improve performance. 2.12.1. Using director to configure multipath You can configure multipath on a Red Hat OpenStack Platform (RHOSP) overcloud deployment for greater bandwidth and networking resilience. Important When you configure multipath on an existing deployment, the new workloads are multipath aware. 
If you have any pre-existing workloads, you must shelve and unshelve the instances to enable multipath on these instances. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . Procedure Log in to the undercloud host as the stack user. Source the stackrc credentials file: Use an overrides environment file or create a new one, for example multipath_overrides.yaml . Add and set the following parameter: Note The default settings generate a basic multipath configuration that works for most environments. However, check with your storage vendor for recommendations, because some vendors have optimized configurations that are specific to their hardware. For more information about multipath, see the Configuring device mapper multipath guide. Optional: If you have a multipath configuration file for your overcloud deployment, use the MultipathdCustomConfigFile parameter to specify the location of this file: You must copy your multipath configuration file to the /var/lib/mistral directory: Replace <config_file_name> with the name of your file. Set the MultipathdCustomConfigFile parameter to the location of your multipath configuration file: parameter_defaults: MultipathdCustomConfigFile: /var/lib/mistral/<config_file_name> Note Other TripleO multipath parameters override any corresponding value in the local custom configuration file. For example, if MultipathdEnableUserFriendlyNames is False , the files on the overcloud nodes are updated to match, even if the setting is enabled in the local custom file. For more information about multipath parameters, see Multipath heat template parameters . Include the environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment: 2.12.1.1. Multipath heat template parameters Use the following parameters to configure multipath.
MultipathdEnable - Defines whether to enable the multipath daemon. This parameter defaults to True through the configuration contained in the multipathd.yaml file. Default value: True
MultipathdEnableUserFriendlyNames - Defines whether to enable the assignment of a user friendly name to each path. Default value: False
MultipathdEnableFindMultipaths - Defines whether to automatically create a multipath device for each path. Default value: True
MultipathdSkipKpartx - Defines whether to skip automatically creating partitions on the device. Default value: True
MultipathdCustomConfigFile - Includes a local, custom multipath configuration file on the overcloud nodes. By default, a minimal multipath.conf file is installed. NOTE: Other TripleO multipath parameters override any corresponding value in any local, custom configuration file that you add. For example, if MultipathdEnableUserFriendlyNames is False , the files on the overcloud nodes are updated to match, even if the setting is enabled in your local, custom file.
2.12.2. Verifying multipath configuration This procedure describes how to verify multipath configuration on new or existing overcloud deployments. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Procedure Create a VM. Attach a non-encrypted volume to the VM. Get the name of the Compute node that contains the instance: Replace INSTANCE with the name of the VM that you booted.
Retrieve the virsh name of the instance: Replace INSTANCE with the name of the VM that you booted. Get the IP address of the Compute node: Replace compute_name with the name from the output of the nova show INSTANCE command. SSH into the Compute node that runs the VM: Replace COMPUTE_NODE_IP with the IP address of the Compute node. Log in to the container that runs virsh: Enter the following command on a Compute node instance to verify that it is using multipath in the cinder volume host location: Replace VIRSH_INSTANCE_NAME with the output of the nova show INSTANCE | grep instance_name command. If the instance shows a value other than /dev/dm- , the connection is non-multipath and you must refresh the connection info with the nova shelve and nova unshelve commands: Note If you have more than one type of back end, you must verify the instances and volumes on all back ends, because connection info that each back end returns might vary.
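The commands for each of these verification steps appear in the listing that follows. As a condensed reference, the flow looks like the following sketch, where INSTANCE, COMPUTE_NODE_IP, and VIRSH_INSTANCE_NAME are placeholders for your own values:
nova show INSTANCE | grep OS-EXT-SRV-ATTR:host        # Compute node that hosts the VM
nova show INSTANCE | grep instance_name               # virsh name of the instance
. stackrc
nova list | grep compute_name                         # IP address of the Compute node
ssh heat-admin@COMPUTE_NODE_IP                        # log in to the Compute node
podman exec -it nova_libvirt /bin/bash                # enter the container that runs virsh
virsh domblklist VIRSH_INSTANCE_NAME | grep /dev/dm   # multipath connections appear as /dev/dm-* devices
# If the volume is not multipathed, refresh the connection information:
nova shelve <instance>
nova unshelve <instance>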
[ "cinder cluster-list --detailed", "cinder service-list", "cinder work-cleanup [--cluster <cluster-name>] [--host <hostname>] [--binary <binary>] [--is-up <True|true|False|false>] [--disabled <True|true|False|false>] [--resource-id <resource-id>] [--resource-type <Volume|Snapshot>]", "cinder work-cleanup --cluster tripleo@tripleo_ceph", "cinder service-list", "+------------------+---------------------------+------+--------- | Binary | Host | Zone | Status +------------------+---------------------------+------+--------- | cinder-backup | localhost.localdomain | nova | enabled | cinder-scheduler | localhost.localdomain | nova | enabled | cinder-volume | *localhost.localdomain@lvm* | nova | enabled +------------------+---------------------------+------+---------", "cinder get-capabilities _VOLSVCHOST_", "cinder get-capabilities localhost.localdomain@lvm +---------------------+-----------------------------------------+ | Volume stats | Value | +---------------------+-----------------------------------------+ | description | None | | display_name | None | | driver_version | 3.0.0 | | namespace | OS::Storage::Capabilities::localhost.loc | pool_name | None | | storage_protocol | iSCSI | | vendor_name | Open Source | | visibility | None | | volume_backend_name | lvm | +---------------------+-----------------------------------------+ +--------------------+------------------------------------------+ | Backend properties | Value | +--------------------+------------------------------------------+ | compression | {u'type': u'boolean', u'description' | qos | {u'type': u'boolean', u'des | replication | {u'type': u'boolean', u'description' | thin_provisioning | {u'type': u'boolean', u'description': u'S +--------------------+------------------------------------------+", "cinder service-list", "+------------------+---------------------------+------+--------- | Binary | Host | Zone | Status +------------------+---------------------------+------+--------- | cinder-backup | localhost.localdomain | nova | enabled | cinder-scheduler | localhost.localdomain | nova | enabled | cinder-volume | *localhost.localdomain@lvm* | nova | enabled +------------------+---------------------------+------+---------", "cinder get-capabilities _VOLSVCHOST_", "cinder get-capabilities localhost.localdomain@lvm +---------------------+-----------------------------------------+ | Volume stats | Value | +---------------------+-----------------------------------------+ | description | None | | display_name | None | | driver_version | 3.0.0 | | namespace | OS::Storage::Capabilities::localhost.loc | pool_name | None | | storage_protocol | iSCSI | | vendor_name | Open Source | | visibility | None | | volume_backend_name | lvm | +---------------------+-----------------------------------------+ +--------------------+------------------------------------------+ | Backend properties | Value | +--------------------+------------------------------------------+ | compression | {u'type': u'boolean', u'description' | qos | {u'type': u'boolean', u'des | replication | {u'type': u'boolean', u'description' | thin_provisioning | {u'type': u'boolean', u'description': u'S +--------------------+------------------------------------------+", "cinder type-create --is-public false <TYPE-NAME>", "cinder type-list", "cinder type-access-add --volume-type <TYPE-ID> --project-id <TENANT-ID>", "cinder type-access-list --volume-type <TYPE-ID>", "cinder type-access-remove --volume-type <TYPE-ID> --project-id <TENANT-ID>", "openstack project create --enable --description 
\"Block Storage Internal Project\" cinder-internal +-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Block Storage Internal Tenant | | enabled | True | | id | cb91e1fe446a45628bb2b139d7dccaef | | name | cinder-internal | +-------------+----------------------------------+ openstack user create --project cinder-internal cinder-internal +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | None | | enabled | True | | id | 84e9672c64f041d6bfa7a930f558d946 | | name | cinder-internal | |project_id| cb91e1fe446a45628bb2b139d7dccaef | | username | cinder-internal | +----------+----------------------------------+", "parameter_defaults: ExtraConfig: cinder::config::cinder_config: DEFAULT/cinder_internal_tenant_project_id: value: TENANTID DEFAULT/cinder_internal_tenant_user_id: value: USERID BACKEND/image_volume_cache_enabled: 1 value: True BACKEND/image_volume_cache_max_size_gb: value: MAXSIZE 2 BACKEND/image_volume_cache_max_count: value: MAXNUMBER 3", "openstack overcloud deploy --templates -e /home/stack/templates/<ENV_FILE>.yaml", "read_iops_sec=500", "read_iops_sec_per_gb write_iops_sec_per_gb total_iops_sec_per_gb", "cinder type-create encrypt-type", "cinder encryption-type-create --cipher aes-xts-plain64 --key-size 256 --control-location front-end encrypt-type luks", "cinder --debug create 1 --volume-type encrypt-type --name DemoEncVol", "parameter_defaults: CinderXXXAvailabilityZone: zone1 CinderYYYAvailabilityZone: zone2", "Cinder ISCSI AvailabilityZone Cinder Nfs AvailabilityZone Cinder Rbd AvailabilityZone", "parameter_defaults: CinderRbdAvailabilityZone: zone1 CinderISCSIAvailabilityZone: zone2", "\"consistencygroup:create\" : \"group:nobody\", \"consistencygroup:delete\": \"group:nobody\", \"consistencygroup:update\": \"group:nobody\", \"consistencygroup:get\": \"group:nobody\", \"consistencygroup:get_all\": \"group:nobody\", \"consistencygroup:create_cgsnapshot\" : \"group:nobody\", \"consistencygroup:delete_cgsnapshot\": \"group:nobody\", \"consistencygroup:get_cgsnapshot\": \"group:nobody\", \"consistencygroup:get_all_cgsnapshots\": \"group:nobody\",", "parameter_defaults: CinderApiPolicies: { cinder-consistencygroup_create: { key: 'consistencygroup:create', value: 'group:nobody' }, cinder-consistencygroup_delete: { key: 'consistencygroup:delete', value: 'group:nobody' }, cinder-consistencygroup_update: { key: 'consistencygroup:update', value: 'group:nobody' }, cinder-consistencygroup_get: { key: 'consistencygroup:get', value: 'group:nobody' }, cinder-consistencygroup_get_all: { key: 'consistencygroup:get_all', value: 'group:nobody' }, cinder-consistencygroup_create_cgsnapshot: { key: 'consistencygroup:create_cgsnapshot', value: 'group:nobody' }, cinder-consistencygroup_delete_cgsnapshot: { key: 'consistencygroup:delete_cgsnapshot', value: 'group:nobody' }, cinder-consistencygroup_get_cgsnapshot: { key: 'consistencygroup:get_cgsnapshot', value: 'group:nobody' }, cinder-consistencygroup_get_all_cgsnapshots: { key: 'consistencygroup:get_all_cgsnapshots', value: 'group:nobody' }, }", "\"volume_extension:types_manage\": \"rule:admin_or_owner\",", "CinderApiPolicies: { cinder-consistencygroup_create: { key: 'consistencygroup:create', value: 'rule:admin_or_owner' }, cinder-consistencygroup_delete: { key: 'consistencygroup:delete', value: 'rule:admin_or_owner' }, cinder-consistencygroup_update: { key: 'consistencygroup:update', 
value: 'rule:admin_or_owner' }, cinder-consistencygroup_get: { key: 'consistencygroup:get', value: 'rule:admin_or_owner' }, cinder-consistencygroup_get_all: { key: 'consistencygroup:get_all', value: 'rule:admin_or_owner' }, cinder-consistencygroup_create_cgsnapshot: { key: 'consistencygroup:create_cgsnapshot', value: 'rule:admin_or_owner' }, cinder-consistencygroup_delete_cgsnapshot: { key: 'consistencygroup:delete_cgsnapshot', value: 'rule:admin_or_owner' }, cinder-consistencygroup_get_cgsnapshot: { key: 'consistencygroup:get_cgsnapshot', value: 'rule:admin_or_owner' }, cinder-consistencygroup_get_all_cgsnapshots: { key: 'consistencygroup:get_all_cgsnapshots', value: 'rule:admin_or_owner' }, }", "openstack overcloud deploy --templates -e /home/stack/templates/<ENV_FILE>.yaml", "export OS_VOLUME_API_VERSION=2", "cinder consisgroup-list", "cinder cgsnapshot-create --name <CGSNAPNAME> --description \"<DESCRIPTION>\" <CGNAMEID>", "cinder cgsnapshot-list", "cinder consisgroup-create-from-src --source-cg <CGNAMEID> --name <CGNAME> --description \"<DESCRIPTION>\"", "cinder consisgroup-create-from-src --cgsnapshot <CGSNAPNAME> --name <CGNAME> --description \"<DESCRIPTION>", "source ~/stackrc", "touch ~/lvm2-filtering.yaml", "parameter_defaults: LVMFilterEnabled: true", "parameter_defaults: ComputeParameters: LVMFilterEnabled: true", "parameter_defaults: ComputeParameters: LVMFilterEnabled: true LVMFilterDenylist: - /dev/sd.*", "openstack overcloud deploy --templates <environment-files> -e lvm2-filtering.yaml", "source ~/stackrc", "parameter_defaults: ExtraConfig: cinder::config::cinder_config: backend_defaults/use_multipath_for_image_xfer: value: true", "sudo cp <config_file_name> /var/lib/mistral", "parameter_defaults: MultipathdCustomConfigFile: /var/lib/mistral/<config_file_name>", "openstack overcloud deploy --templates ... -e <existing_overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/multipathd.yaml -e multipath_overrides.yaml ...", "nova show INSTANCE | grep OS-EXT-SRV-ATTR:host", "nova show INSTANCE | grep instance_name", ". stackrc nova list | grep compute_name", "ssh heat-admin@ COMPUTE_NODE_IP", "podman exec -it nova_libvirt /bin/bash", "virsh domblklist VIRSH_INSTANCE_NAME | grep /dev/dm", "nova shelve <instance> nova unshelve <instance>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/storage_guide/assembly-configuring-the-block-storage-service_osp-storage-guide
5.50. ding-libs
5.50. ding-libs 5.50.1. RHBA-2012:0799 - ding-libs bug fix update Updated ding-libs packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The ding-libs packages contain a set of libraries used by the System Security Services Daemon (SSSD) and other projects and provide functions to manipulate filesystem pathnames (libpath_utils), a hash table to manage storage and access time properties (libdhash), a data type to collect data in a hierarchical structure (libcollection), a dynamically growing, reference-counted array (libref_array), and a library to process configuration files in initialization format (INI) into a library collection data structure (libini_config). Bug Fixes BZ# 736074 Prior to this update, memory could become corrupted if the initial table size exceeded 1024 buckets. This update modifies libdhash so that large initial table sizes now correctly allocate memory. BZ# 801393 Prior to this update, if the combination of two strings concatenated by the path_concat() function exceeded the size of the destination buffer, the buffer was filled and the null terminator was written one character past the allocated size. This update modifies the underlying code so that the null terminator is no longer added after the end of the buffer. All users of ding-libs are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/ding-libs
Chapter 2. MTC release notes
Chapter 2. MTC release notes 2.1. Migration Toolkit for Containers 1.8 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.1.1. Migration Toolkit for Containers 1.8.4 release notes 2.1.1.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.4 has the following technical changes: MTC 1.8.4 extends its dependency resolution to include support for using OpenShift API for Data Protection (OADP) 1.4. Support for KubeVirt Virtual Machines with DirectVolumeMigration MTC 1.8.4 adds support for KubeVirt Virtual Machines (VMs) with Direct Volume Migration (DVM). 2.1.1.2. Resolved issues MTC 1.8.4 has the following major resolved issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. Earlier versions of MTC are impacted, while MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) UI stuck at Namespaces while creating a migration plan When trying to create a migration plan from the MTC UI, the migration plan wizard becomes stuck at the Namespaces step. This issue has been resolved in MTC 1.8.4. (MIG-1597) Migration fails with error of no matches for kind Virtual machine in version kubevirt/v1 During the migration of an application, all the necessary steps, including the backup, DVM, and restore, are successfully completed. However, the migration is marked as unsuccessful with the error message no matches for kind Virtual machine in version kubevirt/v1 . (MIG-1594) Direct Volume Migration fails when migrating to a namespace different from the source namespace On performing a migration from source cluster to target cluster, with the target namespace different from the source namespace, the DVM fails. (MIG-1592) Direct Image Migration does not respect label selector on migplan When using Direct Image Migration (DIM), if a label selector is set on the migration plan, DIM does not respect it and attempts to migrate all imagestreams in the namespace. (MIG-1533) 2.1.1.3. Known issues MTC 1.8.4 has the following known issues: The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) . Rsync pod fails to start causing the DVM phase to fail The DVM phase fails due to the Rsync pod failing to start, because of a permission issue. (BZ#2231403) Migrated builder pod fails to push to image registry When migrating an application including BuildConfig from source to target cluster, the builder pod results in error, failing to push the image to the image registry. 
(BZ#2234781) Conflict condition gets cleared briefly after it is created When creating a new state migration plan that results in a conflict error, that error is cleared shortly after it is displayed. (BZ#2144299) PvCapacityAdjustmentRequired Warning Not Displayed After Setting pv_resizing_threshold The PvCapacityAdjustmentRequired warning fails to appear in the migration plan after the pv_resizing_threshold is adjusted. (BZ#2270160) 2.1.2. Migration Toolkit for Containers 1.8.3 release notes 2.1.2.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.3 has the following technical changes: OADP 1.3 is now supported MTC 1.8.3 adds support for OpenShift API for Data Protection (OADP) 1.3 as a dependency of MTC 1.8.z. 2.1.2.2. Resolved issues MTC 1.8.3 has the following major resolved issues: CVE-2024-24786: Flaw in Golang protobuf module causes unmarshal function to enter infinite loop In releases of MTC, a vulnerability was found in Golang's protobuf module, where the unmarshal function entered an infinite loop while processing certain invalid inputs. Consequently, an attacker could provide carefully constructed invalid inputs to cause the function to enter an infinite loop. With this update, the unmarshal function works as expected. For more information, see CVE-2024-24786 . CVE-2023-45857: Axios Cross-Site Request Forgery Vulnerability In releases of MTC, a vulnerability was discovered in Axios 1.5.1 that inadvertently revealed a confidential XSRF-TOKEN stored in cookies by including it in the HTTP header X-XSRF-TOKEN for every request made to the host, allowing attackers to view sensitive information. For more information, see CVE-2023-45857 . Restic backup does not work properly when the source workload is not quiesced In releases of MTC, some files did not migrate when deploying an application with a route. The Restic backup did not function as expected when the quiesce option was unchecked for the source workload. This issue has been resolved in MTC 1.8.3. For more information, see BZ#2242064 . The Migration Controller fails to install due to an unsupported value error in Velero The MigrationController failed to install due to an unsupported value error in Velero. Updating OADP 1.3.0 to OADP 1.3.1 resolves this problem. This issue has been resolved in MTC 1.8.3. For more information, see BZ#2267018 . For a complete list of all resolved issues, see the list of MTC 1.8.3 resolved issues in Jira. 2.1.2.3. Known issues Migration Toolkit for Containers (MTC) 1.8.3 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform version 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) . For a complete list of all known issues, see the list of MTC 1.8.3 known issues in Jira. 2.1.3. Migration Toolkit for Containers 1.8.2 release notes 2.1.3.1.
Resolved issues This release has the following major resolved issues: Backup phase fails after setting custom CA replication repository In releases of Migration Toolkit for Containers (MTC), after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurred during the backup phase. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution In releases of (MTC), versions before 4.1.3 of the tough-cookie package used in MTC were vulnerable to prototype pollution. This vulnerability occurred because CookieJar did not handle cookies properly when the value of the rejectPublicSuffixes was set to false . For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In releases of (MTC), versions of the semver package before 7.5.2, used in MTC, were vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data was provided as a range. For more details, see (CVE-2022-25883) 2.1.3.2. Known issues MTC 1.8.2 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.4. Migration Toolkit for Containers 1.8.1 release notes 2.1.4.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.1 has the following major resolved issues: CVE-2023-39325: golang: net/http, x/net/http2: rapid stream resets can cause excessive work A flaw was found in handling multiplexed streams in the HTTP/2 protocol, which is used by MTC. A client could repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates additional workload for the server in terms of setting up and dismantling streams, while avoiding any server-side limitations on the maximum number of active streams per connection, resulting in a denial of service due to server resource consumption. (BZ#2245079) It is advised to update to MTC 1.8.1 or later, which resolve this issue. For more details, see (CVE-2023-39325) and (CVE-2023-44487) 2.1.4.2. Known issues Migration Toolkit for Containers (MTC) 1.8.1 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes. An exception, ValueError: too many values to unpack , is returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.5. Migration Toolkit for Containers 1.8.0 release notes 2.1.5.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.0 has the following resolved issues: Indirect migration is stuck on backup stage In releases, an indirect migration became stuck at the backup stage, due to InvalidImageName error. ( (BZ#2233097) ) PodVolumeRestore remain In Progress keeping the migration stuck at Stage Restore In releases, on performing an indirect migration, the migration became stuck at the Stage Restore step, waiting for the podvolumerestore to be completed. 
( (BZ#2233868) ) Migrated application unable to pull image from internal registry on target cluster In releases, on migrating an application to the target cluster, the migrated application failed to pull the image from the internal image registry resulting in an application failure . ( (BZ#2233103) ) Migration failing on Azure due to authorization issue In releases, on an Azure cluster, when backing up to Azure storage, the migration failed at the Backup stage. ( (BZ#2238974) ) 2.1.5.2. Known issues MTC 1.8.0 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception ValueError: too many values to unpack returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) Old Restic pods are not getting removed on upgrading MTC 1.7.x 1.8.x In this release, on upgrading the MTC Operator from 1.7.x to 1.8.x, the old Restic pods are not being removed. Therefore after the upgrade, both Restic and node-agent pods are visible in the namespace. ( (BZ#2236829) ) Migrated builder pod fails to push to image registry In this release, on migrating an application including a BuildConfig from a source to target cluster, builder pod results in error , failing to push the image to the image registry. ( (BZ#2234781) ) [UI] CA bundle file field is not properly cleared In this release, after enabling Require SSL verification and adding content to the CA bundle file for an MCG NooBaa bucket in MigStorage, the connection fails as expected. However, when reverting these changes by removing the CA bundle content and clearing Require SSL verification , the connection still fails. The issue is only resolved by deleting and re-adding the repository. ( (BZ#2240052) ) Backup phase fails after setting custom CA replication repository In (MTC), after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurs during the backup phase. This issue is resolved in MTC 1.8.2. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution Versions before 4.1.3 of the tough-cookie package, used in MTC, are vulnerable to prototype pollution. This vulnerability occurs because CookieJar does not handle cookies properly when the value of the rejectPublicSuffixes is set to false . This issue is resolved in MTC 1.8.2. For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In releases of (MTC), versions of the semver package before 7.5.2, used in MTC, are vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data is provided as a range. This issue is resolved in MTC 1.8.2. For more details, see (CVE-2022-25883) 2.1.5.3. Technical changes This release has the following technical changes: Migration from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy Migration Toolkit for Containers Operator and Migration Toolkit for Containers 1.7.x. Migration from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. 
Migration Toolkit for Containers (MTC) 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x might be used. However, it must be the same MTC 1.Y.z on both source and destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. MTC 1.8.x by default installs OADP 1.2.x. Upgrading from MTC 1.7.x to MTC 1.8.0 requires manually changing the OADP channel to 1.2. If this is not done, the upgrade of the Operator fails. 2.2. Migration Toolkit for Containers 1.7 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.13 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.2.1. Migration Toolkit for Containers 1.7.17 release notes Migration Toolkit for Containers (MTC) 1.7.17 is a Container Grade Only (CGO) release, released to refresh the health grades of the containers, with no changes to any code in the product itself compared to that of MTC 1.7.16. 2.2.2. Migration Toolkit for Containers 1.7.16 release notes 2.2.2.1. Resolved issues This release has the following resolved issues: CVE-2023-45290: Golang: net/http : Memory exhaustion in the Request.ParseMultipartForm method A flaw was found in the net/http Golang standard library package, which impacts earlier versions of MTC. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with the Request.FormValue , Request.PostFormValue , or Request.FormFile methods, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2023-45290 . CVE-2024-24783: Golang: crypto/x509 : Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts earlier versions of MTC. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert . The default behavior is for TLS servers to not verify client certificates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24783 .
CVE-2024-24784: Golang: net/mail : Comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts earlier versions of MTC. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24784 . CVE-2024-24785: Golang: html/template : Errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts earlier versions of MTC. If errors returned from MarshalJSON methods contain user-controlled data, they could be used to break the contextual auto-escaping behavior of the html/template package, allowing subsequent actions to inject unexpected content into templates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24785 . CVE-2024-29180: webpack-dev-middleware : Lack of URL validation may lead to file leak A flaw was found in the webpack-dev-middleware package , which impacts earlier versions of MTC. This flaw fails to validate the supplied URL address sufficiently before returning local files, which could allow an attacker to craft URLs to return arbitrary local files from the developer's machine. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-29180 . CVE-2024-30255: envoy : HTTP/2 CPU exhaustion due to CONTINUATION frame flood A flaw was found in how the envoy proxy implements the HTTP/2 codec, which impacts earlier versions of MTC. There are insufficient limitations placed on the number of CONTINUATION frames that can be sent within a single stream, even after exceeding the header map limits of envoy . This flaw could allow an unauthenticated remote attacker to send packets to vulnerable servers. These packets could consume compute resources and cause a denial of service (DoS). To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-30255 . 2.2.2.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with a Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, but the Direct Volume Migration (DVM) fails with the rsync pod on the source namespace moving into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that returns a conflict error message, the error message is cleared very shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations of different provider types configured in a cluster When there are multiple Volume Snapshot Locations (VSLs) in a cluster with different provider types, but you have not set any of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.3. Migration Toolkit for Containers 1.7.15 release notes 2.2.3.1. 
Resolved issues This release has the following resolved issues: CVE-2024-24786: A flaw was found in Golang's protobuf module, where the unmarshal function can enter an infinite loop A flaw was found in the protojson.Unmarshal function that could cause the function to enter an infinite loop when unmarshaling certain forms of invalid JSON messages. This condition could occur when unmarshaling into a message that contained a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option was set in a JSON-formatted message. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-24786) . CVE-2024-28180: jose-go improper handling of highly compressed data A vulnerability was found in Jose due to improper handling of highly compressed data. An attacker could send a JSON Web Encryption (JWE) encrypted message that contained compressed data that used large amounts of memory and CPU when decompressed by the Decrypt or DecryptMulti functions. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-28180) . 2.2.3.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, and Direct Volume Migration (DVM) fails with the rsync pod on the source namespace going into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that results in a conflict error message, the error message is cleared shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations (VSLs) of different provider types configured in a cluster with no specified default VSL. When there are multiple VSLs in a cluster with different provider types, and you set none of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.4. Migration Toolkit for Containers 1.7.14 release notes 2.2.4.1. Resolved issues This release has the following resolved issues: CVE-2023-39325 CVE-2023-44487: various flaws A flaw was found in the handling of multiplexed streams in the HTTP/2 protocol, which is utilized by Migration Toolkit for Containers (MTC). A client could repeatedly make a request for a new multiplex stream then immediately send an RST_STREAM frame to cancel those requests. This activity created additional workloads for the server in terms of setting up and dismantling streams, but avoided any server-side limitations on the maximum number of active streams per connection. As a result, a denial of service occurred due to server resource consumption. (BZ#2243564) (BZ#2244013) (BZ#2244014) (BZ#2244015) (BZ#2244016) (BZ#2244017) To resolve this issue, upgrade to MTC 1.7.14. For more details, see (CVE-2023-44487) and (CVE-2023-39325) . CVE-2023-39318 CVE-2023-39319 CVE-2023-39321: various flaws (CVE-2023-39318) : A flaw was discovered in Golang, utilized by MTC. The html/template package did not properly handle HTML-like "" comment tokens, or the hashbang "#!" comment tokens, in <script> contexts. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39319) : A flaw was discovered in Golang, utilized by MTC. 
The html/template package did not apply the proper rules for handling occurrences of "<script" , "<!--" , and "</script" within JavaScript literals in <script> contexts. This could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39321) : A flaw was discovered in Golang, utilized by MTC. Processing an incomplete post-handshake message for a QUIC connection could cause a panic. (BZ#2238062) (BZ#2238088) (CVE-2023-3932) : A flaw was discovered in Golang, utilized by MTC. Connections using the QUIC transport protocol did not set an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. (BZ#2238088) To resolve these issues, upgrade to MTC 1.7.14. For more details, see (CVE-2023-39318) , (CVE-2023-39319) , and (CVE-2023-39321) . 2.2.4.2. Known issues There are no major known issues in this release. 2.2.5. Migration Toolkit for Containers 1.7.13 release notes 2.2.5.1. Resolved issues There are no major resolved issues in this release. 2.2.5.2. Known issues There are no major known issues in this release. 2.2.6. Migration Toolkit for Containers 1.7.12 release notes 2.2.6.1. Resolved issues There are no major resolved issues in this release. 2.2.6.2. Known issues This release has the following known issues: Error code 504 is displayed on the Migration details page On the Migration details page, at first, the migration details are displayed without any issues. However, after sometime, the details disappear, and a 504 error is returned. ( BZ#2231106 ) Old restic pods are not removed when upgrading Migration Toolkit for Containers 1.7.x to Migration Toolkit for Containers 1.8 On upgrading the Migration Toolkit for Containers (MTC) operator from 1.7.x to 1.8.x, the old restic pods are not removed. After the upgrade, both restic and node-agent pods are visible in the namespace. ( BZ#2236829 ) 2.2.7. Migration Toolkit for Containers 1.7.11 release notes 2.2.7.1. Resolved issues There are no major resolved issues in this release. 2.2.7.2. Known issues There are no known issues in this release. 2.2.8. Migration Toolkit for Containers 1.7.10 release notes 2.2.8.1. Resolved issues This release has the following major resolved issue: Adjust rsync options in DVM In this release, you can prevent absolute symlinks from being manipulated by Rsync in the course of direct volume migration (DVM). Running DVM in privileged mode preserves absolute symlinks inside the persistent volume claims (PVCs). To switch to privileged mode, in the MigrationController CR, set the migration_rsync_privileged spec to true . ( BZ#2204461 ) 2.2.8.2. Known issues There are no known issues in this release. 2.2.9. Migration Toolkit for Containers 1.7.9 release notes 2.2.9.1. Resolved issues There are no major resolved issues in this release. 2.2.9.2. Known issues This release has the following known issue: Adjust rsync options in DVM In this release, users are unable to prevent absolute symlinks from being manipulated by rsync during direct volume migration (DVM). ( BZ#2204461 ) 2.2.10. Migration Toolkit for Containers 1.7.8 release notes 2.2.10.1. 
Resolved issues This release has the following major resolved issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). ( BZ#2143389 ) Adding a MigCluster from the UI fails when the domain name has more than six characters In releases, adding a MigCluster from the UI failed when the domain name had more than six characters. The UI code expected a domain name of between two and six characters. ( BZ#2152149 ) UI fails to render the Migrations' page: Cannot read properties of undefined (reading 'name') In releases, the UI failed to render the Migrations' page, returning Cannot read properties of undefined (reading 'name') . ( BZ#2163485 ) Creating DPA resource fails on Red Hat OpenShift Container Platform 4.6 clusters In releases, when deploying MTC on an OpenShift Container Platform 4.6 cluster, the DPA failed to be created according to the logs, which resulted in some pods missing. The logs in the migration-controller in the OpenShift Container Platform 4.6 cluster indicated that an unexpected null value was passed, which caused the error. ( BZ#2173742 ) 2.2.10.2. Known issues There are no known issues in this release. 2.2.11. Migration Toolkit for Containers 1.7.7 release notes 2.2.11.1. Resolved issues There are no major resolved issues in this release. 2.2.11.2. Known issues There are no known issues in this release. 2.2.12. Migration Toolkit for Containers 1.7.6 release notes 2.2.12.1. New features Implement proposed changes for DVM support with PSA in Red Hat OpenShift Container Platform 4.12 With the enforcement of Pod Security Admission (PSA) in OpenShift Container Platform 4.12, the default pod runs with a restricted profile. This restricted profile means that workloads to be migrated would be in violation of this policy and would no longer work. The following enhancement outlines the changes that are required to remain compatible with OCP 4.12. ( MIG-1240 ) 2.2.12.2. Resolved issues This release has the following major resolved issues: Unable to create Storage Class Conversion plan due to missing cronjob error in Red Hat OpenShift Platform 4.12 In releases, on the persistent volumes page, an error is thrown that a CronJob is not available in version batch/v1beta1 , and when clicking on cancel, the migplan is created with status Not ready . ( BZ#2143628 ) 2.2.12.3. Known issues This release has the following known issue: Conflict conditions are cleared briefly after they are created When creating a new state migration plan that will result in a conflict error, that error is cleared shortly after it is displayed. ( BZ#2144299 ) 2.2.13. Migration Toolkit for Containers 1.7.5 release notes 2.2.13.1. Resolved issues This release has the following major resolved issue: Direct Volume Migration is failing as rsync pod on source cluster moves into Error state In releases, migration succeeded with warnings but Direct Volume Migration failed with the rsync pod on the source namespace going into an error state. ( BZ#2132978 ) 2.2.13.2. Known issues This release has the following known issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR).
( BZ#2143389 ) When editing a MigHook in the UI, the page might fail to reload The UI might fail to reload when editing a hook if there is a network connection issue. After the network connection is restored, the page will fail to reload until the cache is cleared. ( BZ#2140208 ) 2.2.14. Migration Toolkit for Containers 1.7.4 release notes 2.2.14.1. Resolved issues There are no major resolved issues in this release. 2.2.14.2. Known issues Rollback missing out deletion of some resources from the target cluster On performing the roll back of an application from the Migration Toolkit for Containers (MTC) UI, some resources are not being deleted from the target cluster and the roll back is showing a status as successfully completed. ( BZ#2126880 ) 2.2.15. Migration Toolkit for Containers 1.7.3 release notes 2.2.15.1. Resolved issues This release has the following major resolved issues: Correct DNS validation for destination namespace In releases, the MigPlan could not be validated if the destination namespace started with a non-alphabetic character. ( BZ#2102231 ) Deselecting all PVCs from UI still results in an attempted PVC transfer In releases, while doing a full migration, unselecting the persistent volume claims (PVCs) would not skip selecting the PVCs and still try to migrate them. ( BZ#2106073 ) Incorrect DNS validation for destination namespace In releases, MigPlan could not be validated because the destination namespace started with a non-alphabetic character. ( BZ#2102231 ) 2.2.15.2. Known issues There are no known issues in this release. 2.2.16. Migration Toolkit for Containers 1.7.2 release notes 2.2.16.1. Resolved issues This release has the following major resolved issues: MTC UI does not display logs correctly In releases, the Migration Toolkit for Containers (MTC) UI did not display logs correctly. ( BZ#2062266 ) StorageClass conversion plan adding migstorage reference in migplan In releases, StorageClass conversion plans had a migstorage reference even though it was not being used. ( BZ#2078459 ) Velero pod log missing from downloaded logs In releases, when downloading a compressed (.zip) folder for all logs, the velero pod was missing. ( BZ#2076599 ) Velero pod log missing from UI drop down In releases, after a migration was performed, the velero pod log was not included in the logs provided in the dropdown list. ( BZ#2076593 ) Rsync options logs not visible in log-reader pod In releases, when trying to set any valid or invalid rsync options in the migrationcontroller , the log-reader was not showing any logs regarding the invalid options or about the rsync command being used. ( BZ#2079252 ) Default CPU requests on Velero/Restic are too demanding and fail in certain environments In releases, the default CPU requests on Velero/Restic were too demanding and fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values were high. ( BZ#2088022 ) 2.2.16.2. Known issues This release has the following known issues: Updating the replication repository to a different storage provider type is not respected by the UI After updating the replication repository to a different type and clicking Update Repository , it shows connection successful, but the UI is not updated with the correct details. When clicking on the Edit button again, it still shows the old replication repository information. Furthermore, when trying to update the replication repository again, it still shows the old replication details. 
When selecting the new repository, it also shows all the information you entered previously and the Update repository button is not enabled, as if there are no changes to be submitted. ( BZ#2102020 ) Migration fails because the backup is not found Migration fails at the restore stage because the initial backup has not been found. ( BZ#2104874 ) Update Cluster button is not enabled when updating Azure resource group When updating the remote cluster, selecting the Azure resource group checkbox, and adding a resource group does not enable the Update cluster option. ( BZ#2098594 ) Error pop-up in UI on deleting migstorage resource When creating a backupStorage credential secret in OpenShift Container Platform, if the migstorage is removed from the UI, a 404 error is returned and the underlying secret is not removed. ( BZ#2100828 ) Miganalytic resource displaying resource count as 0 in UI After creating a migplan from the backend, the Miganalytic resource displays the resource count as 0 in the UI. ( BZ#2102139 ) Registry validation fails when two trailing slashes are added to the Exposed route host to image registry After adding two trailing slashes, meaning // , to the exposed registry route, the MigCluster resource shows the status as connected . When creating a migplan from the backend with DIM, the plans move to the unready status. ( BZ#2104864 ) Service Account Token not visible while editing source cluster When editing the source cluster that has been added and is in Connected state, in the UI, the service account token is not visible in the field. To save the wizard, you have to fetch the token again and provide details inside the field. ( BZ#2097668 ) 2.2.17. Migration Toolkit for Containers 1.7.1 release notes 2.2.17.1. Resolved issues There are no major resolved issues in this release. 2.2.17.2. Known issues This release has the following known issues: Incorrect DNS validation for destination namespace MigPlan cannot be validated because the destination namespace starts with a non-alphabetic character. ( BZ#2102231 ) Cloud propagation phase in migration controller is not functioning due to missing labels on Velero pods The Cloud propagation phase in the migration controller is not functioning due to missing labels on Velero pods. The EnsureCloudSecretPropagated phase in the migration controller waits until replication repository secrets are propagated on both sides. As this label is missing on Velero pods, the phase is not functioning as expected. ( BZ#2088026 ) Default CPU requests on Velero/Restic are too demanding, causing scheduling to fail in certain environments Default CPU requests on Velero/Restic are too demanding and can cause scheduling to fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values are high. The resources can be configured in DPA using the podConfig field for Velero and Restic. The Migration operator should set CPU requests to a lower value, such as 100m, so that Velero and Restic pods can be scheduled in the resource-constrained environments that Migration Toolkit for Containers (MTC) often operates in. ( BZ#2088022 ) Warning is displayed on persistentVolumes page after editing storage class conversion plan A warning is displayed on the persistentVolumes page after editing the storage class conversion plan. When editing the existing migration plan, a warning is displayed in the UI: At least one PVC must be selected for Storage Class Conversion .
( BZ#2079549 ) Velero pod log missing from downloaded logs When downloading a compressed (.zip) folder for all logs, the velero pod is missing. ( BZ#2076599 ) Velero pod log missing from UI drop down After a migration is performed, the velero pod log is not included in the logs provided in the dropdown list. ( BZ#2076593 ) 2.2.18. Migration Toolkit for Containers 1.7.0 release notes 2.2.18.1. New features and enhancements This release has the following new features and enhancements: The Migration Toolkit for Containers (MTC) Operator now depends upon the OpenShift API for Data Protection (OADP) Operator. When you install the MTC Operator, the Operator Lifecycle Manager (OLM) automatically installs the OADP Operator in the same namespace. You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters by using the crane tunnel-api command. Converting storage classes in the MTC web console: You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. 2.2.18.2. Known issues This release has the following known issues: MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Direct and indirect data transfers do not work if the destination storage is a PV that is dynamically provisioned by the AWS Elastic File System (EFS). This is due to limitations of the AWS EFS Container Storage Interface (CSI) driver. ( BZ#2085097 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . MTC 1.7.6 cannot migrate cron jobs from source clusters that support v1beta1 cron jobs to clusters of OpenShift Container Platform 4.12 and later, which do not support v1beta1 cron jobs. ( BZ#2149119 ) 2.3. Migration Toolkit for Containers 1.6 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.13 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.3.1. Migration Toolkit for Containers 1.6 release notes 2.3.1.1. New features and enhancements This release has the following new features and enhancements: State migration: You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). "New operator version available" notification: The Clusters page of the MTC web console displays a notification when a new Migration Toolkit for Containers Operator is available. 2.3.1.2. Deprecated features The following features are deprecated: MTC version 1.4 is no longer supported. 2.3.1.3. Known issues This release has the following known issues: On OpenShift Container Platform 3.10, the MigrationController pod takes too long to restart. The Bugzilla report contains a workaround. ( BZ#1986796 ) Stage pods fail during direct volume migration from a classic OpenShift Container Platform source cluster on IBM Cloud. 
The IBM block storage plugin does not allow the same volume to be mounted on multiple pods of the same node. As a result, the PVCs cannot be mounted on the Rsync pods and on the application pods simultaneously. To resolve this issue, stop the application pods before migration. ( BZ#1887526 ) MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . 2.4. Migration Toolkit for Containers 1.5 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.13 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.4.1. Migration Toolkit for Containers 1.5 release notes 2.4.1.1. New features and enhancements This release has the following new features and enhancements: The Migration resource tree on the Migration details page of the web console has been enhanced with additional resources, Kubernetes events, and live status information for monitoring and debugging migrations. The web console can support hundreds of migration plans. A source namespace can be mapped to a different target namespace in a migration plan. Previously, the source namespace was mapped to a target namespace with the same name. Hook phases with status information are displayed in the web console during a migration. The number of Rsync retry attempts is displayed in the web console during direct volume migration. Persistent volume (PV) resizing can be enabled for direct volume migration to ensure that the target cluster does not run out of disk space. The threshold that triggers PV resizing is configurable. Previously, PV resizing occurred when the disk usage exceeded 97%. Velero has been updated to version 1.6, which provides numerous fixes and enhancements. Cached Kubernetes clients can be enabled to provide improved performance. 2.4.1.2. Deprecated features The following features are deprecated: MTC versions 1.2 and 1.3 are no longer supported. The procedure for updating deprecated APIs has been removed from the troubleshooting section of the documentation because the oc convert command is deprecated. 2.4.1.3. Known issues This release has the following known issues: Microsoft Azure storage is unavailable if you create more than 400 migration plans. The MigStorage custom resource displays the following message: The request is being throttled as the limit has been reached for operation type . ( BZ#1977226 ) If a migration fails, the migration plan does not retain custom persistent volume (PV) settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) PV resizing does not work as expected for AWS gp2 storage unless the pv_resizing_threshold is 42% or greater. 
( BZ#1973148 ) PV resizing does not work with OpenShift Container Platform 3.7 and 3.9 source clusters in the following scenarios: The application was installed after MTC was installed. An application pod was rescheduled on a different node after MTC was installed. OpenShift Container Platform 3.7 and 3.9 do not support the Mount Propagation feature that enables Velero to mount PVs automatically in the Restic pod. The MigAnalytic custom resource (CR) fails to collect PV data from the Restic pod and reports the resources as 0 . The MigPlan CR displays a status similar to the following: Example output status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: "True" type: ExtendedPVAnalysisFailed To enable PV resizing, you can manually restart the Restic daemonset on the source cluster or restart the Restic pods on the same nodes as the application. If you do not restart Restic, you can run the direct volume migration without PV resizing. ( BZ#1982729 ) 2.4.1.4. Technical changes This release has the following technical changes: The legacy Migration Toolkit for Containers Operator version 1.5.1 is installed manually on OpenShift Container Platform versions 3.7 to 4.5. The Migration Toolkit for Containers Operator version 1.5.1 is installed on OpenShift Container Platform versions 4.6 and later by using the Operator Lifecycle Manager.
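A practical note on the PV resizing known issue above ( BZ#1982729 ): the documented workaround is to restart the Restic daemonset on the source cluster. The following is only a minimal sketch of that restart, assuming the default openshift-migration namespace and the name=restic pod label that the Velero Restic daemonset normally applies; verify both values on your cluster before running the commands.

$ oc -n openshift-migration get daemonset restic
$ oc -n openshift-migration delete pods -l name=restic
$ oc -n openshift-migration get pods -l name=restic -w

Deleting the pods causes the daemonset controller to recreate them on the same nodes, which serves as a restart even on the OpenShift Container Platform 3.7 and 3.9 source clusters described in the known issue.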
Chapter 6. Installing a cluster on GCP in a restricted network In OpenShift Container Platform 4.16, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . 6.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). 
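Before working through the procedure, it can help to see where the restricted-network customizations end up: they are all additions to the install-config.yaml file. The following fragment is only an orientation sketch with placeholder values; the mirror host name, credentials, certificate contents, VPC name, and subnet names are assumptions that you replace with your own, and the fragment is not a complete install-config.yaml file. The steps below explain each field, and a complete sample file appears later in this section.

platform:
  gcp:
    network: <existing_vpc>
    controlPlaneSubnet: <control_plane_subnet>
    computeSubnet: <compute_subnet>
publish: Internal
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "<email_address>"}}}'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <mirror_registry_ca_certificate>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release

Of these, publish: Internal is optional; the other fields are required for the cluster to pull release content from your mirror registry and to use your existing VPC.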
Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. 
For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 6.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D Tau T2D 6.5.3. 
Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 6.2. Machine series for 64-bit ARM machines Tau T2A 6.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 6.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 6.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 6.5.7. 
Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 15 17 18 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 24 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 27 Provide the contents of the certificate file that you used for your mirror registry. 28 Provide the imageContentSources section from the output of the command to mirror the repository. 6.5.8. 
Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 6.5.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 6.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 6.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 6.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 6.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.10. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 6.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.12. Next steps Validate an installation. Customize your cluster. Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks. If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores. If necessary, you can opt out of remote health reporting. If necessary, see Registering your disconnected cluster.
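A quick way to confirm the result of disabling the default OperatorHub sources (section 6.10), or to re-check it later, is to list the CatalogSource objects in the openshift-marketplace namespace. This is a minimal sketch, assuming you are logged in with the exported kubeadmin kubeconfig; after the patch is applied, the default Red Hat and community sources should no longer appear:

$ oc get catalogsources -n openshift-marketplace

Any custom catalog sources that you created for your mirror registry remain in the list.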
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", 
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_gcp/installing-restricted-networks-gcp-installer-provisioned
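The verification step in section 6.7.2.2 mentions querying GCP to confirm that the IAM service accounts created by ccoctl exist. A minimal sketch with the gcloud CLI, assuming it is installed and authenticated against the same project; the <name> and <gcp_project_id> placeholders match the values passed to ccoctl gcp create-all:

$ gcloud iam service-accounts list --project <gcp_project_id> | grep <name>

Each component that requested credentials should appear as a service account whose display name starts with the <name> prefix.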
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.3/providing-direct-documentation-feedback_openjdk
8.3.5. Searching For and Viewing Denials
8.3.5. Searching For and Viewing Denials This section assumes the setroubleshoot , setroubleshoot-server , dbus and audit packages are installed, and that the auditd , rsyslogd , and setroubleshootd daemons are running. Refer to Section 5.2, "Which Log File is Used" for information about starting these daemons. A number of tools are available for searching for and viewing SELinux denials, such as ausearch , aureport , and sealert . ausearch The audit package provides the ausearch utility. From the ausearch (8) manual page: " ausearch is a tool that can query the audit daemon logs for events based on different search criteria" [13] . The ausearch utility accesses /var/log/audit/audit.log , and as such, must be run as the Linux root user: Searching For Command all denials ausearch -m avc denials for that today ausearch -m avc -ts today denials from the last 10 minutes ausearch -m avc -ts recent To search for SELinux denials for a particular service, use the -c comm-name option, where comm-name "is the executable's name" [14] , for example, httpd for the Apache HTTP Server, and smbd for Samba: With each ausearch command, it is advised to use either the --interpret ( -i ) option for easier readability, or the --raw ( -r ) option for script processing. Refer to the ausearch (8) manual page for further ausearch options. aureport The audit package provides the aureport utility. From the aureport (8) manual page: " aureport is a tool that produces summary reports of the audit system logs" [15] . The aureport utility accesses /var/log/audit/audit.log , and as such, must be run as the Linux root user. To view a list of SELinux denials and how often each one occurred, run the aureport -a command. The following is example output that includes two denials: Refer to the aureport (8) manual page for further aureport options. sealert The setroubleshoot-server package provides the sealert utility, which reads denial messages translated by setroubleshoot-server . Denials are assigned IDs, as seen in /var/log/messages . The following is an example denial from messages : In this example, the denial ID is 8c123656-5dda-4e5d-8791-9e3bd03786b7 . The -l option takes an ID as an argument. Running the sealert -l 8c123656-5dda-4e5d-8791-9e3bd03786b7 command presents a detailed analysis of why SELinux denied access, and a possible solution for allowing access. If you are running the X Window System, have the setroubleshoot and setroubleshoot-server packages installed, and the setroubleshootd , dbus and auditd daemons are running, a warning is displayed when access is denied by SELinux: Clicking on Show launches the sealert GUI, which allows you to troubleshoot the problem: Alternatively, run the sealert -b command to launch the sealert GUI. To view a detailed analysis of all denial messages, run the sealert -l \* command. See the sealert (8) manual page for further sealert options. [13] From the ausearch (8) manual page, as shipped with the audit package in Red Hat Enterprise Linux 6. [14] From the ausearch (8) manual page, as shipped with the audit package in Red Hat Enterprise Linux 6. [15] From the aureport (8) manual page, as shipped with the audit package in Red Hat Enterprise Linux 6.
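The ausearch and aureport options above can be combined. For example, to review today's denials for the Apache HTTP Server in human-readable form and then summarize them, you could run the following as the Linux root user (a sketch; aureport accepts the same -ts time filter as ausearch):

~]# ausearch -m avc -ts today -c httpd -i
~]# aureport -a -ts today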
[ "~]# ausearch -m avc -c httpd", "~]# ausearch -m avc -c smbd", "~]# aureport -a AVC Report ======================================================== date time comm subj syscall class permission obj event ======================================================== 1. 05/01/2009 21:41:39 httpd unconfined_u:system_r:httpd_t:s0 195 file getattr system_u:object_r:samba_share_t:s0 denied 2 2. 05/03/2009 22:00:25 vsftpd unconfined_u:system_r:ftpd_t:s0 5 file read unconfined_u:object_r:cifs_t:s0 denied 4", "setroubleshoot: SELinux is preventing /usr/sbin/httpd from name_bind access on the tcp_socket. For complete SELinux messages. run sealert -l 8c123656-5dda-4e5d-8791-9e3bd03786b7" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-fixing_problems-searching_for_and_viewing_denials
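If you want setroubleshoot to analyze every denial in the audit log at once, rather than looking up a single ID from /var/log/messages, sealert can also read the log file directly. Run it as the Linux root user; this assumes the default audit log location:

~]# sealert -a /var/log/audit/audit.log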
Chapter 21. Monitoring containers
Chapter 21. Monitoring containers Use Podman commands to manage a Podman environment. With that, you can determine the health of the container, by displaying system and pod information, and monitoring Podman events. 21.1. Using a health check on a container You can use the health check to determine the health or readiness of the process running inside the container. If the health check succeeds, the container is marked as "healthy"; otherwise, it is "unhealthy". You can compare a health check with running the podman exec command and examining the exit code. The zero exit value means that the container is "healthy". Health checks can be set when building an image using the HEALTHCHECK instruction in the Containerfile or when creating the container on the command line. You can display the health-check status of a container using the podman inspect or podman ps commands. A health check consists of six basic components: Command Retries Interval Start-period Timeout Container recovery The description of health check components follows: Command ( --health-cmd option) Podman executes the command inside the target container and waits for the exit code. The other five components are related to the scheduling of the health check and they are optional. Retries ( --health-retries option) Defines the number of consecutive failed health checks that need to occur before the container is marked as "unhealthy". A successful health check resets the retry counter. Interval ( --health-interval option) Describes the time between running the health check command. Note that small intervals cause your system to spend a lot of time running health checks. The large intervals cause struggles with catching time outs. Start-period ( --health-start-period option) Describes the time between when the container starts and when you want to ignore health check failures. Timeout ( --health-timeout option) Describes the period of time the health check must complete before being considered unsuccessful. Note The values of the Retries, Interval, and Start-period components are time durations, for example "30s" or "1h15m". Valid time units are "ns," "us," or "ms", "ms," "s," "m," and "h". Container recovery ( --health-on-failure option) Determines which actions to perform when the status of a container is unhealthy. When the application fails, Podman restarts it automatically to provide robustness. The --health-on-failure option supports four actions: none : Take no action, this is the default action. kill : Kill the container. restart : Restart the container. stop : Stop the container. Note The --health-on-failure option is available in Podman version 4.2 and later. Warning Do not combine the restart action with the --restart option. When running inside of a systemd unit, consider using the kill or stop action instead, to make use of systemd restart policy. Health checks run inside the container. Health checks only make sense if you know what the health state of the service is and can differentiate between a successful and unsuccessful health check. Additional resources podman-healthcheck and podman-run man pages on your system Podman at the edge: Keeping services alive with custom healthcheck actions Monitoring container vitality and availability with Podman 21.2. Performing a health check using the command line You can set a health check when creating the container on the command line. Prerequisites The container-tools meta-package is installed. 
Procedure Define a health check: The --health-cmd option sets a health check command for the container. The --health-interval=0 option with 0 value indicates that you want to run the health check manually. Check the health status of the hc-container container: Using the podman inspect command: Using the podman ps command: Using the podman healthcheck run command: Additional resources podman-healthcheck and podman-run man pages on your system Podman at the edge: Keeping services alive with custom healthcheck actions Monitoring container vitality and availability with Podman 21.3. Performing a health check using a Containerfile You can set a health check by using the HEALTHCHECK instruction in the Containerfile . Prerequisites The container-tools meta-package is installed. Procedure Create a Containerfile : Note The HEALTHCHECK instruction is supported only for the docker image format. For the oci image format, the instruction is ignored. Build the container and add an image name: Run the container: Check the health status of the hc-container container: Using the podman inspect command: Using the podman ps command: Using the podman healthcheck run command: Additional resources podman-healthcheck and podman-run man pages on your system Podman at the edge: Keeping services alive with custom healthcheck actions Monitoring container vitality and availability with Podman 21.4. Displaying Podman system information The podman system command enables you to manage the Podman systems by displaying system information. Prerequisites The container-tools meta-package is installed. Procedure Display Podman system information: To show Podman disk usage, enter: To show detailed information about space usage, enter: To display information about the host, current storage stats, and build of Podman, enter: To remove all unused containers, images and volume data, enter: The podman system prune command removes all unused containers (both dangling and unreferenced), pods and optionally, volumes from local storage. Use the --all option to delete all unused images. Unused images are dangling images and any image that does not have any containers based on it. Use the --volume option to prune volumes. By default, volumes are not removed to prevent important data from being deleted if there is currently no container using the volume. Additional resources podman-system-df , podman-system-info , and podman-system-prune man pages on your system 21.5. Podman event types You can monitor events that occur in Podman. Several event types exist and each event type reports different statuses. The container event type reports the following statuses: attach checkpoint cleanup commit create exec export import init kill mount pause prune remove restart restore start stop sync unmount unpause The pod event type reports the following statuses: create kill pause remove start stop unpause The image event type reports the following statuses: prune push pull save remove tag untag The system type reports the following statuses: refresh renumber The volume type reports the following statuses: create prune remove Additional resources podman-events man page on your system 21.6. Monitoring Podman events You can monitor and print events that occur in Podman using the podman events command. Each event will include a timestamp, a type, a status, name, if applicable, and image, if applicable. Prerequisites The container-tools meta-package is installed. 
Procedure Run the myubi container: Display the Podman events: To display all Podman events, enter: The --stream=false option ensures that the podman events command exits when reading the last known event. You can see several events that happened when you enter the podman run command: container create when creating a new container. image pull when pulling an image if the container image is not present in the local storage. container init when initializing the container in the runtime and setting a network. container start when starting the container. container attach when attaching to the terminal of a container. That is because the container runs in the foreground. container died is emitted when the container exits. container remove because the --rm flag was used to remove the container after it exits. You can also use the journalctl command to display Podman events: To show only Podman create events, enter: You can also use the journalctl command to display Podman create events: Additional resources podman-events man page on your system Container Events and Auditing 21.7. Using Podman events for auditing Previously, the events had to be connected to an event to interpret them correctly. For example, the container-create event had to be linked with an image-pull event to know which image had been used. The container-create event also did not include all data, for example, the security settings, volumes, mounts, and so on. Beginning with Podman v4.4, you can gather all relevant information about a container directly from a single event and journald entry. The data is in JSON format, the same as from the podman container inspect command and includes all configuration and security settings of a container. You can configure Podman to attach the container-inspect data for auditing purposes. Prerequisites The container-tools meta-package is installed. Procedure Modify the ~/.config/containers/containers.conf file and add the events_container_create_inspect_data=true option to the [engine] section: For the system-wide configuration, modify the /etc/containers/containers.conf or /usr/share/container/containers.conf file. Create the container: Display the Podman events: Using the podman events command: The --format "{{.ContainerInspectData}}" option displays the inspect data. The jq ".Config.CreateCommand" transforms the JSON data into a more readable format and displays the parameters for the podman create command. Using the journalctl command: The output data for the podman events and journalctl commands are the same. Additional resources podman-events and containers.conf man pages on your system Container Events and Auditing
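Putting the scheduling options from section 21.1 together, a health check that probes every 30 seconds, tolerates three consecutive failures, and then restarts the container might look like the following sketch. The interval, retry, timeout, and start-period values are illustrative, and --health-on-failure requires Podman 4.2 or later:

# probe the web endpoint every 30s; after 3 consecutive failures, restart the container
$ podman run -dt --name=hc-container -p 8080:8080 \
    --health-cmd='curl -f http://localhost:8080 || exit 1' \
    --health-interval=30s \
    --health-retries=3 \
    --health-timeout=10s \
    --health-start-period=15s \
    --health-on-failure=restart \
    registry.access.redhat.com/ubi8/httpd-24

Because the example relies on --health-on-failure=restart, it deliberately omits the --restart option.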
[ "podman run -dt --name=hc-container -p 8080:8080 --health-cmd='curl http://localhost:8080 || exit 1' --health-interval=0 registry.access.redhat.com/ubi8/httpd-24", "podman inspect --format='{{json .State.Health.Status}}' hc-container healthy", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a680c6919fe localhost/hc-container:latest /usr/bin/run-http... 2 minutes ago Up 2 minutes (healthy) hc-container", "podman healthcheck run hc-container healthy", "cat Containerfile FROM registry.access.redhat.com/ubi8/httpd-24 EXPOSE 8080 HEALTHCHECK CMD curl http://localhost:8080 || exit 1", "podman build --format=docker -t hc-container . STEP 1/3: FROM registry.access.redhat.com/ubi8/httpd-24 STEP 2/3: EXPOSE 8080 --> 5aea97430fd STEP 3/3: HEALTHCHECK CMD curl http://localhost:8080 || exit 1 COMMIT health-check Successfully tagged localhost/health-check:latest a680c6919fe6bf1a79219a1b3d6216550d5a8f83570c36d0dadfee1bb74b924e", "podman run -dt --name=hc-container localhost/hc-container", "podman inspect --format='{{json .State.Health.Status}}' hc-container healthy", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a680c6919fe localhost/hc-container:latest /usr/bin/run-http... 2 minutes ago Up 2 minutes (healthy) hc-container", "podman healthcheck run hc-container healthy", "podman system df TYPE TOTAL ACTIVE SIZE RECLAIMABLE Images 3 2 1.085GB 233.4MB (0%) Containers 2 0 28.17kB 28.17kB (100%) Local Volumes 3 0 0B 0B (0%)", "podman system df -v Images space usage: REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS registry.access.redhat.com/ubi9 latest b1e63aaae5cf 13 days 233.4MB 233.4MB 0B 0 registry.access.redhat.com/ubi9/httpd-24 latest 0d04740850e8 13 days 461.5MB 0B 461.5MB 1 registry.redhat.io/rhel8/podman latest dce10f591a2d 13 days 390.6MB 233.4MB 157.2MB 1 Containers space usage: CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES 311180ab99fb 0d04740850e8 /usr/bin/run-httpd 0 28.17kB 16 hours exited hc1 bedb6c287ed6 dce10f591a2d podman run ubi9 echo hello 0 0B 11 hours configured dazzling_tu Local Volumes space usage: VOLUME NAME LINKS SIZE 76de0efa83a3dae1a388b9e9e67161d28187e093955df185ea228ad0b3e435d0 0 0B 8a1b4658aecc9ff38711a2c7f2da6de192c5b1e753bb7e3b25e9bf3bb7da8b13 0 0B d9cab4f6ccbcf2ac3cd750d2efff9d2b0f29411d430a119210dd242e8be20e26 0 0B", "podman system info host: arch: amd64 buildahVersion: 1.22.3 cgroupControllers: [] cgroupManager: cgroupfs cgroupVersion: v1 conmon: package: conmon-2.0.29-1.module+el8.5.0+12381+e822eb26.x86_64 path: /usr/bin/conmon version: 'conmon version 2.0.29, commit: 7d0fa63455025991c2fc641da85922fde889c91b' cpus: 2 distribution: distribution: '\"rhel\"' version: \"8.5\" eventLogger: file hostname: localhost.localdomain idMappings: gidmap: - container_id: 0 host_id: 1000 size: 1 - container_id: 1 host_id: 100000 size: 65536 uidmap: - container_id: 0 host_id: 1000 size: 1 - container_id: 1 host_id: 100000 size: 65536 kernel: 4.18.0-323.el8.x86_64 linkmode: dynamic memFree: 352288768 memTotal: 2819129344 ociRuntime: name: runc package: runc-1.0.2-1.module+el8.5.0+12381+e822eb26.x86_64 path: /usr/bin/runc version: |- runc version 1.0.2 spec: 1.0.2-dev go: go1.16.7 libseccomp: 2.5.1 os: linux remoteSocket: path: /run/user/1000/podman/podman.sock security: apparmorEnabled: false capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT rootless: true seccompEnabled: true 
seccompProfilePath: /usr/share/containers/seccomp.json selinuxEnabled: true serviceIsRemote: false slirp4netns: executable: /usr/bin/slirp4netns package: slirp4netns-1.1.8-1.module+el8.5.0+12381+e822eb26.x86_64 version: |- slirp4netns version 1.1.8 commit: d361001f495417b880f20329121e3aa431a8f90f libslirp: 4.4.0 SLIRP_CONFIG_VERSION_MAX: 3 libseccomp: 2.5.1 swapFree: 3113668608 swapTotal: 3124752384 uptime: 11h 24m 12.52s (Approximately 0.46 days) registries: search: - registry.fedoraproject.org - registry.access.redhat.com - registry.centos.org - docker.io store: configFile: /home/user/.config/containers/storage.conf containerStore: number: 2 paused: 0 running: 0 stopped: 2 graphDriverName: overlay graphOptions: overlay.mount_program: Executable: /usr/bin/fuse-overlayfs Package: fuse-overlayfs-1.7.1-1.module+el8.5.0+12381+e822eb26.x86_64 Version: |- fusermount3 version: 3.2.1 fuse-overlayfs: version 1.7.1 FUSE library version 3.2.1 using FUSE kernel interface version 7.26 graphRoot: /home/user/.local/share/containers/storage graphStatus: Backing Filesystem: xfs Native Overlay Diff: \"false\" Supports d_type: \"true\" Using metacopy: \"false\" imageStore: number: 3 runRoot: /run/user/1000/containers volumePath: /home/user/.local/share/containers/storage/volumes version: APIVersion: 3.3.1 Built: 1630360721 BuiltTime: Mon Aug 30 23:58:41 2021 GitCommit: \"\" GoVersion: go1.16.7 OsArch: linux/amd64 Version: 3.3.1", "podman system prune WARNING! This will remove: - all stopped containers - all stopped pods - all dangling images - all build cache Are you sure you want to continue? [y/N] y", "podman run -q --rm --name=myubi registry.access.redhat.com/ubi8/ubi:latest", "now=USD(date --iso-8601=seconds) podman events --since=now --stream=false 2023-03-08 14:27:20.696167362 +0100 CET container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi,...) 2023-03-08 14:27:20.652325082 +0100 CET image pull registry.access.redhat.com/ubi8/ubi:latest 2023-03-08 14:27:20.795695396 +0100 CET container init d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 2023-03-08 14:27:20.809205161 +0100 CET container start d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 2023-03-08 14:27:20.809903022 +0100 CET container attach d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 2023-03-08 14:27:20.831710446 +0100 CET container died d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 
2023-03-08 14:27:20.913786892 +0100 CET container remove d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...)", "journalctl --user -r SYSLOG_IDENTIFIER=podman Mar 08 14:27:20 fedora podman[129324]: 2023-03-08 14:27:20.913786892 +0100 CET m=+0.066920979 container remove Mar 08 14:27:20 fedora podman[129289]: 2023-03-08 14:27:20.696167362 +0100 CET m=+0.079089208 container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72f>", "podman events --filter event=create 2023-03-08 14:27:20.696167362 +0100 CET container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi,...)", "journalctl --user -r PODMAN_EVENT=create Mar 08 14:27:20 fedora podman[129289]: 2023-03-08 14:27:20.696167362 +0100 CET m=+0.079089208 container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72f>", "cat ~/.config/containers/containers.conf [engine] events_container_create_inspect_data=true", "podman create registry.access.redhat.com/ubi8/ubi:latest 19524fe3c145df32d4f0c9af83e7964e4fb79fc4c397c514192d9d7620a36cd3", "now=USD(date --iso-8601=seconds) podman events --since USDnow --stream=false --format \"{{.ContainerInspectData}}\" | jq \".Config.CreateCommand\" [ \"/usr/bin/podman\", \"create\", \"registry.access.redhat.com/ubi8\" ]", "journalctl --user -r PODMAN_EVENT=create --all -o json | jq \".PODMAN_CONTAINER_INSPECT_DATA | fromjson\" | jq \".Config.CreateCommand\" [ \"/usr/bin/podman\", \"create\", \"registry.access.redhat.com/ubi8\" ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/assembly_monitoring-containers
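The event filters shown above can also be combined with a relative time window. For example, to list only containers that exited during the last hour, reusing the same date pattern as in section 21.6 (a sketch):

$ podman events --stream=false \
    --since "$(date --iso-8601=seconds --date='1 hour ago')" \
    --filter type=container \
    --filter event=died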
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments, along with out-of-the-box support for proxy environments. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process for your environment based on your requirements: Deploy using dynamic storage devices Deploy standalone Multicloud Object Gateway component
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_amazon_web_services/preface-aws
Chapter 10. Live migration
Chapter 10. Live migration 10.1. About live migration Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 10.1.1. Live migration requirements Live migration has the following requirements: The cluster must have shared storage with ReadWriteMany (RWX) access mode. The cluster must have sufficient RAM and network bandwidth. Note You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation: The default number of migrations that can run in parallel in the cluster is 5. If a VM uses a host model CPU, the nodes must support the CPU. Configuring a dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration. 10.1.2. Common live migration tasks You can perform the following live migration tasks: Configure live migration settings: Limits and timeouts Maximum number of migrations per node or cluster Select a dedicated live migration network from existing networks Initiate and cancel live migration Monitor the progress of all live migrations View VM migration metrics 10.1.3. Additional resources Prometheus queries for live migration VM migration tuning VM run strategies VM and cluster eviction strategies 10.2. Configuring live migration You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster. You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs). 10.2.1. Live migration settings You can configure the following live migration settings: Limits and timeouts Maximum number of migrations per node or cluster 10.2.1.1. Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary live migration parameters: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 1 Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. Default: 0 , which is unlimited. 2 The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a VM with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 3 Number of migrations running in parallel in the cluster. Default: 5 . 4 Maximum number of outbound migrations per node. Default: 2 . 5 The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: 150 . Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. 
For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 10.2.2. Live migration policies You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels. Tip You can create live migration policies by using the web console . 10.2.2.1. Creating a live migration policy by using the command line You can create a live migration policy by using the command line. A live migration policy is applied to selected virtual machines (VMs) by using any combination of labels: VM labels such as size , os , or gpu Project labels such as priority , bandwidth , or hpc-workload For the policy to apply to a specific group of VMs, all labels on the group of VMs must match the labels of the policy. Note If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence. If multiple policies meet this criteria, the policies are sorted by alphabetical order of the matching label keys, and the first one in that order takes precedence. Procedure Create a MigrationPolicy object as in the following example: apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: "True" xyz-workloads-type: "" virtualMachineInstanceSelector: 2 workload-type: "db" operating-system: "" 1 Specify project labels. 2 Specify VM labels. Create the migration policy by running the following command: USD oc create -f <migration_policy>.yaml 10.2.3. Additional resources Configuring a dedicated Multus network for live migration 10.3. Initiating and canceling live migration You can initiate the live migration of a virtual machine (VM) to another node by using the OpenShift Container Platform web console or the command line . You can cancel a live migration by using the web console or the command line . The VM remains on its original node. Tip You can also initiate and cancel live migration by using the virtctl migrate <vm_name> and virtctl migrate-cancel <vm_name> commands. 10.3.1. Initiating live migration 10.3.1.1. Initiating live migration by using the web console You can live migrate a running virtual machine (VM) to a different node in the cluster by using the OpenShift Container Platform web console. Note The Migrate action is visible to all users but only cluster administrators can initiate a live migration. Prerequisites The VM must be migratable. If the VM is configured with a host model CPU, the cluster must have an available node that supports the CPU model. Procedure Navigate to Virtualization VirtualMachines in the web console. Select Migrate from the Options menu beside a VM. Click Migrate . 10.3.1.2. Initiating live migration by using the command line You can initiate the live migration of a running virtual machine (VM) by using the command line to create a VirtualMachineInstanceMigration object for the VM. Procedure Create a VirtualMachineInstanceMigration manifest for the VM that you want to migrate: apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name> Create the object by running the following command: USD oc create -f <migration_name>.yaml The VirtualMachineInstanceMigration object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. 
Verification Obtain the VM status by running the following command: USD oc describe vmi <vm_name> -n <namespace> Example output # ... Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true 10.3.2. Canceling live migration 10.3.2.1. Canceling live migration by using the web console You can cancel the live migration of a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select Cancel Migration on the Options menu beside a VM. 10.3.2.2. Canceling live migration by using the command line Cancel the live migration of a virtual machine by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job 10.3.3. Additional resources Monitoring the progress of all live migrations by using the web console Viewing VM migration metrics by using the web console
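The virtctl shortcut mentioned in section 10.3 creates the same VirtualMachineInstanceMigration object for you, which you can then watch with oc. A minimal sketch, with <vm_name> and <namespace> as placeholders:

$ virtctl migrate <vm_name> -n <namespace>
$ oc get vmim -n <namespace> --watch

When the migration finishes, the VMI status shows the target node, as in the oc describe vmi output above.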
[ "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5", "apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 workload-type: \"db\" operating-system: \"\"", "oc create -f <migration_policy>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name>", "oc create -f <migration_name>.yaml", "oc describe vmi <vm_name> -n <namespace>", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/live-migration
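For the MigrationPolicy example in section 10.2.2.1 to select anything, the matching labels must exist on the project and on the virtual machine instances. The following is a sketch of one way to add them; note that labels set in the VM template are only picked up by the VMI the next time the VM starts:

$ oc label namespace <project> hpc-workloads=True xyz-workloads-type=""
$ oc patch vm <vm_name> -n <project> --type merge \
    -p '{"spec":{"template":{"metadata":{"labels":{"workload-type":"db","operating-system":""}}}}}'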
Deploying AMQ Broker on OpenShift
Deploying AMQ Broker on OpenShift Red Hat AMQ Broker 7.12 For Use with AMQ Broker 7.12
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/deploying_amq_broker_on_openshift/index
Chapter 6. Migrating business processes to the new process designer
Chapter 6. Migrating business processes to the new process designer The legacy process designer in Business Central is deprecated in Red Hat Process Automation Manager 7.13.5. It will be removed in a future Red Hat Process Automation Manager release. The legacy process designer will not receive any new enhancements or features. If you intend to use the new process designer, start migrating your processes to the new designer. Create all new processes in the new process designer. Note The process engine will continue to support the execution and deployment of business processes generated with the legacy designer into KIE Server. If you have a legacy business process that is functioning and that you do not intend to change, it is not mandatory to migrate to the new designer at this time. You can only migrate business processes that contain supported business process nodes in the new designer. More nodes will be added in future versions of Red Hat Process Automation Manager. Prerequisites You have an existing project that contains a business process asset that was created with the legacy process designer. Procedure In Business Central, click Menu Design Projects . Click the project you want to migrate, for example Mortgage_Process . Click Ok to open the project's asset list. Click the project's Business Process asset to open it in the legacy process designer. Click Migrate Migrate Diagram . Figure 6.1. Migration confirmation message Select Yes or No to confirm if you made changes. This option is only available if you have made changes to your legacy business process. Figure 6.2. Save diagram changes confirmation If the migration is successful, the business process opens in the new process designer and the business process name's extension changes from *.bpmn2 to *.bpmn. If the migration is unsuccessful due to an unsupported node type, Business Central displays the following error message: Figure 6.3. Migration failure message
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/migrating-from-legacy-designer-proc
Chapter 8. Managing metrics
Chapter 8. Managing metrics You can collect metrics to monitor how cluster components and your own workloads are performing. 8.1. Understanding metrics In OpenShift Container Platform 4.13, cluster components are monitored by scraping metrics exposed through service endpoints. You can also configure metrics collection for user-defined projects. Metrics enable you to monitor how cluster components and your own workloads are performing. You can define the metrics that you want to provide for your own workloads by using Prometheus client libraries at the application level. In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can list all available metrics for a service by running a curl query against http://<endpoint>/metrics . For instance, you can expose a route to the prometheus-example-app example application and then run the following to view all of its available metrics: USD curl http://<example_app_endpoint>/metrics Example output # HELP http_requests_total Count of all HTTP requests # TYPE http_requests_total counter http_requests_total{code="200",method="get"} 4 http_requests_total{code="404",method="get"} 2 # HELP version Version information about this binary # TYPE version gauge version{version="v0.1.0"} 1 Additional resources Prometheus client library documentation 8.2. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 8.2.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml . Add the following deployment and service configuration details to the file: apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric. Apply the configuration to the cluster: USD oc apply -f prometheus-example-app.yaml It takes some time to deploy the service. You can check that the pod is running: USD oc -n ns1 get pod Example output NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m 8.2.2. 
Specifying how a service is monitored To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod. This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or the monitoring-edit cluster role. You have enabled monitoring for user-defined projects. For this example, you have deployed the prometheus-example-app sample service in the ns1 project. Note The prometheus-example-app sample service does not support TLS authentication. Procedure Create a new YAML configuration file named example-app-service-monitor.yaml . Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app 1 Specify a user-defined namespace where your service runs. 2 Specify endpoint ports to be scraped by Prometheus. 3 Configure a selector to match your service based on its metadata labels. Note A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored. Apply the configuration to the cluster: USD oc apply -f example-app-service-monitor.yaml It takes some time to deploy the ServiceMonitor resource. Verify that the ServiceMonitor resource is running: USD oc -n <namespace> get servicemonitor Example output NAME AGE prometheus-example-monitor 81m 8.2.3. Example service endpoint authentication settings You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs). The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. 8.2.3.1. Sample YAML authentication with a bearer token The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace: Example bearer token secret apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1 1 Specify an authentication token. The following sample shows bearer token authentication settings for a ServiceMonitor CRD. 
The example uses a Secret object named example-bearer-auth : Example bearer token authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the authentication token in the specified Secret object. 2 The name of the Secret object that contains the authentication credentials. Important Do not use bearerTokenFile to configure bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected. 8.2.3.2. Sample YAML for Basic authentication The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace: Example Basic authentication secret apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2 1 Specify a username for authentication. 2 Specify a password for authentication. The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth : Example Basic authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the username in the specified Secret object. 2 4 The name of the Secret object that contains the Basic authentication. 3 The key that contains the password in the specified Secret object. 8.2.3.3. Sample YAML authentication with OAuth 2.0 The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace: Example OAuth 2.0 secret apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 1 Specify an Oauth 2.0 ID. 2 Specify an Oauth 2.0 secret. The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-oauth2 : Example OAuth 2.0 authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the OAuth 2.0 ID in the specified Secret object. 2 4 The name of the Secret object that contains the OAuth 2.0 credentials. 3 The key that contains the OAuth 2.0 secret in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . Additional resources Enabling monitoring for user-defined projects How to scrape metrics using TLS in a ServiceMonitor configuration in a user-defined project PodMonitor API ServiceMonitor API 8.3. Viewing a list of available metrics As a cluster administrator or as a user with view permissions for all projects, you can view a list of metrics available in a cluster and output the list in JSON format. 
Prerequisites You are a cluster administrator, or you have access to the cluster as a user with the cluster-monitoring-view cluster role. You have installed the OpenShift Container Platform CLI ( oc ). You have obtained the OpenShift Container Platform API route for Thanos Querier. You are able to get a bearer token by using the oc whoami -t command. Important You can only use bearer token authentication to access the Thanos Querier API route. Procedure If you have not obtained the OpenShift Container Platform API route for Thanos Querier, run the following command: USD oc get routes -n openshift-monitoring thanos-querier -o jsonpath='{.status.ingress[0].host}' Retrieve a list of metrics in JSON format from the Thanos Querier API route by running the following command. This command uses oc to authenticate with a bearer token. USD curl -k -H "Authorization: Bearer USD(oc whoami -t)" https://<thanos_querier_route>/api/v1/metadata 1 1 Replace <thanos_querier_route> with the OpenShift Container Platform API route for Thanos Querier. 8.4. Querying metrics The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator, you can query metrics for all core OpenShift Container Platform and user-defined projects. As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. 8.4.1. Querying metrics for all projects as a cluster administrator As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure From the Administrator perspective in the OpenShift Container Platform web console, select Observe Metrics . To add one or more queries, do any of the following: Option Description Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Select Add query . Duplicate an existing query. Select the Options menu to the query, then choose Duplicate query . Disable a query from being run. Select the Options menu to the query and choose Disable query . To run queries that you created, select Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. Note By default, the query table shows an expanded view that lists every metric and its current value. 
You can select ˅ to minimize the expanded view for a query. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following: Option Description Hide all metrics from a query. Click the Options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. Hide the plot. Select Hide graph . Additional resources For more information about creating PromQL queries, see the Prometheus query documentation . 8.4.2. Querying metrics for user-defined projects as a developer You can access metrics for a user-defined project as a developer or as a user with view permissions for the project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure From the Developer perspective in the OpenShift Container Platform web console, select Observe Metrics . Select the project that you want to view metrics for in the Project: list. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL . The metrics from the queries are visualized on the plot. Note In the Developer perspective, you can only run one query at a time. Explore the visualized metrics by doing any of the following: Option Description Zoom into the plot and change the time range. Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. Additional resources For more information about creating PromQL queries, see the Prometheus query documentation . 8.5. Getting detailed information about a metrics target In the Administrator perspective in the OpenShift Container Platform web console, you can use the Metrics targets page to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. 
For example, you can view the current status of targeted endpoints to see when OpenShift Container Platform Monitoring is not able to scrape metrics from a targeted component. The Metrics targets page shows targets for default OpenShift Container Platform projects and for user-defined projects. Prerequisites You have access to the cluster as an administrator for the project for which you want to view metrics targets. Procedure In the Administrator perspective, select Observe Targets . The Metrics targets page opens with a list of all service endpoint targets that are being scraped for metrics. This page shows details about targets for default OpenShift Container Platform and user-defined projects. This page lists the following information for each target: Service endpoint URL being scraped ServiceMonitor component being monitored The up or down status of the target Namespace Last scrape time Duration of the last scrape Optional: The list of metrics targets can be long. To find a specific target, do any of the following: Option Description Filter the targets by status and source. Select filters in the Filter list. The following filtering options are available: Status filters: Up . The target is currently up and being actively scraped for metrics. Down . The target is currently down and not being scraped for metrics. Source filters: Platform . Platform-level targets relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User targets relate to user-defined projects. These projects are user-created and can be customized. Search for a target by name or label. Enter a search term in the Text or Label field next to the search box. Sort the targets. Click one or more of the Endpoint Status , Namespace , Last Scrape , and Scrape Duration column headers. Click the URL in the Endpoint column for a target to navigate to its Target details page. This page provides information about the target, including the following: The endpoint URL being scraped for metrics The current Up or Down status of the target A link to the namespace A link to the ServiceMonitor details Labels attached to the target The most recent time that the target was scraped for metrics
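If you prefer to check whether a target is up or down from the command line rather than from the Metrics targets page, you can query the standard Prometheus up metric through the Thanos Querier API route, reusing the bearer token authentication shown earlier in this chapter. The following command is only a sketch: replace <thanos_querier_route> with your route and ns1 with the namespace of the target you are interested in.

$ curl -k -G -H "Authorization: Bearer $(oc whoami -t)" \
    --data-urlencode 'query=up{namespace="ns1"}' \
    https://<thanos_querier_route>/api/v1/query

In the JSON response, a sample value of 1 means that the target is up and being scraped, and 0 means that it is down.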
[ "curl http://<example_app_endpoint>/metrics", "HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1", "apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP", "oc apply -f prometheus-example-app.yaml", "oc -n ns1 get pod", "NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app", "oc apply -f example-app-service-monitor.yaml", "oc -n <namespace> get servicemonitor", "NAME AGE prometheus-example-monitor 81m", "apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app", "apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app", "apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app", "oc get routes -n openshift-monitoring thanos-querier -o jsonpath='{.status.ingress[0].host}'", "curl -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://<thanos_querier_route>/api/v1/metadata 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/monitoring/managing-metrics
Chapter 27. Removing Stratis file systems
Chapter 27. Removing Stratis file systems You can remove an existing Stratis file system, or a Stratis pool, by destroying data on them. 27.1. Removing a Stratis file system You can remove an existing Stratis file system. Data stored on it are lost. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis file system. See Creating a Stratis file system . Procedure Unmount the file system: Destroy the file system: Verification Verify that the file system no longer exists: Additional resources stratis(8) man page on your system 27.2. Deleting a file system from a Stratis pool by using the web console You can use the web console to delete a file system from an existing Stratis pool. Note Deleting a Stratis pool file system erases all the data it contains. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Stratis is installed. The web console detects and installs Stratis by default. However, for manually installing Stratis, see Installing Stratis . The stratisd service is running. You have an existing Stratis pool. You have created a file system on the Stratis pool. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the Stratis pool from which you want to delete a file system. On the Stratis pool page, scroll to the Stratis filesystems section and click the menu button ... for the file system you want to delete. From the drop-down menu, select Delete . In the Confirm deletion dialog box, click Delete . 27.3. Removing a Stratis pool You can remove an existing Stratis pool. Data stored on it are lost. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis pool: To create an unencrypted pool, see Creating an unencrypted Stratis pool . To create an encrypted pool, see Creating an encrypted Stratis pool . Procedure List file systems on the pool: Unmount all file systems on the pool: Destroy the file systems: Destroy the pool: Verification Verify that the pool no longer exists: Additional resources stratis(8) man page on your system 27.4. Deleting a Stratis pool by using the web console You can use the web console to delete an existing Stratis pool. Note Deleting a Stratis pool erases all the data it contains. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. You have an existing Stratis pool. Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button ... for the Stratis pool you want to delete. From the drop-down menu, select Delete pool . In the Permanently delete pool dialog box, click Delete .
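As a compact recap of the command-line procedures in this chapter, the following sketch removes a pool named my-pool that contains a single file system named my-fs. The names match the examples used in this chapter; substitute your own pool and file system names, and remember that all data stored on them is lost.

# stratis filesystem list my-pool
# umount /dev/stratis/my-pool/my-fs
# stratis filesystem destroy my-pool my-fs
# stratis pool destroy my-pool
# stratis pool list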
[ "umount /dev/stratis/ my-pool / my-fs", "stratis filesystem destroy my-pool my-fs", "stratis filesystem list my-pool", "stratis filesystem list my-pool", "umount /dev/stratis/ my-pool / my-fs-1 /dev/stratis/ my-pool / my-fs-2 /dev/stratis/ my-pool / my-fs-n", "stratis filesystem destroy my-pool my-fs-1 my-fs-2", "stratis pool destroy my-pool", "stratis pool list" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/removing-stratis-file-systems
Creating and consuming execution environments
Creating and consuming execution environments Red Hat Ansible Automation Platform 2.4 Create and use execution environments with Ansible Builder Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/creating_and_consuming_execution_environments/index
Chapter 6. GitOps CLI for use with Red Hat OpenShift GitOps
Chapter 6. GitOps CLI for use with Red Hat OpenShift GitOps The GitOps argocd CLI is a tool for configuring and managing Red Hat OpenShift GitOps and Argo CD resources from a terminal. With the GitOps CLI, you can make GitOps computing tasks simple and concise. You can install this CLI tool on different platforms. 6.1. Installing the GitOps CLI See Installing the GitOps CLI . 6.2. Additional resources What is GitOps?
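After installing the CLI, typical day-to-day usage looks like the following sketch. These are standard Argo CD CLI invocations rather than commands taken from this guide; the server address and application name are placeholders for your own values.

$ argocd login openshift-gitops-server-openshift-gitops.apps.example.com
$ argocd app list
$ argocd app sync my-application
$ argocd app get my-application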
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cli_tools/gitops-argocd-cli-tools
Chapter 2. Configuring and deploying a Red Hat OpenStack Platform HCI
Chapter 2. Configuring and deploying a Red Hat OpenStack Platform HCI The following procedure describes the high-level steps involved in configuring and deploying a Red Hat OpenStack Platform (RHOSP) HCI. Each step is expanded on in subsequent sections. Procedure Prepare the predefined custom overcloud role for hyperconverged nodes, ComputeHCI . Configure resource isolation . Map storage management network ports to NICs . Deploy the overcloud . (Optional) Scale the hyperconverged nodes .
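For orientation, the deployment step ultimately comes down to an openstack overcloud deploy command that passes the roles data file containing the ComputeHCI role along with your custom environment files. The following is only a sketch: the file paths and environment file names are placeholders, and the exact set of environment files depends on the resource isolation, storage, and network configuration described in the sections that follow.

(undercloud) $ openstack overcloud deploy --templates \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/templates/storage-config.yaml \
  -e /home/stack/templates/network-environment.yaml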
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/hyperconverged_infrastructure_guide/highlevel-procedure-configuring-deploying-rhosphci
Chapter 3. Targeted Policy
Chapter 3. Targeted Policy Targeted policy is the default SELinux policy used in Red Hat Enterprise Linux. When using targeted policy, processes that are targeted run in a confined domain, and processes that are not targeted run in an unconfined domain. For example, by default, logged-in users run in the unconfined_t domain, and system processes started by init run in the unconfined_service_t domain; both of these domains are unconfined. Executable and writable memory checks may apply to both confined and unconfined domains. However, by default, subjects running in an unconfined domain can allocate writable memory and execute it. These memory checks can be enabled by setting Booleans, which allow the SELinux policy to be modified at runtime. Boolean configuration is discussed later. 3.1. Confined Processes Almost every service that listens on a network, such as sshd or httpd , is confined in Red Hat Enterprise Linux. Also, most processes that run as the root user and perform tasks for users, such as the passwd utility, are confined. When a process is confined, it runs in its own domain, such as the httpd process running in the httpd_t domain. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. Complete this procedure to ensure that SELinux is enabled and the system is prepared to perform the following example: Procedure 3.1. How to Verify SELinux Status Confirm that SELinux is enabled, is running in enforcing mode, and that targeted policy is being used. The correct output should look similar to the output below: See Section 4.4, "Permanent Changes in SELinux States and Modes" for detailed information about changing SELinux modes. As root, create a file in the /var/www/html/ directory: Enter the following command to view the SELinux context of the newly created file: By default, Linux users run unconfined in Red Hat Enterprise Linux, which is why the testfile file is labeled with the SELinux unconfined_u user. RBAC is used for processes, not files. Roles do not have a meaning for files; the object_r role is a generic role used for files (on persistent storage and network file systems). Under the /proc directory, files related to processes may use the system_r role. The httpd_sys_content_t type allows the httpd process to access this file. The following example demonstrates how SELinux prevents the Apache HTTP Server ( httpd ) from reading files that are not correctly labeled, such as files intended for use by Samba. This is an example, and should not be used in production. It assumes that the httpd and wget packages are installed, the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Procedure 3.2. An Example of Confined Process As root, start the httpd daemon: Confirm that the service is running. The output should include the information below (only the time stamp will differ): Change into a directory where your Linux user has write access to, and enter the following command. Unless there are changes to the default configuration, this command succeeds: The chcon command relabels files; however, such label changes do not survive when the file system is relabeled. For permanent changes that survive a file system relabel, use the semanage utility, which is discussed later. 
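For reference, the persistent alternative to a temporary chcon change is a semanage file-context rule followed by restorecon. The following is only a sketch of that later-discussed approach, shown here with the samba_share_t type that the next step applies temporarily with chcon:

~]# semanage fcontext -a -t samba_share_t "/var/www/html/testfile"
~]# restorecon -v /var/www/html/testfile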
As root, enter the following command to change the type to a type used by Samba: Enter the following command to view the changes: Note that the current DAC permissions allow the httpd process access to testfile . Change into a directory that your user has write access to, and enter the following command. Unless there are changes to the default configuration, this command fails: As root, remove testfile : If you do not require httpd to be running, as root, enter the following command to stop it: This example demonstrates the additional security added by SELinux. Although DAC rules allowed the httpd process access to testfile in step 2, because the file was labeled with a type that the httpd process does not have access to, SELinux denied access. If the auditd daemon is running, an error similar to the following is logged to /var/log/audit/audit.log : Also, an error similar to the following is logged to /var/log/httpd/error_log :
[ "~]USD sestatus SELinux status: enabled SELinuxfs mount: /sys/fs/selinux SELinux root directory: /etc/selinux Loaded policy name: targeted Current mode: enforcing Mode from config file: enforcing Policy MLS status: enabled Policy deny_unknown status: allowed Max kernel policy version: 30", "~]# touch /var/www/html/testfile", "~]USD ls -Z /var/www/html/testfile -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/testfile", "~]# systemctl start httpd.service", "~]USD systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Mon 2013-08-05 14:00:55 CEST; 8s ago", "~]USD wget http://localhost/testfile --2009-11-06 17:43:01-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 0 [text/plain] Saving to: `testfile' [ <=> ] 0 --.-K/s in 0s 2009-11-06 17:43:01 (0.00 B/s) - `testfile' saved [0/0]", "~]# chcon -t samba_share_t /var/www/html/testfile", "~]USD ls -Z /var/www/html/testfile -rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile", "~]USD wget http://localhost/testfile --2009-11-06 14:11:23-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 403 Forbidden 2009-11-06 14:11:23 ERROR 403: Forbidden.", "~]# rm -i /var/www/html/testfile", "~]# systemctl stop httpd.service", "type=AVC msg=audit(1220706212.937:70): avc: denied { getattr } for pid=1904 comm=\"httpd\" path=\"/var/www/html/testfile\" dev=sda5 ino=247576 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file type=SYSCALL msg=audit(1220706212.937:70): arch=40000003 syscall=196 success=no exit=-13 a0=b9e21da0 a1=bf9581dc a2=555ff4 a3=2008171 items=0 ppid=1902 pid=1904 auid=500 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm=\"httpd\" exe=\"/usr/sbin/httpd\" subj=unconfined_u:system_r:httpd_t:s0 key=(null)", "[Wed May 06 23:00:54 2009] [error] [client 127.0.0.1 ] (13)Permission denied: access to /testfile denied" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-targeted_policy
Chapter 69. Configuring certificate mapping rules in Identity Management
Chapter 69. Configuring certificate mapping rules in Identity Management Certificate mapping rules are a convenient way of allowing users to authenticate using certificates in scenarios when the Identity Management (IdM) administrator does not have access to certain users' certificates. This is typically because the certificates have been issued by an external certificate authority. 69.1. Certificate mapping rules for configuring authentication You might need to configure certificate mapping rules in the following scenarios: Certificates have been issued by the Certificate System of the Active Directory (AD) with which the IdM domain is in a trust relationship. Certificates have been issued by an external certificate authority. The IdM environment is large with many users using smart cards. In this case, adding full certificates can be complicated. The subject and issuer are predictable in most scenarios and therefore easier to add ahead of time than the full certificate. As a system administrator, you can create a certificate mapping rule and add certificate mapping data to a user entry even before a certificate is issued to a particular user. Once the certificate is issued, the user can log in using the certificate even though the full certificate has not yet been uploaded to the user entry. In addition, as certificates are renewed at regular intervals, certificate mapping rules reduce administrative overhead. When a user's certificate is renewed, the administrator does not have to update the user entry. For example, if the mapping is based on the Subject and Issuer values, and if the new certificate has the same subject and issuer as the old one, the mapping still applies. If, in contrast, the full certificate was used, then the administrator would have to upload the new certificate to the user entry to replace the old one. To set up certificate mapping: An administrator has to load the certificate mapping data or the full certificate into a user account. An administrator has to create a certificate mapping rule to allow successful logging into IdM for a user whose account contains a certificate mapping data entry that matches the information on the certificate. Once the certificate mapping rules have been created, when the end-user presents the certificate, stored either on a filesystem or a smart card , authentication is successful. Note The Key Distribution Center (KDC) has a cache for certificate mapping rules. The cache is populated on the first certauth request and it has a hard-coded timeout of 300 seconds. KDC will not see any changes to certificate mapping rules unless it is restarted or the cache expires. For details on the individual components that make up a mapping rule and how to obtain and use them, see Components of an identity mapping rule in IdM and Obtaining the issuer from a certificate for use in a matching rule . Note Your certificate mapping rules can depend on the use case for which you are using the certificate. For example, if you are using SSH with certificates, you must have the full certificate to extract the public key from the certificate. 69.2. Components of an identity mapping rule in IdM You configure different components when creating an identity mapping rule in IdM. Each component has a default value that you can override. You can define the components in either the web UI or the CLI. In the CLI, the identity mapping rule is created using the ipa certmaprule-add command. 
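In outline, a rule created from the CLI combines the components described below; each option is explained in the component descriptions that follow, and a concrete example appears at the end of this section. All values here are placeholders.

$ ipa certmaprule-add rule_name \
    --maprule '<LDAP search filter>' \
    --matchrule '<certificate matching rule>' \
    --domain example.com \
    --priority 1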
Mapping rule The mapping rule component associates (or maps ) a certificate with one or more user accounts. The rule defines an LDAP search filter that associates a certificate with the intended user account. Certificates issued by different certificate authorities (CAs) might have different properties and might be used in different domains. Therefore, IdM does not apply mapping rules unconditionally, but only to the appropriate certificates. The appropriate certificates are defined using matching rules . Note that if you leave the mapping rule option empty, the certificates are searched in the userCertificate attribute as a DER encoded binary file. Define the mapping rule in the CLI using the --maprule option. Matching rule The matching rule component selects a certificate to which you want to apply the mapping rule. The default matching rule matches certificates with the digitalSignature key usage and clientAuth extended key usage. Define the matching rule in the CLI using the --matchrule option. Domain list The domain list specifies the identity domains in which you want IdM to search the users when processing identity mapping rules. If you leave the option unspecified, IdM searches the users only in the local domain to which the IdM client belongs. Define the domain in the CLI using the --domain option. Priority When multiple rules are applicable to a certificate, the rule with the highest priority takes precedence. All other rules are ignored. The lower the numerical value, the higher the priority of the identity mapping rule. For example, a rule with a priority 1 has higher priority than a rule with a priority 2. If a rule has no priority value defined, it has the lowest priority. Define the mapping rule priority in the CLI using the --priority option. Certificate mapping rule example To define, using the CLI, a certificate mapping rule called simple_rule that allows authentication for a certificate issued by the Smart Card CA of the EXAMPLE.ORG organization if the Subject on that certificate matches a certmapdata entry in a user account in IdM: 69.3. Obtaining data from a certificate for use in a matching rule This procedure describes how to obtain data from a certificate so that you can copy and paste it into the matching rule of a certificate mapping rule. To get data required by a matching rule, use the sssctl cert-show or sssctl cert-eval-rule commands. Prerequisites You have the user certificate in PEM format. Procedure Create a variable pointing to your certificate that also ensures it is correctly encoded so you can retrieve the required data. Use the sssctl cert-eval-rule to determine the matching data. In the following example the certificate serial number is used. In this case, add everything after altSecurityIdentities= to the altSecurityIdentities attribute in AD for the user. If using SKI mapping, use --map='LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u})' . Optional: To create a new mapping rule in the CLI based on a matching rule which specifies that the certificate issuer must match adcs19-WIN1-CA of the ad.example.com domain and the serial number of the certificate must match the altSecurityIdentities entry in a user account: 69.4. 
Configuring certificate mapping for users stored in IdM To enable certificate mapping in IdM if the user for whom certificate authentication is being configured is stored in IdM, a system administrator must complete the following tasks: Set up a certificate mapping rule so that IdM users with certificates that match the conditions specified in the mapping rule and in their certificate mapping data entries can authenticate to IdM. Enter certificate mapping data to an IdM user entry so that the user can authenticate using multiple certificates provided that they all contain the values specified in the certificate mapping data entry. Prerequisites The user has an account in IdM. The administrator has either the whole certificate or the certificate mapping data to add to the user entry. 69.4.1. Adding a certificate mapping rule in the IdM web UI Log in to the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 69.1. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. For example, to make IdM search for the Issuer and Subject entries in any certificate presented to them, and base its decision to authenticate or not on the information found in these two entries of the presented certificate: Enter the matching rule. For example, to only allow certificates issued by the Smart Card CA of the EXAMPLE.ORG organization to authenticate users to IdM: Figure 69.2. Entering the details for a certificate mapping rule in the IdM web UI Click Add at the bottom of the dialog box to add the rule and close the box. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 69.4.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. For example, to make IdM search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate, recognizing only certificates issued by the Smart Card CA of the EXAMPLE.ORG organization: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 69.4.3. Adding certificate mapping data to a user entry in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Users Active users idm_user . Find the Certificate mapping data option and click Add . Choose one of the following options: If you have the certificate of idm_user : In the command-line interface, display the certificate using the cat utility or a text editor: Copy the certificate. In the IdM web UI, click Add to Certificate and paste the certificate into the window that opens up. Figure 69.3. 
Adding a user's certificate mapping data: certificate If you do not have the certificate of idm_user at your disposal but know the Issuer and the Subject of the certificate, check the radio button of Issuer and subject and enter the values in the two respective boxes. Figure 69.4. Adding a user's certificate mapping data: issuer and subject Click Add . Verification If you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of idm_user in the SSSD cache and force a reload of the idm_user information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: The output confirms that now you have certificate mapping data added to idm_user and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as idm_user . 69.4.4. Adding certificate mapping data to a user entry in the IdM CLI Obtain the administrator's credentials: Choose one of the following options: If you have the certificate of idm_user , add the certificate to the user account using the ipa user-add-cert command: If you do not have the certificate of idm_user but know the Issuer and the Subject of the user's certificate: Verification If you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of idm_user in the SSSD cache and force a reload of the idm_user information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: The output confirms that now you have certificate mapping data added to idm_user and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as idm_user . 69.5. Certificate mapping rules for trusts with Active Directory domains Different certificate mapping use cases are possible if an IdM deployment is in a trust relationship with an Active Directory (AD) domain. Depending on the AD configuration, the following scenarios are possible: If the certificate is issued by AD Certificate System but the user and the certificate are stored in IdM, the mapping and the whole processing of the authentication request takes place on the IdM side. For details of configuring this scenario, see Configuring certificate mapping for users stored in IdM If the user is stored in AD, the processing of the authentication request takes place in AD. There are three different subcases: The AD user entry contains the whole certificate. For details how to configure IdM in this scenario, see Configuring certificate mapping for users whose AD user entry contains the whole certificate . AD is configured to map user certificates to user accounts. In this case, the AD user entry does not contain the whole certificate but instead contains an attribute called altSecurityIdentities . For details how to configure IdM in this scenario, see Configuring certificate mapping if AD is configured to map user certificates to user accounts . The AD user entry contains neither the whole certificate nor the mapping data. 
In this case, there are two options: If the user certificate is issued by AD Certificate System, the certificate either contains the user principal name as the Subject Alternative Name (SAN) or, if the latest updates are applied to AD, the SID of the user in the SID extension of the certificate. Both of these can be used to map the certificate to the user. If the user certificate is on a smart card, to enable SSH with smart cards, SSSD must derive the public SSH key from the certificate and therefore the full certificate is required. The only solution is to use the ipa idoverrideuser-add command to add the whole certificate to the AD user's ID override in IdM. For details, see Configuring certificate mapping if AD user entry contains no certificate or mapping data . AD domain administrators can manually map certificates to a user in AD using the altSecurityIdentities attribute. There are six supported values for this attribute, though three mappings are considered insecure. As part of May 10,2022 security update , once it is installed, all devices are in compatibility mode and if a certificate is weakly mapped to a user, authentication occurs as expected. However, warning messages are logged identifying any certificates that are not compatible with full enforcement mode. As of November 14, 2023 or later, all devices will be updated to full enforcement mode and if a certificate fails the strong mapping criteria, authentication will be denied. For example, when an AD user requests an IdM Kerberos ticket with a certificate (PKINIT), AD needs to map the certificate to a user internally and uses the new mapping rules for this. However in IdM, the rules continue to work if IdM is used to map a certificate to a user on an IdM client, . IdM supports the new mapping templates, making it easier for an AD administrator to use the new rules and not maintain both. IdM now supports the new mapping templates added to Active Directory to include: Serial Number: LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur}) Subject Key Id: LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u}) User SID: LDAPU1:(objectsid={sid}) If you do not want to reissue certificates with the new SID extension, you can create a manual mapping by adding the appropriate mapping string to a user's altSecurityIdentities attribute in AD. 69.6. Configuring certificate mapping for users whose AD user entry contains the whole certificate This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains the whole certificate. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains a certificate. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. Note To ensure PKINIT works for a user, one of the following conditions must apply: The certificate in the user entry includes the user principal name or the SID extension for the user. The user entry in AD has a suitable entry in the altSecurityIdentities attribute. 69.6.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 69.5. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. 
To have the whole certificate that is presented to IdM for authentication compared to what is available in AD: Note If mapping using the full certificate, if you renew the certificate, you must ensure that you add the new certificate to the AD user object. Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Figure 69.6. Certificate mapping rule for a user with a certificate stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI:: 69.6.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. To have the whole certificate that is presented for authentication compared to what is available in AD, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Note If mapping using the full certificate, if you renew the certificate, you must ensure that you add the new certificate to the AD user object. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 69.7. Configuring certificate mapping if AD is configured to map user certificates to user accounts This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD, and the user entry in AD contains certificate mapping data. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. 69.7.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 69.7. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. For example, to make AD DC search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate: Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate users to IdM: Enter the domain: Figure 69.8. Certificate mapping rule if AD is configured for mapping Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI:: 69.7.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. For example, to make AD search for the Issuer and Subject entries in any certificate presented, and only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 69.7.3. 
Checking certificate mapping data on the AD side The altSecurityIdentities attribute is the Active Directory (AD) equivalent of certmapdata user attribute in IdM. When configuring certificate mapping in IdM in the scenario when a trusted AD domain is configured to map user certificates to user accounts, the IdM system administrator needs to check that the altSecurityIdentities attribute is set correctly in the user entries in AD. Prerequisites The user account must have user administration access. Procedure To check that AD contains the right information for the user stored in AD, use the ldapsearch command. For example, enter the command below to check with the adserver.ad.example.com server that the following conditions apply: The altSecurityIdentities attribute is set in the user entry of ad_user . The matchrule stipulates that the following conditions apply: The certificate that ad_user uses to authenticate to AD was issued by AD-ROOT-CA of the ad.example.com domain. The subject is <S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user : 69.8. Configuring certificate mapping if AD user entry contains no certificate or mapping data This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains neither the whole certificate nor certificate mapping data. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains neither the whole certificate nor the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has done one of the following: Added the whole AD user certificate to the AD user's user ID override in IdM. Created a certificate mapping rule that maps to an alternative field in the certificate, such as Subject Alternative Name or the SID of the user. 69.8.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 69.9. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. To have the whole certificate that is presented to IdM for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM: Note As the certificate also contains the user principal name as the SAN, or with the latest updates, the SID of the user in the SID extension of the certificate, you can also use these fields to map the certificate to the user. For example, if using the SID of the user, replace this mapping rule with LDAPU1:(objectsid={sid}) . For more information on certificate mapping, see the sss-certmap man page on your system. Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Enter the domain name. For example, to search for users in the ad.example.com domain: Figure 69.10. Certificate mapping rule for a user with no certificate or mapping data stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI: 69.8.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. 
To have the whole certificate that is presented for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Note As the certificate also contains the user principal name as the SAN, or with the latest updates, the SID of the user in the SID extension of the certificate, you can also use these fields to map the certificate to the user. For example, if using the SID of the user, replace this mapping rule with LDAPU1:(objectsid={sid}) . For more information on certificate mapping, see the sss-certmap man page on your system. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 69.8.3. Adding a certificate to an AD user's ID override in the IdM web UI Navigate to Identity ID Views Default Trust View . Click Add . Figure 69.11. Adding a new user ID override in the IdM web UI In the User to override field, enter [email protected] . Copy and paste the certificate of ad_user into the Certificate field. Figure 69.12. Configuring the User ID override for an AD user Click Add . Verification Verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of [email protected] in the SSSD cache and force a reload of the [email protected] information: Run the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule defined in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . Additional resources Using ID views for Active Directory users 69.8.4. Adding a certificate to an AD user's ID override in the IdM CLI Obtain the administrator's credentials: Store the certificate blob in a new variable called CERT : Add the certificate of [email protected] to the user account using the ipa idoverrideuser-add-cert command: Verification Verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of [email protected] in the SSSD cache and force a reload of the [email protected] information: Run the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule defined in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . Additional resources Using ID views for Active Directory users 69.9. 
Combining several identity mapping rules into one To combine several identity mapping rules into one combined rule, use the | (or) character to precede the individual mapping rules, and separate them using () brackets, for example: Certificate mapping filter example 1 In the above example, the filter definition in the --maprule option includes these criteria: ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Adding a certificate mapping rule in IdM altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Adding a certificate mapping rule if the trusted AD domain is configured to map user certificates The addition of the --domain=ad.example.com option means that users mapped to a given certificate are not only searched in the local idm.example.com domain but also in the ad.example.com domain The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria. Certificate mapping filter example 2 In the above example, the filter definition in the --maprule option includes these criteria: userCertificate;binary={cert!bin} is a filter that returns user entries that include the whole certificate. For AD users, creating this type of filter is described in detail in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data . ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Adding a certificate mapping rule in IdM . altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Adding a certificate mapping rule if the trusted AD domain is configured to map user certificates . The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria. 69.10. Additional resources sss-certmap(5) man page on your system
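To review the rules you have created and to check how a particular certificate would be handled, you can combine the commands already used in this chapter. This is only a quick verification sketch, assuming the standard ipa certmaprule-find and ipa certmaprule-show companion commands and reusing the example rule name and certificate file from the earlier sections.

$ ipa certmaprule-find
$ ipa certmaprule-show ipa_cert_for_ad_users
$ CERT=$(openssl x509 -in idm_user_cert.pem -outform der | base64 -w0)
$ sssctl cert-show $CERT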
[ "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'", "CERT=USD(openssl x509 -in /path/to/certificate -outform der|base64 -w0)", "sssctl cert-eval-rule USDCERT --match='<ISSUER>CN=adcs19-WIN1-CA,DC=AD,DC=EXAMPLE,DC=COM' --map='LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur})' Certificate matches rule. Mapping filter: (altSecurityIdentities=X509:<I>DC=com,DC=example,DC=ad,CN=adcs19-WIN1-CA<SR>0F0000000000DB8852DD7B246C9C0F0000003B)", "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=adcs19-WIN1-CA,DC=AD,DC=EXAMPLE,DC=COM' --maprule 'LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur})'", "(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})", "<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add rule_name --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})' ------------------------------------------------------- Added Certificate Identity Mapping Rule \"rule_name\" ------------------------------------------------------- Rule name: rule_name Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500}) Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG Enabled: TRUE", "systemctl restart sssd", "cat idm_user_certificate.pem -----BEGIN CERTIFICATE----- MIIFFTCCA/2gAwIBAgIBEjANBgkqhkiG9w0BAQsFADA6MRgwFgYDVQQKDA9JRE0u RVhBTVBMRS5DT00xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0x ODA5MDIxODE1MzlaFw0yMDA5MDIxODE1MzlaMCwxGDAWBgNVBAoMD0lETS5FWEFN [...output truncated...]", "sss_cache -u idm_user", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=USD(openssl x509 -in idm_user_cert.pem -outform der|base64 -w0) ipa user-add-certmapdata idm_user --certificate USDCERT", "ipa user-add-certmapdata idm_user --subject \"O=EXAMPLE.ORG,CN=test\" --issuer \"CN=Smart Card CA,O=EXAMPLE.ORG\" -------------------------------------------- Added certificate mappings to user \"idm_user\" -------------------------------------------- User login: idm_user Certificate mapping data: X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG", "sss_cache -u idm_user", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", "(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", 
"ad.example.com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add ad_configured_for_mapping_rule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})' --domain=ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"ad_configured_for_mapping_rule\" ------------------------------------------------------- Rule name: ad_configured_for_mapping_rule Mapping rule: (altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "ldapsearch -o ldif-wrap=no -LLL -h adserver.ad.example.com -p 389 -D cn=Administrator,cn=users,dc=ad,dc=example,dc=com -W -b cn=users,dc=ad,dc=example,dc=com \"(cn=ad_user)\" altSecurityIdentities Enter LDAP Password: dn: CN=ad_user,CN=Users,DC=ad,DC=example,DC=com altSecurityIdentities: X509:<I>DC=com,DC=example,DC=ad,CN=AD-ROOT-CA<S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user", "(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=USD(openssl x509 -in /path/to/certificate -outform der|base64 -w0)", "ipa idoverrideuser-add-cert [email protected] --certificate USDCERT", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "ipa certmaprule-add ad_cert_for_ipa_and_ad_users \\ --maprule='(|(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' \\ --matchrule='<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' \\ --domain=ad.example.com", "ipa certmaprule-add ipa_cert_for_ad_users --maprule='(|(userCertificate;binary={cert!bin})(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=Certificate Authority,O=REALM.EXAMPLE.COM' --domain=idm.example.com --domain=ad.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/conf-certmap-idm_configuring-and-managing-idm
Chapter 2. Creating a cluster on GCP with Workload Identity Federation authentication
Chapter 2. Creating a cluster on GCP with Workload Identity Federation authentication 2.1. Workload Identity Federation overview Workload Identity Federation (WIF) is a Google Cloud Platform (GCP) Identity and Access Management (IAM) feature that provides third parties a secure method to access resources on a customer's cloud account. WIF eliminates the need for service account keys, and is Google Cloud's preferred method of credential authentication. While service account keys can provide powerful access to your Google Cloud resources, they must be maintained by the end user and can be a security risk if they are not managed properly. WIF does not use service keys as an access method for your Google cloud resources. Instead, WIF grants access by using credentials from external identity providers to generate short-lived credentials for workloads. The workloads can then use these credentials to temporarily impersonate service accounts and access Google Cloud resources. This removes the burden of having to properly maintain service account keys, and removes the risk of unauthorized users gaining access to service account keys. The following bulleted items provides a basic overview of the Workload Identity Federation process: The owner of the Google Cloud Platform (GCP) project configures a workload identity pool with an identity provider, allowing OpenShift Dedicated to access the project's associated service accounts using short-lived credentials. This workload identity pool is configured to authenticate requests using an Identity Provider (IP) that the user defines. For applications to get access to cloud resources, they first pass credentials to Google's Security Token Service (STS). STS uses the specified identity provider to verify the credentials. Once the credentials are verified, STS returns a temporary access token to the caller, giving the application the ability to impersonate the service account bound to that identity. Operators also need access to cloud resources. By using WIF instead of service account keys to grant this access, cluster security is further strengthened, as service account keys are no longer stored in the cluster. Instead, operators are given temporary access tokens that impersonate the service accounts. These tokens are short-lived and regularly rotated. For more information about Workload Identity Federation, see the Google Cloud Platform documentation . Important Workload Identity Federation (WIF) is only available on OpenShift Dedicated version 4.17 and later, and is only supported by the Customer Cloud Subscription (CCS) infrastructure type. 2.2. Prerequisites You must complete the following prerequisites before Creating a Workload Identity Federation cluster using OpenShift Cluster Manager and Creating a Workload Identity Federation cluster using the OCM CLI . You have confirmed your Google Cloud account has the necessary resource quotas and limits to support your desired cluster size according to the cluster resource requirements. Note For more information regarding resource quotas and limits, see Additional resources . You have reviewed the introduction to OpenShift Dedicated and the documentation on architecture concepts . You have reviewed the OpenShift Dedicated cloud deployment options . You have read and completed the Required customer procedure . Note WIF supports the deployment of a private OpenShift Dedicated on Google Cloud Platform (GCP) cluster with Private Service Connect (PSC). Red Hat recommends using PSC when deploying private clusters. 
For more information about the prerequisites for PSC, see Prerequisites for Private Service Connect . 2.3. Creating a Workload Identity Federation cluster using OpenShift Cluster Manager Procedure Log in to OpenShift Cluster Manager and click Create cluster on the OpenShift Dedicated card. Under Billing model , configure the subscription type and infrastructure type. Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation. Select the Customer cloud subscription infrastructure type. Click . Select Run on Google Cloud Platform . Select Workload Identity Federation as the Authentication type. Read and complete all the required prerequisites. Click the checkbox indicating that you have read and completed all the required prerequisites. To create a new WIF configuration, open a terminal window and run the following OCM CLI command. USD ocm gcp create wif-config --name <wif_name> \ 1 --project <gcp_project_id> \ 2 1 Replace <wif_name> with the name of your WIF configuration. 2 Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented. Select a configured WIF configuration from the WIF configuration drop-down list. If you want to select the WIF configuration you created in the last step, click Refresh first. Click . On the Details page, provide a name for your cluster and specify the cluster details: In the Cluster name field, enter a name for your cluster. Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com . If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string. To customize the subdomain prefix, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation. Select a cluster version from the Version drop-down menu. Note Workload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later. Select a cloud provider region from the Region drop-down menu. Select a Single zone or Multi-zone configuration. Optional: Select Enable Secure Boot support for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs . Important To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the policy constraint constraints/compute.requireShieldedVm enabled. For more information regarding GCP organizational policy constraints, see Organization policy constraints . Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default. Optional: Expand Advanced Encryption to make changes to encryption settings. Select Use custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys . With Use Custom KMS keys selected: Select a key ring location from the Key ring location drop-down menu. Select a key ring from the Key ring drop-down menu. Select a key name from the Key name drop-down menu. 
Provide the KMS Service Account . Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated. Note If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography . Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default. Note By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case. Click . On the Machine pool page, select a Compute node instance type and a Compute node count . The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone. Optional: Expand Add node labels to add labels to your nodes. Click Add additional label to add more node labels. Important This step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors . Click . In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private , Use Private Service Connect is selected by default, and cannot be disabled. Private Service Connect (PSC) is Google Cloud's security-enhanced networking feature. Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC): Select Install into an existing VPC . Important Private Service Connect is supported only with Install into an existing VPC . If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy . Important In order to configure a cluster-wide proxy for your cluster, you must first create the Cloud network address translation (NAT) and a Cloud router. See the Additional resources section for more information. Accept the default application ingress settings, or to create your own custom settings, select Custom Settings . Optional: Provide a route selector. Optional: Provide excluded namespaces. Select a namespace ownership policy. Select a wildcard policy. For more information about custom application ingress settings, click the information icon provided for each setting. Click . Optional: To install the cluster into a GCP Shared VPC, follow these steps. Important The VPC owner of the host project must enable a project as a host project in their Google Cloud console and add the Compute Network Administrator , Compute Security Administrator , and DNS Administrator roles to the following service accounts prior to cluster installation: osd-deployer osd-control-plane openshift-machine-api-gcp Failure to do so will cause the cluster to go into the "Installation Waiting" state. If this occurs, you must contact the VPC owner of the host project to assign the roles to the service accounts listed above.
The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For more information, see Enable a host project and Provision Shared VPC . Select Install into GCP Shared VPC . Specify the Host project ID . If the specified host project ID is incorrect, cluster creation fails. If you opted to install the cluster in an existing GCP VPC, provide your Virtual Private Cloud (VPC) subnet settings and select . You must have created the Cloud network address translation (NAT) and a Cloud router. See Additional resources for information about Cloud NATs and Google VPCs. Note If you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project. Click . If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page: Enter a value in at least one of the following fields: Specify a valid HTTP proxy URL . Specify a valid HTTPS proxy URL . In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments. Click . For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy . In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided. Important CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. If the cluster privacy is set to Private , you cannot access your cluster until you configure private connections in your cloud provider. On the Cluster update strategy page, configure your update preferences: Choose a cluster update method: Select Individual updates if you want to schedule each update individually. This is the default option. Select Recurring updates to update your cluster on your preferred day and start time, when updates are available. Note You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle . Provide administrator approval based on your cluster update method: Individual updates: If you select an update version that requires approval, provide an administrator's acknowledgment and click Approve and continue . Recurring updates: If you selected recurring updates for your cluster, provide an administrator's acknowledgment and click Approve and continue . OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus. Optional: You can set a grace period for Node draining during cluster upgrades. A 1 hour grace period is set by default. Click . Note In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. 
The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings . Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable , which is located directly under Delete Protection: Disabled . This will prevent your cluster from being deleted. To disable delete protection, select Disable . By default, clusters are created with the delete protection feature disabled. Verification You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready . 2.4. Creating a Workload Identity Federation cluster using the OCM CLI You can create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with Workload Identity Federation (WIF) using the OpenShift Cluster Manager CLI ( ocm ) in interactive or non-interactive mode. Note Download the latest version of the OpenShift Cluster Manager CLI ( ocm ) for your operating system from the Downloads page on OpenShift Cluster Manager. Important OpenShift Cluster Manager API command-line interface ( ocm ) is a Developer Preview feature only. For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope . Before creating the cluster, you must first create a WIF configuration. Note Migrating an existing non-WIF cluster to a WIF configuration is not supported. This feature can only be enabled during new cluster creation. 2.4.1. Creating a WIF configuration Procedure You can create a WIF configuration using the auto mode or the manual mode. The auto mode enables you to automatically create the service accounts for OpenShift Dedicated components as well as other IAM resources. Alternatively, you can use the manual mode. In manual mode, you are provided with commands within a script.sh file which you use to manually create the service accounts for OpenShift Dedicated components as well as other IAM resources. Based on your mode preference, run one of the following commands to create a WIF configuration: Create a WIF configuration in auto mode by running the following command: USD ocm gcp create wif-config --name <wif_name> \ 1 --project <gcp_project_id> \ 2 --version <osd_version> 3 1 Replace <wif_name> with the name of your WIF configuration. 2 Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented. 3 Optional: Replace <osd_version> with the desired OpenShift Dedicated version the wif-config will need to support. If you do not specify a version, the wif-config will support the latest OpenShift Dedicated y-stream version as well as the last three supported OpenShift Dedicated y-stream versions (beginning with version 4.17). Example output 2024/09/26 13:05:41 Creating workload identity configuration... 
2024/09/26 13:05:47 Workload identity pool created with name 2e1kcps6jtgla8818vqs8tbjjls4oeub 2024/09/26 13:05:47 workload identity provider created with name oidc 2024/09/26 13:05:48 IAM service account osd-worker-oeub created 2024/09/26 13:05:49 IAM service account osd-control-plane-oeub created 2024/09/26 13:05:49 IAM service account openshift-gcp-ccm-oeub created 2024/09/26 13:05:50 IAM service account openshift-gcp-pd-csi-driv-oeub created 2024/09/26 13:05:50 IAM service account openshift-image-registry-oeub created 2024/09/26 13:05:51 IAM service account openshift-machine-api-gcp-oeub created 2024/09/26 13:05:51 IAM service account osd-deployer-oeub created 2024/09/26 13:05:52 IAM service account cloud-credential-operator-oeub created 2024/09/26 13:05:52 IAM service account openshift-cloud-network-c-oeub created 2024/09/26 13:05:53 IAM service account openshift-ingress-gcp-oeub created 2024/09/26 13:05:55 Role "osd_deployer_v4.18" updated Create a WIF configuration in manual mode by running the following command: USD ocm gcp create wif-config --name <wif_name> \ 1 --project <gcp_project_id> \ 2 --mode=manual 1 Replace <wif_name> with the name of your WIF configuration. 2 Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented. Once the WIF is configured, the following service accounts, roles, and groups are created. Table 2.1. WIF configuration service accounts, group and roles Service Account/Group GCP pre-defined roles and Red Hat custom roles osd-deployer osd_deployer_v4.18 osd-control-plane compute.instanceAdmin compute.networkAdmin compute.securityAdmin compute.storageAdmin osd-worker compute.storageAdmin compute.viewer cloud-credential-operator-gcp-ro-creds cloud_credential_operator_gcp_ro_creds_v4 openshift-cloud-network-config-controller-gcp openshift_cloud_network_config_controller_gcp_v4 openshift-gcp-ccm openshift_gcp_ccm_v4 openshift-gcp-pd-csi-driver-operator compute.storageAdmin iam.serviceAccountUser resourcemanager.tagUser openshift_gcp_pd_csi_driver_operator_v4 openshift-image-registry-gcp openshift_image_registry_gcs_v4 openshift-ingress-gcp openshift_ingress_gcp_v4 openshift-machine-api-gcp openshift_machine_api_gcp_v4 Access via SRE group:sd-sre-platform-gcp-access sre_managed_support For further details about WIF configuration roles and their assigned permissions, see managed-cluster-config . 2.4.2. Creating a WIF cluster Procedure You can create a WIF cluster using the interactive mode or the non-interactive mode. In interactive mode, cluster attributes are displayed automatically as prompts during the creation of the cluster. You enter the values for those prompts based on specified requirements in the fields provided. In non-interactive mode, you specify the values for specific parameters within the command. Based on your mode preference, run one of the following commands to create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with a WIF configuration: Create a cluster in interactive mode by running the following command: USD ocm create cluster --interactive 1 1 Interactive mode enables you to specify configuration options at the interactive prompts. Create a cluster in non-interactive mode by running the following command: Note The following example is made up of optional and required parameters and may differ from your non-interactive mode command. Parameters not identified as optional are required.
For additional details about these and other parameters, run the ocm create cluster --help command in your terminal window. USD ocm create cluster <cluster_name> \ 1 --provider=gcp \ 2 --ccs=true \ 3 --wif-config <wif_name> \ 4 --region <gcp_region> \ 5 --subscription-type=marketplace-gcp \ 6 --marketplace-gcp-terms=true \ 7 --version <version> \ 8 --multi-az=true \ 9 --enable-autoscaling=true \ 10 --min-replicas=3 \ 11 --max-replicas=6 \ 12 --secure-boot-for-shielded-vms=true 13 1 Replace <cluster_name> with a name for your cluster. 2 Set value to gcp . 3 Set value to true . 4 Replace <wif_name> with the name of your WIF configuration. 5 Replace <gcp_region> with the Google Cloud Platform (GCP) region where the new cluster will be deployed. 6 Optional: The subscription billing model for the cluster. 7 Optional: If you provided a value of marketplace-gcp for the subscription-type parameter, marketplace-gcp-terms must be equal to true . 8 Optional: The desired OpenShift Dedicated version. 9 Optional: Deploy to multiple data centers. 10 Optional: Enable autoscaling of compute nodes. 11 Optional: Minimum number of compute nodes. 12 Optional: Maximum number of compute nodes. 13 Optional: Secure Boot enables the use of Shielded VMs in the Google Cloud Platform. Important If an OpenShift Dedicated version is specified, the version must also be supported by the assigned WIF configuration. If a version is specified that is not supported by the assigned WIF configuration, cluster creation will fail. If this occurs, update the assigned WIF configuration to the desired version or create a new WIF configuration with the desired version in the --version <osd_version> field. 2.4.3. Listing WIF clusters To list all of your OpenShift Dedicated clusters that have been deployed using the WIF authentication type, run the following command: USD ocm list clusters --parameter search="gcp.authentication.wif_config_id != ''" To list all of your OpenShift Dedicated clusters that have been deployed using a specific wif-config, run the following command: USD ocm list clusters --parameter search="gcp.authentication.wif_config_id = '<wif_config_id>'" 1 1 Replace <wif_config_id> with the ID of the WIF configuration. 2.4.4. Updating a WIF configuration Note Updating a WIF configuration is only applicable for y-stream updates. For an overview of the update process, including details regarding version semantics, see The Ultimate Guide to OpenShift Release and Upgrade Process for Cluster Administrators . Before upgrading a WIF-enabled OpenShift Dedicated cluster to a newer version, you must update the wif-config to that version as well. If you do not update the wif-config version before attempting to upgrade the cluster version, the cluster version upgrade will fail. You can update a wif-config to a specific OpenShift Dedicated version by running the following command: ocm gcp update wif-config <wif_name> \ 1 --version <version> 2 1 Replace <wif_name> with the name of the WIF configuration you want to update. 2 Optional: Replace <version> with the OpenShift Dedicated y-stream version you plan to update the cluster to. If you do not specify a version, the wif-config will be updated to support the latest OpenShift Dedicated y-stream version as well as the last three supported OpenShift Dedicated y-stream versions (beginning with version 4.17). 2.4.5.
Verifying a WIF configuration You can verify that the configuration of resources associated with a WIF configuration is correct by running the ocm gcp verify wif-config command. If a misconfiguration is found, the output provides details about the misconfiguration and recommends that you update the WIF configuration. Before verification, you need the name and ID of the WIF configuration that you want to verify. To obtain the name and ID of your active WIF configurations, run the following command: USD ocm gcp list wif-configs To determine if the WIF configuration you want to verify is configured correctly, run the following command: USD ocm gcp verify wif-config <wif_config_name>|<wif_config_id> 1 1 Replace <wif_config_name> and <wif_config_id> with the name and ID of your WIF configuration, respectively. Example output Error: verification failed with error: missing role 'compute.storageAdmin'. Running 'ocm gcp update wif-config' may fix errors related to cloud resource misconfiguration. exit status 1. 2.5. Additional resources For information about OpenShift Dedicated clusters using a Customer Cloud Subscription (CCS) model on Google Cloud Platform (GCP), see Customer requirements . For information about resource quotas, see Resource quotas per project . For information about limits, see GCP account limits . For information about required APIs, see Required customer procedure . For information about managing workload identity pools, see Manage workload identity pools and providers . For information about managing roles and permissions in your Google Cloud account, see Roles and permissions . For a list of the supported maximums, see Cluster maximums . For information about configuring identity providers, see Configuring identity providers . For information about revoking cluster privileges, see Revoking privileges and access to an OpenShift Dedicated cluster .
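As an illustration of how the update and verification steps fit together, the following sketch shows a typical pre-upgrade check; the wif-config name and the target version are placeholders, so substitute your own values:

# list active WIF configurations to find the name or ID
ocm gcp list wif-configs
# verify the configuration before upgrading the cluster
ocm gcp verify wif-config my-wif-config
# if verification reports a misconfiguration, or the cluster is moving to a newer y-stream, update it
ocm gcp update wif-config my-wif-config --version 4.18
# re-run verification to confirm the fix
ocm gcp verify wif-config my-wif-config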
[ "ocm gcp create wif-config --name <wif_name> \\ 1 --project <gcp_project_id> \\ 2", "ocm gcp create wif-config --name <wif_name> \\ 1 --project <gcp_project_id> \\ 2 --version <osd_version> 3", "2024/09/26 13:05:41 Creating workload identity configuration 2024/09/26 13:05:47 Workload identity pool created with name 2e1kcps6jtgla8818vqs8tbjjls4oeub 2024/09/26 13:05:47 workload identity provider created with name oidc 2024/09/26 13:05:48 IAM service account osd-worker-oeub created 2024/09/26 13:05:49 IAM service account osd-control-plane-oeub created 2024/09/26 13:05:49 IAM service account openshift-gcp-ccm-oeub created 2024/09/26 13:05:50 IAM service account openshift-gcp-pd-csi-driv-oeub created 2024/09/26 13:05:50 IAM service account openshift-image-registry-oeub created 2024/09/26 13:05:51 IAM service account openshift-machine-api-gcp-oeub created 2024/09/26 13:05:51 IAM service account osd-deployer-oeub created 2024/09/26 13:05:52 IAM service account cloud-credential-operator-oeub created 2024/09/26 13:05:52 IAM service account openshift-cloud-network-c-oeub created 2024/09/26 13:05:53 IAM service account openshift-ingress-gcp-oeub created 2024/09/26 13:05:55 Role \"osd_deployer_v4.18\" updated", "ocm gcp create wif-config --name <wif_name> \\ 1 --project <gcp_project_id> \\ 2 --mode=manual", "ocm create cluster --interactive 1", "ocm create cluster <cluster_name> \\ 1 --provider=gcp \\ 2 --ccs=true \\ 3 --wif-config <wif_name> \\ 4 --region <gcp_region> \\ 5 --subscription-type=marketplace-gcp \\ 6 --marketplace-gcp-terms=true \\ 7 --version <version> \\ 8 --multi-az=true \\ 9 --enable-autoscaling=true \\ 10 --min-replicas=3 \\ 11 --max-replicas=6 \\ 12 --secure-boot-for-shielded-vms=true 13", "ocm list clusters --parameter search=\"gcp.authentication.wif_config_id != ''\"", "ocm list clusters --parameter search=\"gcp.authentication.wif_config_id = '<wif_config_id>'\" 1", "ocm gcp update wif-config <wif_name> \\ 1 --version <version> 2", "ocm gcp list wif-configs", "ocm gcp verify wif-config <wif_config_name>|<wif_config_id> 1", "Error: verification failed with error: missing role 'compute.storageAdmin'. Running 'ocm gcp update wif-config' may fix errors related to cloud resource misconfiguration. exit status 1." ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/openshift_dedicated_clusters_on_gcp/osd-creating-a-cluster-on-gcp-with-workload-identity-federation
Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide
Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide Red Hat Ansible Lightspeed with IBM watsonx Code Assistant 2.x_latest Learn how to use Red Hat Ansible Lightspeed with IBM watsonx Code Assistant. Red Hat Customer Content Services
[ "fatal: unable to access 'https://private.repo./mine/ansible-rulebook.git': SSL certificate problem: unable to get local issuer certificate", "kubectl create secret generic <resourcename>-custom-certs --from-file=bundle-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> 1", "spec: bundle_cacert_secret: <resourcename>-custom-certs", "secretGenerator: - name: <resourcename>-custom-certs files: - bundle-ca.crt=<path+filename> options: disableNameSuffixHash: true", "```yaml spec: extra_settings: - setting: LOGOUT_ALLOWED_HOSTS value: \"'<lightspeed_route-HostName>'\" ```", "curl -H \"Authorization: Bearer <token>\" https://<lightspeed_route>/api/v1/me/", "Install postgresql-server & run postgresql-setup command", "Create a keypair called lightspeed-keypair & create a vpc & create vpc_id var & create a security group that allows SSH & create subnet with 10.0.1.0/24 cidr & create an internet gateway & create a route table", "Install postgresql-server & run postgresql-setup command", "Create a keypair called lightspeed-keypair & create a vpc & create vpc_id var & create a security group that allows SSH & create subnet with 10.0.1.0/24 cidr & create an internet gateway & create a route table", "ansible-content-parser --version ansible-content-parser 0.0.1 using ansible-lint:6.20.0 ansible-core:2.15.4 ansible-lint --version ansible-lint 6.13.1 using ansible 2.15.4 A new release of ansible-lint is available: 6.13.1 -> 6.20.0", "ansible-content-parser --version ansible-content-parser 0.0.1 using ansible-lint:6.20.0 ansible-core:2.15.4 ansible-lint --version ansible-lint 6.20.0 using ansible-core:2.15.4 ansible-compat:4.1.10 ruamel-yaml:0.17.32 ruamel-yaml-clib:0.2.7", "ansible-content-parser --profile min --source-license undefined --source-description Samples --repo-name ansible-tower-samples --repo-url 'https://github.com/ansible/ansible-tower-samples' [email protected]:ansible/ansible-tower-samples.git /var/tmp/out_dir", "cat out_dir/ftdata.jsonl| jq { \"data_source_description\": \"Samples\", \"input\": \"---\\n- name: Hello World Sample\\n hosts: all\\n tasks:\\n - name: Hello Message\", \"license\": \"undefined\", \"module\": \"debug\", \"output\": \" debug:\\n msg: Hello World!\", \"path\": \"hello_world.yml\", \"repo_name\": \"ansible-tower-samples\", \"repo_url\": \"https://github.com/ansible/ansible-tower-samples\" }", "output/ |-- ftdata.jsonl # Training dataset 1 |-- report.txt # A human-readable report 2 | |-- repository/ 3 | |-- (files copied from the source repository) | |-- metadata/ 4 |-- (metadata files generated during the execution)", "schedule: interval: daily", "Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1006)'))", "extra_settings: - setting: ANSIBLE_AI_MODEL_MESH_API_VERIFY_SSL value: false", "extra_settings: - setting: ANSIBLE_AI_MODEL_MESH_API_VERIFY_SSL value: false" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html-single/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/access.redhat.com
Chapter 4. Upgrading an SAP NetWeaver system
Chapter 4. Upgrading an SAP NetWeaver system 4.1. Upgrading an SAP NetWeaver Non-Cloud or BYOS Cloud RHEL system Follow the Upgrading from RHEL 8 to RHEL 9 guide to upgrade your SAP NetWeaver non-cloud or BYOS cloud RHEL 8.* system to RHEL 9.*. At the end of Chapter 6. Verifying the post-upgrade state , verify that only the normal , eus , or e4s repositories are enabled and the RHEL release lock is set to 9.X , where X is the desired minor release. 4.2. Upgrading an SAP NetWeaver Cloud PAYG RHEL system The upgrade of SAP NetWeaver or other SAP application systems hosted on cloud provider PAYG instances is very similar to the upgrade of SAP HANA systems hosted on cloud provider PAYG instances. All non-HANA specific steps listed earlier in the SAP HANA systems upgrade on cloud provider PAYG instances procedure should be applied to complete the upgrade of SAP NetWeaver or other SAP application systems hosted on cloud provider PAYG instances. The only difference is the repository channel for standalone SAP NetWeaver hosts on Microsoft Azure PAYG instances. When upgrading standalone SAP NetWeaver or other SAP application hosts on Microsoft Azure PAYG instances which correspond to the RHEL for SAP Applications SKU, use --channel eus instead of --channel e4s . In all other cases, --channel e4s is used. After the upgrade with --channel eus , the system will have the following Red Hat repositories: # yum repolist rhel-9-for-x86_64-appstream-eus-rhui-rpms rhel-9-for-x86_64-baseos-eus-rhui-rpms rhel-9-for-x86_64-sap-netweaver-eus-rhui-rpms The repolist may contain other non-Red Hat repositories, namely custom repositories of cloud providers for RHUI configuration. Note The currently supported upgrade paths for SAP NetWeaver and other SAP application systems on all cloud providers are the two latest EUS/E4S releases which are supported by Leapp for non-HANA systems, as described in the Upgrading from RHEL 8 to RHEL 9 document. As always, run all the upgrade steps, including the preparation and pre-upgrade steps, on a test system first until you have verified that the upgrade can be performed successfully in your production environment.
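For illustration, a hedged sketch of the corresponding Leapp invocation on a standalone SAP NetWeaver host running on a Microsoft Azure PAYG instance follows; the target minor release 9.4 is a placeholder, and on other cloud providers or SKUs you would keep --channel e4s as described above:

# generate the preupgrade report first, then run the in-place upgrade
leapp preupgrade --target 9.4 --channel eus
leapp upgrade --target 9.4 --channel eus

After the reboot and post-upgrade verification, the yum repolist output should match the eus repositories shown above.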
[ "yum repolist rhel-9-for-x86_64-appstream-eus-rhui-rpms rhel-9-for-x86_64-baseos-eus-rhui-rpms rhel-9-for-x86_64-sap-netweaver-eus-rhui-rpms" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/upgrading_sap_environments_from_rhel_8_to_rhel_9/asmb_upgrading_netweaver_asmb_upgrading-hana-system
Chapter 7. Installing a private cluster on IBM Cloud VPC
Chapter 7. Installing a private cluster on IBM Cloud VPC In OpenShift Container Platform version 4.12, you can install a private cluster into an existing VPC. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 7.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Create a DNS zone using IBM Cloud DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 7.3. Private clusters in IBM Cloud VPC To create a private cluster on IBM Cloud VPC, you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud VPC APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 7.3.1. Limitations Private clusters on IBM Cloud VPC are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 
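To make the preceding requirements concrete, a minimal sketch of the install-config.yaml fields that matter for a private cluster in an existing VPC follows; every name is a placeholder, the region value is an assumption for completeness, and the full parameter reference appears later in this chapter:

apiVersion: v1
baseDomain: private.example.com   # base domain served by your IBM Cloud DNS Services zone
metadata:
  name: example-private-cluster
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16             # must contain the control plane and compute subnets
platform:
  ibmcloud:
    region: eu-gb                 # assumed value; set to the region of your VPC
    resourceGroupName: existing_resource_group
    vpcName: existing-vpc
    controlPlaneSubnets:
    - existing-subnet-control-plane-zone-1
    computeSubnets:
    - existing-subnet-compute-zone-1
publish: Internal                 # keeps the API server and Ingress Controller private
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...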
7.4. About using a custom VPC In OpenShift Container Platform 4.12, you can deploy a cluster into the subnets of an existing IBM Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 7.4.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.4.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The subnets for control plane machines and compute machines To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 7.4.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. 
Before you update the cluster, you update the content of the mirror registry. 7.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a bastion host on your cloud network or a machine that has access to the network through a VPN. For more information about private cluster installation requirements, see "Private clusters".
Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.8. Exporting the IBM Cloud VPC API key You must set the IBM Cloud VPC API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud account. Procedure Export your IBM Cloud VPC API key as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. 
Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 7.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . 
networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 7.9.1.4. Additional IBM Cloud VPC configuration parameters Additional IBM Cloud VPC configuration parameters are described in the following table: Table 7.4. Additional IBM Cloud VPC parameters Parameter Description Values platform.ibmcloud.resourceGroupName The name of an existing resource group. The existing VPC and subnets should be in this resource group. Cluster installation resources are created in this resource group. String, for example existing_resource_group . platform.ibmcloud.dedicatedHosts.profile The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud VPC dedicated host profile, such as cx2-host-152x304 . [ 1 ] platform.ibmcloud.dedicatedHosts.name An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . platform.ibmcloud.type The instance type for all IBM Cloud VPC machines. Valid IBM Cloud VPC instance type, such as bx2-8x32 . [ 1 ] platform.ibmcloud.vpcName The name of the existing VPC that you want to deploy your cluster to. String. platform.ibmcloud.controlPlaneSubnets The name(s) of the existing subnet(s) in your VPC that you want to deploy your control plane machines to. Specify a subnet for each availability zone. String array platform.ibmcloud.computeSubnets The name(s) of the existing subnet(s) in your VPC that you want to deploy your compute machines to. Specify a subnet for each availability zone. Subnet IDs are not supported. String array To determine which profile best meets your needs, see Instance Profiles in the IBM documentation. 7.9.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.5. 
Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.9.3. Sample customized install-config.yaml file for IBM Cloud VPC You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 17 pullSecret: '{"auths": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 1 8 12 18 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 13 The name of an existing resource group. The existing VPC and subnets should be in this resource group. The cluster is deployed to this resource group. 14 Specify the name of an existing VPC. 15 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. 
Specify a subnet for each availability zone in the region. 16 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 17 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. The default value is External . 19 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated or Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 20 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.9.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.10. Manually creating IAM for IBM Cloud VPC Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud VPC resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, obtain the OpenShift Container Platform release image that your openshift-install binary is built to use: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract --cloud=ibmcloud --credentials-requests USDRELEASE_IMAGE \ --to=<path_to_credential_requests_directory> 1 1 The directory where the credential requests will be stored. This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.12 on IBM Cloud VPC 0000_26_cloud-controller-manager-operator_15_credentialsrequest-ibm.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request-ibmcos.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-storage-operator_03_credentials_request_ibm.yaml 5 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Storage Operator CR is an optional component and might be disabled in your cluster. Create the service ID for each credential request, assign the policies defined, create an API key in IBM Cloud VPC, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ 1 --name <cluster_name> \ 2 --output-dir <installation_directory> \ --resource-group-name <resource_group_name> 3 1 The directory where the credential requests are stored. 2 The name of the OpenShift Container Platform cluster. 3 Optional: The name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH .
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
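Before you move on to the customization steps above, it can be worth confirming that the new cluster is healthy. The following is an optional verification sketch that uses standard oc subcommands and assumes the KUBECONFIG export from the "Logging in to the cluster by using the CLI" section; the exact output depends on your cluster:
oc get nodes            # every control plane and compute node should report a Ready status
oc get clusteroperators # every Operator should report Available=True and Degraded=False
oc get clusterversion   # confirms the installed OpenShift Container Platform version and update status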
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 12 resourceGroupName: eu-gb-example-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: Internal 17 pullSecret: '{\"auths\": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --cloud=ibmcloud --credentials-requests USDRELEASE_IMAGE --to=<path_to_credential_requests_directory> 1", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "0000_26_cloud-controller-manager-operator_15_credentialsrequest-ibm.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request-ibmcos.yaml 3 
0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-storage-operator_03_credentials_request_ibm.yaml 5", "ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_credential_requests_directory> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_cloud_vpc/installing-ibm-cloud-private
Appendix D. Host parameter hierarchy
Appendix D. Host parameter hierarchy You can access host parameters when provisioning hosts. Hosts inherit their parameters from the following locations, in order of increasing precedence: Parameter Level Set in Satellite web UI Globally defined parameters Configure > Global parameters Organization-level parameters Administer > Organizations Location-level parameters Administer > Locations Domain-level parameters Infrastructure > Domains Subnet-level parameters Infrastructure > Subnets Operating system-level parameters Hosts > Provisioning Setup > Operating Systems Host group-level parameters Configure > Host Groups Host parameters Hosts > All Hosts
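As an illustration of this precedence order, the following Hammer CLI sketch sets the same parameter at three levels; when the host is provisioned, the host-level value overrides the host group and global values. The parameter name, host group, and host name are hypothetical, and the exact Hammer options can vary between Satellite releases, so check hammer <subcommand> --help before relying on them:
# Global default (lowest precedence)
hammer global-parameter set --name ntp_server --value 0.pool.ntp.org
# Host group override (takes precedence over global, organization, location, domain, subnet, and operating system values)
hammer hostgroup set-parameter --hostgroup "Base" --name ntp_server --value 1.pool.ntp.org
# Host-level override (highest precedence)
hammer host set-parameter --host host1.example.com --name ntp_server --value 2.pool.ntp.org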
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/host_parameter_hierarchy_provisioning
function::reverse_path_walk
function::reverse_path_walk Name function::reverse_path_walk - get the full dirent path Synopsis Arguments dentry Pointer to dentry. Description Returns the path name (partial path to mount point).
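The following standalone script is an illustrative sketch rather than part of the tapset: it reports files being removed together with their partial paths. The vfs_unlink probe point and the $dentry target variable are assumptions that depend on your kernel version and available debuginfo, so verify them on your system before use.
# Illustrative sketch: print the partial path (up to the mount point) of files being unlinked.
probe kernel.function("vfs_unlink") {
  printf("%s unlinking %s\n", execname(), reverse_path_walk($dentry))
}
# Stop after 30 seconds of observation.
probe timer.s(30) { exit() }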
[ "reverse_path_walk:string(dentry:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-reverse-path-walk
Chapter 2. Authentication mechanisms in Quarkus
Chapter 2. Authentication mechanisms in Quarkus The Quarkus Security framework supports multiple authentication mechanisms, which you can use to secure your applications. You can also combine authentication mechanisms. Tip Before you choose an authentication mechanism for securing your Quarkus applications, review the information provided. 2.1. Overview of supported authentication mechanisms Some supported authentication mechanisms are built into Quarkus, while others require you to add an extension. All of these mechanisms are detailed in the following sections: Built-in authentication mechanisms Other supported authentication mechanisms The following table maps specific authentication requirements to a supported mechanism that you can use in Quarkus: Table 2.1. Authentication requirements and mechanisms Authentication requirement Authentication mechanism Username and password Basic , Form-based authentication Bearer access token OIDC Bearer token authentication , JWT Single sign-on (SSO) OIDC Code Flow , Form-based authentication Client certificate Mutual TLS authentication For more information, see the following Token authentication mechanism comparison table. 2.2. Built-in authentication mechanisms Quarkus Security provides the following built-in authentication support: Basic authentication Form-based authentication Mutual TLS authentication 2.2.1. Basic authentication You can secure your Quarkus application endpoints with the built-in HTTP Basic authentication mechanism. For more information, see the following documentation: Basic authentication Enable Basic authentication Quarkus Security with Jakarta Persistence Getting started with Security by using Basic authentication and Jakarta Persistence Identity providers 2.2.2. Form-based authentication Quarkus provides form-based authentication that works similarly to traditional Servlet form-based authentication. Unlike traditional form authentication, the authenticated user is not stored in an HTTP session because Quarkus does not support clustered HTTP sessions. Instead, the authentication information is stored in an encrypted cookie, which can be read by all cluster members who share the same encryption key. To apply encryption, add the quarkus.http.auth.session.encryption-key property, and ensure the value you set is at least 16 characters long. The encryption key is hashed by using SHA-256. The resulting digest is used as a key for AES-256 encryption of the cookie value. The cookie contains an expiry time as part of the encrypted value, so all nodes in the cluster must have their clocks synchronized. At one-minute intervals, a new cookie gets generated with an updated expiry time if the session is in use. With single-page applications (SPA), you typically want to avoid redirects by removing default page paths, as shown in the following example: # do not redirect, respond with HTTP 200 OK quarkus.http.auth.form.landing-page= # do not redirect, respond with HTTP 401 Unauthorized quarkus.http.auth.form.login-page= quarkus.http.auth.form.error-page= # HttpOnly must be false if you want to log out on the client; it can be true if logging out from the server quarkus.http.auth.form.http-only-cookie=false Now that you have disabled redirects for the SPA, you must log in and log out programmatically from your client. Below are examples of JavaScript methods for logging into the j_security_check endpoint and logging out of the application by destroying the cookie.
const login = () => { // Create an object to represent the form data const formData = new URLSearchParams(); formData.append("j_username", username); formData.append("j_password", password); // Make an HTTP POST request using fetch against j_security_check endpoint fetch("j_security_check", { method: "POST", body: formData, headers: { "Content-Type": "application/x-www-form-urlencoded", }, }) .then((response) => { if (response.status === 200) { // Authentication was successful console.log("Authentication successful"); } else { // Authentication failed console.error("Invalid credentials"); } }) .catch((error) => { console.error(error); }); }; To log out of the SPA from the client, the cookie must be set to quarkus.http.auth.form.http-only-cookie=false so you can destroy the cookie and possibly redirect back to your main page. const logout= () => { // delete the credential cookie, essentially killing the session const removeCookie = `quarkus-credential=; Max-Age=0;path=/`; document.cookie = removeCookie; // perform post-logout actions here, such as redirecting back to your login page }; To log out of the SPA from the server, the cookie can be set to quarkus.http.auth.form.http-only-cookie=true and use this example code to destroy the cookie. @ConfigProperty(name = "quarkus.http.auth.form.cookie-name") String cookieName; @Inject CurrentIdentityAssociation identity; @POST public Response logout() { if (identity.getIdentity().isAnonymous()) { throw new UnauthorizedException("Not authenticated"); } final NewCookie removeCookie = new NewCookie.Builder(cookieName) .maxAge(0) .expiry(Date.from(Instant.EPOCH)) .path("/") .build(); return Response.noContent().cookie(removeCookie).build(); } The following properties can be used to configure form-based authentication: Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.http.auth.form.enabled If form authentication is enabled. Environment variable: QUARKUS_HTTP_AUTH_FORM_ENABLED boolean false quarkus.http.auth.form.post-location The post location. Environment variable: QUARKUS_HTTP_AUTH_FORM_POST_LOCATION string /j_security_check Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.http.auth.certificate-role-properties Properties file containing the client certificate common name (CN) to role mappings. Use it only if the mTLS authentication mechanism is enabled with either quarkus.http.ssl.client-auth=required or quarkus.http.ssl.client-auth=request . Properties file is expected to have the CN=role1,role,... ,roleN format and should be encoded using UTF-8. Environment variable: QUARKUS_HTTP_AUTH_CERTIFICATE_ROLE_PROPERTIES path quarkus.http.auth.realm The authentication realm Environment variable: QUARKUS_HTTP_AUTH_REALM string quarkus.http.auth.form.login-page The login page. Redirect to login page can be disabled by setting quarkus.http.auth.form.login-page= . Environment variable: QUARKUS_HTTP_AUTH_FORM_LOGIN_PAGE string /login.html quarkus.http.auth.form.username-parameter The username field name. Environment variable: QUARKUS_HTTP_AUTH_FORM_USERNAME_PARAMETER string j_username quarkus.http.auth.form.password-parameter The password field name. Environment variable: QUARKUS_HTTP_AUTH_FORM_PASSWORD_PARAMETER string j_password quarkus.http.auth.form.error-page The error page. 
Redirect to error page can be disabled by setting quarkus.http.auth.form.error-page= . Environment variable: QUARKUS_HTTP_AUTH_FORM_ERROR_PAGE string /error.html quarkus.http.auth.form.landing-page The landing page to redirect to if there is no saved page to redirect back to. Redirect to landing page can be disabled by setting quarkus.http.auth.form.landing-page= . Environment variable: QUARKUS_HTTP_AUTH_FORM_LANDING_PAGE string /index.html quarkus.http.auth.form.location-cookie Option to control the name of the cookie used to redirect the user back to the location they want to access. Environment variable: QUARKUS_HTTP_AUTH_FORM_LOCATION_COOKIE string quarkus-redirect-location quarkus.http.auth.form.timeout The inactivity (idle) timeout When inactivity timeout is reached, cookie is not renewed and a new login is enforced. Environment variable: QUARKUS_HTTP_AUTH_FORM_TIMEOUT Duration PT30M quarkus.http.auth.form.new-cookie-interval How old a cookie can get before it will be replaced with a new cookie with an updated timeout, also referred to as "renewal-timeout". Note that smaller values will result in slightly more server load (as new encrypted cookies will be generated more often); however, larger values affect the inactivity timeout because the timeout is set when a cookie is generated. For example if this is set to 10 minutes, and the inactivity timeout is 30m, if a user's last request is when the cookie is 9m old then the actual timeout will happen 21m after the last request because the timeout is only refreshed when a new cookie is generated. That is, no timeout is tracked on the server side; the timestamp is encoded and encrypted in the cookie itself, and it is decrypted and parsed with each request. Environment variable: QUARKUS_HTTP_AUTH_FORM_NEW_COOKIE_INTERVAL Duration PT1M quarkus.http.auth.form.cookie-name The cookie that is used to store the persistent session Environment variable: QUARKUS_HTTP_AUTH_FORM_COOKIE_NAME string quarkus-credential quarkus.http.auth.form.cookie-path The cookie path for the session and location cookies. Environment variable: QUARKUS_HTTP_AUTH_FORM_COOKIE_PATH string / quarkus.http.auth.form.http-only-cookie Set the HttpOnly attribute to prevent access to the cookie via JavaScript. Environment variable: QUARKUS_HTTP_AUTH_FORM_HTTP_ONLY_COOKIE boolean false quarkus.http.auth.form.cookie-same-site SameSite attribute for the session and location cookies. Environment variable: QUARKUS_HTTP_AUTH_FORM_COOKIE_SAME_SITE strict , lax , none strict quarkus.http.auth.permission."permissions".enabled Determines whether the entire permission set is enabled, or not. By default, if the permission set is defined, it is enabled. Environment variable: QUARKUS_HTTP_AUTH_PERMISSION__PERMISSIONS__ENABLED boolean quarkus.http.auth.permission."permissions".policy The HTTP policy that this permission set is linked to. There are three built-in policies: permit, deny and authenticated. Role based policies can be defined, and extensions can add their own policies. Environment variable: QUARKUS_HTTP_AUTH_PERMISSION__PERMISSIONS__POLICY string required quarkus.http.auth.permission."permissions".methods The methods that this permission set applies to. If this is not set then they apply to all methods. Note that if a request matches any path from any permission set, but does not match the constraint due to the method not being listed then the request will be denied. Method specific permissions take precedence over matches that do not have any methods set. 
This means that, for example, if Quarkus is configured to allow GET and POST requests to /admin and no other permissions are configured, PUT requests to /admin will be denied. Environment variable: QUARKUS_HTTP_AUTH_PERMISSION__PERMISSIONS__METHODS list of string quarkus.http.auth.permission."permissions".paths The paths that this permission check applies to. If the path ends in /* then this is treated as a path prefix, otherwise it is treated as an exact match. Matches are done on a length basis, so the most specific path match takes precedence. If multiple permission sets match the same path then explicit methods matches take precedence over matches without methods set, otherwise the most restrictive permissions are applied. Environment variable: QUARKUS_HTTP_AUTH_PERMISSION__PERMISSIONS__PATHS list of string quarkus.http.auth.permission."permissions".auth-mechanism Path specific authentication mechanism which must be used to authenticate a user. It needs to match HttpCredentialTransport authentication scheme such as 'basic', 'bearer', 'form', etc. Environment variable: QUARKUS_HTTP_AUTH_PERMISSION__PERMISSIONS__AUTH_MECHANISM string quarkus.http.auth.permission."permissions".shared Indicates that this policy always applies to the matched paths in addition to the policy with a winning path. Avoid creating more than one shared policy to minimize the performance impact. Environment variable: QUARKUS_HTTP_AUTH_PERMISSION__PERMISSIONS__SHARED boolean false quarkus.http.auth.policy."role-policy".roles-allowed The roles that are allowed to access resources protected by this policy. By default, access is allowed to any authenticated user. Environment variable: QUARKUS_HTTP_AUTH_POLICY__ROLE_POLICY__ROLES_ALLOWED list of string ** quarkus.http.auth.policy."role-policy".roles Add roles granted to the SecurityIdentity based on the roles that the SecurityIdentity already has. For example, the Quarkus OIDC extension can map roles from the verified JWT access token, and you may want to remap them to deployment-specific roles. Environment variable: QUARKUS_HTTP_AUTH_POLICY__ROLE_POLICY__ROLES Map<String,List<String>> quarkus.http.auth.policy."role-policy".permissions Permissions granted to the SecurityIdentity if this policy is applied successfully (the policy allows the request to proceed) and the authenticated request has the required role. For example, you can map permission perm1 with actions action1 and action2 to role admin by setting quarkus.http.auth.policy.role-policy1.permissions.admin=perm1:action1,perm1:action2 configuration property. Granted permissions are used for authorization with the @PermissionsAllowed annotation. Environment variable: QUARKUS_HTTP_AUTH_POLICY__ROLE_POLICY__PERMISSIONS Map<String,List<String>> quarkus.http.auth.policy."role-policy".permission-class Permissions granted by this policy will be created with a java.security.Permission implementation specified by this configuration property. The permission class must declare exactly one constructor that accepts permission name ( String ) or permission name and actions ( String , String[] ). Permission class must be registered for reflection if you run your application in a native mode. Environment variable: QUARKUS_HTTP_AUTH_POLICY__ROLE_POLICY__PERMISSION_CLASS string io.quarkus.security.StringPermission About the Duration format To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information.
You can also use a simplified format, starting with a number: If the value is only a number, it represents time in seconds. If the value is a number followed by ms , it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: If the value is a number followed by h , m , or s , it is prefixed with PT . If the value is a number followed by d , it is prefixed with P . 2.2.3. Mutual TLS authentication Quarkus provides mutual TLS (mTLS) authentication so that you can authenticate users based on their X.509 certificates. To use this authentication method, you must first enable SSL/TLS for your application. For more information, see the Supporting secure connections with SSL/TLS section of the Quarkus "HTTP reference" guide. After your application accepts secure connections, the next step is to configure the quarkus.http.ssl.certificate.trust-store-file property with the name of the file that holds all the certificates your application trusts. The specified file also includes information about how your application asks for certificates when a client, such as a browser or other service, tries to access one of its protected resources. quarkus.http.ssl.certificate.key-store-file=server-keystore.jks 1 quarkus.http.ssl.certificate.key-store-password=the_key_store_secret quarkus.http.ssl.certificate.trust-store-file=server-truststore.jks 2 quarkus.http.ssl.certificate.trust-store-password=the_trust_store_secret quarkus.http.ssl.client-auth=required 3 quarkus.http.auth.permission.default.paths=/* 4 quarkus.http.auth.permission.default.policy=authenticated quarkus.http.insecure-requests=disabled 5 1 The keystore where the server's private key is located. 2 The truststore from which the trusted certificates are loaded. 3 With the value set to required , the server demands client certificates. Set the value to REQUEST to allow the server to accept requests without a certificate. This setting is beneficial when supporting authentication methods besides mTLS. 4 Defines a policy where only authenticated users should have access to resources from your application. 5 You can explicitly disable the plain HTTP protocol, thus requiring all requests to use HTTPS. When you set quarkus.http.ssl.client-auth to required , the system automatically sets quarkus.http.insecure-requests to disabled . When the incoming request matches a valid certificate in the truststore, your application can obtain the subject by injecting a SecurityIdentity as follows: Obtaining the subject @Inject SecurityIdentity identity; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return String.format("Hello, %s", identity.getPrincipal().getName()); } You can also get the certificate by using the code outlined in the following example: Obtaining the certificate import java.security.cert.X509Certificate; import io.quarkus.security.credential.CertificateCredential; CertificateCredential credential = identity.getCredential(CertificateCredential.class); X509Certificate certificate = credential.getCertificate(); 2.2.3.1. Mapping certificate attributes to roles The information from the client certificate can be used to add roles to Quarkus SecurityIdentity . You can add new roles to SecurityIdentity after checking a client certificate's common name (CN) attribute. The easiest way to add new roles is to use a certificate attribute to role mapping feature.
For example, you can update the properties shown in the section which introduces Mutual TLS authentication as follows: quarkus.http.ssl.certificate.key-store-file=server-keystore.jks quarkus.http.ssl.certificate.key-store-password=the_key_store_secret quarkus.http.ssl.certificate.trust-store-file=server-truststore.jks quarkus.http.ssl.certificate.trust-store-password=the_trust_store_secret quarkus.http.ssl.client-auth=required quarkus.http.insecure-requests=disabled quarkus.http.auth.certificate-role-properties=cert-role-mappings.properties 1 quarkus.http.auth.permission.certauthenticated.paths=/* 2 quarkus.http.auth.permission.certauthenticated.policy=role-policy-cert 3 quarkus.http.auth.policy.role-policy-cert.roles-allowed=user,admin 4 1 The cert-role-mappings.properties classpath resource contains a map of certificate's CN values to roles in the form CN=role or CN=role1,role2 , etc. Let's assume it contains three entries: alice=user,admin , bob=user and jdoe=tester . 2 3 4 Use HTTP security policy to require that SecurityIdentity must have either user or admin roles for the requests to be authorized. Given the preceding configuration, the request is authorized if the client certificate's CN attribute is equal to alice or bob and forbidden if it is equal to jdoe . 2.2.3.2. Using certificate attributes to augment SecurityIdentity You can always register SecurityIdentityAugmentor if the automatic Mapping certificate attributes to roles option does not suit your needs. A custom SecurityIdentityAugmentor can check the values of different client certificate attributes and augment the SecurityIdentity accordingly. For more information about customizing SecurityIdentity , see the Security identity customization section in the Quarkus "Security tips and tricks" guide. 2.3. Other supported authentication mechanisms Quarkus Security also supports the following authentication mechanisms through extensions: OpenID Connect authentication SmallRye JWT authentication 2.3.1. OpenID Connect authentication OpenID Connect (OIDC) is an identity layer that works on top of the OAuth 2.0 protocol. OIDC enables client applications to verify the identity of a user based on the authentication performed by the OIDC provider and retrieve basic information about that user. The Quarkus quarkus-oidc extension provides a reactive, interoperable, multitenant-enabled OIDC adapter that supports Bearer token and Authorization Code Flow authentication mechanisms. The Bearer token authentication mechanism extracts the token from the HTTP Authorization header. The Authorization Code Flow mechanism redirects the user to an OIDC provider to authenticate the user's identity. After the user is redirected back to Quarkus, the mechanism completes the authentication process by exchanging the provided code that was granted for the ID, access, and refresh tokens. You can verify ID and access JSON Web Token (JWT) tokens by using the refreshable JSON Web Key (JWK) set or introspect them remotely. However, opaque tokens, also known as binary tokens, can only be introspected remotely. Note Using the Quarkus OIDC extension, both the Bearer token and Authorization Code Flow authentication mechanisms use SmallRye JWT authentication to represent JWT tokens as MicroProfile JWT org.eclipse.microprofile.jwt.JsonWebToken . 2.3.1.1.
Additional Quarkus resources for OIDC authentication For more information about OIDC authentication and authorization methods that you can use to secure your Quarkus applications, see the following resources: OIDC topic Quarkus information resource Bearer token authentication mechanism OIDC Bearer token authentication Authorization Code Flow authentication mechanism OpenID Connect (OIDC) Authorization Code Flow mechanism OIDC and SAML Identity broker OpenID Connect (OIDC) Authorization Code Flow and SAML Identity broker Multiple tenants that can support the Bearer token authentication or Authorization Code Flow mechanisms Using OpenID Connect (OIDC) multi-tenancy Securing Quarkus with commonly used OpenID Connect providers Configuring well-known OpenID Connect providers Using Keycloak to centralize authorization Using OpenID Connect (OIDC) and Keycloak to centralize authorization Note To enable the Quarkus OIDC extension at runtime, set quarkus.oidc.tenant-enabled=false at build time. Then, re-enable it at runtime by using a system property. For more information about managing the individual tenant configurations in multitenant OIDC deployments, see the Disabling tenant configurations section in the "Using OpenID Connect (OIDC) multi-tenancy" guide. 2.3.1.2. OpenID Connect client and filters The quarkus-oidc-client extension provides OidcClient for acquiring and refreshing access tokens from OpenID Connect and OAuth2 providers that support the following token grants: client-credentials password refresh_token The quarkus-oidc-client-filter extension requires the quarkus-oidc-client extension. It provides JAX-RS RESTful Web Services OidcClientRequestFilter , which sets the access token acquired by OidcClient as the Bearer scheme value of the HTTP Authorization header. This filter can be registered with MicroProfile REST client implementations injected into the current Quarkus endpoint, but it is not related to the authentication requirements of this service endpoint. For example, it can be a public endpoint or be protected with mTLS. Important In this scenario, you do not need to protect your Quarkus endpoint by using the Quarkus OpenID Connect adapter. The quarkus-oidc-token-propagation extension requires the quarkus-oidc extension. It provides Jakarta REST TokenCredentialRequestFilter , which sets the OpenID Connect Bearer token or Authorization Code Flow access token as the Bearer scheme value of the HTTP Authorization header. This filter can be registered with MicroProfile REST client implementations injected into the current Quarkus endpoint, which must be protected by using the Quarkus OIDC adapter. This filter can propagate the access token to the downstream services. For more information, see the OpenID Connect client and token propagation quickstart and OpenID Connect (OIDC) and OAuth2 client and filters reference guides. 2.3.2. SmallRye JWT authentication The quarkus-smallrye-jwt extension provides a MicroProfile JSON Web Token (JWT) 2.1 implementation and multiple options to verify signed and encrypted JWT tokens. It represents them as org.eclipse.microprofile.jwt.JsonWebToken . quarkus-smallrye-jwt is an alternative to the quarkus-oidc Bearer token authentication mechanism and verifies only JWT tokens by using either Privacy Enhanced Mail (PEM) keys or the refreshable JWK key set. quarkus-smallrye-jwt also provides the JWT generation API, which you can use to easily create signed , inner-signed , and encrypted JWT tokens. For more information, see the Using JWT RBAC guide. 2.4. 
Choosing between OpenID Connect, SmallRye JWT, and OAuth2 authentication mechanisms Use the following information to select the appropriate token authentication mechanism to secure your Quarkus applications. List of authentication mechanism use cases quarkus-oidc requires an OpenID Connect provider such as Keycloak, which can verify the bearer tokens or authenticate the end users with the Authorization Code flow. In both cases, quarkus-oidc requires a connection to the specified OpenID Connect provider. If the user authentication requires Authorization Code flow, or you need to support multiple tenants, use quarkus-oidc . quarkus-oidc can also request user information by using both Authorization Code Flow and Bearer access tokens. If your bearer tokens must be verified, use quarkus-oidc or quarkus-smallrye-jwt . If your bearer tokens are in a JSON web token (JWT) format, you can use any extensions in the preceding list. Both quarkus-oidc and quarkus-smallrye-jwt support refreshing the JsonWebKey (JWK) set when the OpenID Connect provider rotates the keys. Therefore, if remote token introspection must be avoided or is unsupported by the providers, use quarkus-oidc or quarkus-smallrye-jwt to verify JWT tokens. To introspect the JWT tokens remotely, you can use quarkus-oidc for verifying the opaque or binary tokens by using remote introspection. quarkus-smallrye-jwt does not support the remote introspection of both opaque or JWT tokens but instead relies on the locally available keys that are usually retrieved from the OpenID Connect provider. quarkus-oidc and quarkus-smallrye-jwt support the JWT and opaque token injection into the endpoint code. Injected JWT tokens provide more information about the user. All extensions can have the tokens injected as Principal . quarkus-smallrye-jwt supports more key formats than quarkus-oidc . quarkus-oidc uses only the JWK-formatted keys that are part of a JWK set, whereas quarkus-smallrye-jwt supports PEM keys. quarkus-smallrye-jwt handles locally signed, inner-signed-and-encrypted, and encrypted tokens. In contrast, although quarkus-oidc can also verify such tokens, it treats them as opaque tokens and verifies them through remote introspection. Note Architectural considerations drive your decision to use opaque or JSON web token (JWT) token format. Opaque tokens tend to be much shorter than JWT tokens but need most of the token-associated state to be maintained in the provider database. Opaque tokens are effectively database pointers. JWT tokens are significantly longer than opaque tokens. Nonetheless, the providers effectively delegate most of the token-associated state to the client by storing it as the token claims and either signing or encrypting them. Table 2.2. Token authentication mechanism comparison Feature required Authentication mechanism quarkus-oidc quarkus-smallrye-jwt Bearer JWT verification Local verification or introspection Local verification Bearer opaque token verification Introspection No Refreshing JsonWebKey set to verify JWT tokens Yes Yes Represent token as Principal Yes Yes Inject JWT as MP JWT Yes Yes Authorization code flow Yes No Multi-tenancy Yes No User information support Yes No PEM key format support No Yes SecretKey support No In JSON Web Key (JWK) format Inner-signed and encrypted or encrypted tokens Introspection Local verification Custom token verification No With injected JWT parser JWT as a cookie support No Yes 2.5. 
Combining authentication mechanisms If different sources provide the user credentials, you can combine authentication mechanisms. For example, you can combine the built-in Basic and the Quarkus quarkus-oidc Bearer token authentication mechanisms. Important You cannot combine the Quarkus quarkus-oidc Bearer token and smallrye-jwt authentication mechanisms because both mechanisms attempt to verify the token extracted from the HTTP Bearer token authentication scheme. 2.5.1. Path-specific authentication mechanisms The following configuration example demonstrates how you can enforce a single selectable authentication mechanism for a given request path: quarkus.http.auth.permission.basic-or-bearer.paths=/service quarkus.http.auth.permission.basic-or-bearer.policy=authenticated quarkus.http.auth.permission.basic.paths=/basic-only quarkus.http.auth.permission.basic.policy=authenticated quarkus.http.auth.permission.basic.auth-mechanism=basic quarkus.http.auth.permission.bearer.paths=/bearer-only quarkus.http.auth.permission.bearer.policy=authenticated quarkus.http.auth.permission.bearer.auth-mechanism=bearer Ensure that the value of the auth-mechanism property matches the authentication scheme supported by HttpAuthenticationMechanism , for example, basic , bearer , or form . 2.6. Proactive authentication Proactive authentication is enabled in Quarkus by default. This means that if an incoming request has a credential, the request will always be authenticated, even if the target page does not require authentication. For more information, see the Quarkus Proactive authentication guide. 2.7. References Quarkus Security overview Quarkus Security architecture Identity providers Authorization of web endpoints
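As described in the Using certificate attributes to augment SecurityIdentity section above, a custom SecurityIdentityAugmentor can add roles derived from the client certificate. The following Java sketch is illustrative only: the class name, the issuer-based rule, and the "Internal CA" organization value are assumptions rather than anything prescribed by this guide; the overall builder pattern is the one documented in the Quarkus "Security tips and tricks" guide.

```java
package org.acme.security;

import java.security.cert.X509Certificate;
import java.util.function.Supplier;

import io.quarkus.security.credential.CertificateCredential;
import io.quarkus.security.identity.AuthenticationRequestContext;
import io.quarkus.security.identity.SecurityIdentity;
import io.quarkus.security.identity.SecurityIdentityAugmentor;
import io.quarkus.security.runtime.QuarkusSecurityIdentity;
import io.smallrye.mutiny.Uni;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class CertificateRoleAugmentor implements SecurityIdentityAugmentor {

    @Override
    public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) {
        CertificateCredential credential = identity.getCredential(CertificateCredential.class);
        if (credential == null) {
            // Not an mTLS request, leave the identity unchanged.
            return Uni.createFrom().item(identity);
        }
        // Run the mapping logic on a worker thread in case it needs to block.
        return context.runBlocking(build(identity, credential.getCertificate()));
    }

    private Supplier<SecurityIdentity> build(SecurityIdentity identity, X509Certificate certificate) {
        return () -> {
            QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity);
            // Hypothetical rule: certificates issued by an internal CA get the admin role.
            String issuer = certificate.getIssuerX500Principal().getName();
            if (issuer.contains("O=Internal CA")) {
                builder.addRole("admin");
            }
            return builder.build();
        };
    }
}
```

Because the augmentor is a CDI bean, Quarkus invokes it after every successful authentication, so keep the mapping logic inexpensive or run it through runBlocking as shown.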
[ "do not redirect, respond with HTTP 200 OK quarkus.http.auth.form.landing-page= do not redirect, respond with HTTP 401 Unauthorized quarkus.http.auth.form.login-page= quarkus.http.auth.form.error-page= HttpOnly must be false if you want to log out on the client; it can be true if logging out from the server quarkus.http.auth.form.http-only-cookie=false", "const login = () => { // Create an object to represent the form data const formData = new URLSearchParams(); formData.append(\"j_username\", username); formData.append(\"j_password\", password); // Make an HTTP POST request using fetch against j_security_check endpoint fetch(\"j_security_check\", { method: \"POST\", body: formData, headers: { \"Content-Type\": \"application/x-www-form-urlencoded\", }, }) .then((response) => { if (response.status === 200) { // Authentication was successful console.log(\"Authentication successful\"); } else { // Authentication failed console.error(\"Invalid credentials\"); } }) .catch((error) => { console.error(error); }); };", "const logout= () => { // delete the credential cookie, essentially killing the session const removeCookie = `quarkus-credential=; Max-Age=0;path=/`; document.cookie = removeCookie; // perform post-logout actions here, such as redirecting back to your login page };", "@ConfigProperty(name = \"quarkus.http.auth.form.cookie-name\") String cookieName; @Inject CurrentIdentityAssociation identity; @POST public Response logout() { if (identity.getIdentity().isAnonymous()) { throw new UnauthorizedException(\"Not authenticated\"); } final NewCookie removeCookie = new NewCookie.Builder(cookieName) .maxAge(0) .expiry(Date.from(Instant.EPOCH)) .path(\"/\") .build(); return Response.noContent().cookie(removeCookie).build(); }", "quarkus.http.ssl.certificate.key-store-file=server-keystore.jks 1 quarkus.http.ssl.certificate.key-store-password=the_key_store_secret quarkus.http.ssl.certificate.trust-store-file=server-truststore.jks 2 quarkus.http.ssl.certificate.trust-store-password=the_trust_store_secret quarkus.http.ssl.client-auth=required 3 quarkus.http.auth.permission.default.paths=/* 4 quarkus.http.auth.permission.default.policy=authenticated quarkus.http.insecure-requests=disabled 5", "@Inject SecurityIdentity identity; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return String.format(\"Hello, %s\", identity.getPrincipal().getName()); }", "import java.security.cert.X509Certificate; import io.quarkus.security.credential.CertificateCredential; CertificateCredential credential = identity.getCredential(CertificateCredential.class); X509Certificate certificate = credential.getCertificate();", "quarkus.http.ssl.certificate.key-store-file=server-keystore.jks quarkus.http.ssl.certificate.key-store-password=the_key_store_secret quarkus.http.ssl.certificate.trust-store-file=server-truststore.jks quarkus.http.ssl.certificate.trust-store-password=the_trust_store_secret quarkus.http.ssl.client-auth=required quarkus.http.insecure-requests=disabled quarkus.http.auth.certificate-role-properties=cert-role-mappings.properties 1 quarkus.http.auth.permission.certauthenticated.paths=/* 2 quarkus.http.auth.permission.certauthenticated.policy=role-policy-cert 3 quarkus.http.auth.policy.role-policy-cert.roles-allowed=user,admin 4", "quarkus.http.auth.permission.basic-or-bearer.paths=/service quarkus.http.auth.permission.basic-or-bearer.policy=authenticated quarkus.http.auth.permission.basic.paths=/basic-only quarkus.http.auth.permission.basic.policy=authenticated 
quarkus.http.auth.permission.basic.auth-mechanism=basic quarkus.http.auth.permission.bearer.paths=/bearer-only quarkus.http.auth.permission.bearer.policy=authenticated quarkus.http.auth.permission.bearer.auth-mechanism=bearer" ]
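The OpenID Connect client and filters section above notes that OidcClientRequestFilter can be registered with a MicroProfile REST client so that an access token acquired by OidcClient is sent as the Bearer scheme value of the HTTP Authorization header. The following sketch shows one way to register that filter; the interface name, the configKey, and the /api path are assumptions for illustration, and the filter's package can vary between Quarkus releases, so verify it against the OpenID Connect (OIDC) and OAuth2 client and filters reference guide.

```java
package org.acme.client;

import org.eclipse.microprofile.rest.client.annotation.RegisterProvider;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

import io.quarkus.oidc.client.filter.OidcClientRequestFilter;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

// MicroProfile REST client interface for a downstream service (name and path are illustrative).
// OidcClientRequestFilter obtains an access token through OidcClient and sets it as the
// Bearer scheme value of the HTTP Authorization header on every request made by this client.
@RegisterRestClient(configKey = "downstream-service")
@RegisterProvider(OidcClientRequestFilter.class)
@Path("/api")
public interface DownstreamServiceClient {

    @GET
    @Path("/data")
    String getData();
}
```

The client interface is then injected with @RestClient and called like a local bean; the token endpoint, client ID, secret, and grant used by OidcClient come from the quarkus.oidc-client.* configuration.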
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/security_architecture/security-authentication-mechanisms
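The SmallRye JWT authentication section above mentions a JWT generation API for creating signed tokens. The following is a minimal sketch of issuing a signed token with the io.smallrye.jwt.build builder; the issuer, claims, and lifetime are placeholder values, and it assumes a private signing key is configured elsewhere, for example through smallrye.jwt.sign.key.location.

```java
package org.acme.token;

import java.util.Set;

import io.smallrye.jwt.build.Jwt;

public class TokenIssuer {

    // Builds and signs a JWT with the SmallRye JWT build API. The private signing key is
    // resolved from configuration, which is assumed to be set up elsewhere in the application.
    public String issueToken() {
        return Jwt.issuer("https://example.com/issuer")  // illustrative issuer claim
                .upn("jdoe@example.com")                 // user principal name claim
                .groups(Set.of("user", "admin"))         // groups consumed as roles by @RolesAllowed
                .expiresIn(900)                          // lifetime in seconds
                .sign();                                 // sign with the configured private key
    }
}
```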
D.4. Outline View
D.4. Outline View The Outline View is a utility view that provides both a tree view dedicated to a specific model (open in an editor) and a scaled thumbnail of the diagram open in the corresponding Diagram Editor . You can show the Outline View by clicking its tab. If there are no open editors, the view indicates that Outline is not available . If a Model Editor is open, the root of the displayed tree is the model for the editor that is currently in focus in Teiid Designer (tab on top).
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/outline_view
Configuring and managing logical volumes
Configuring and managing logical volumes Red Hat Enterprise Linux 8 Configuring and managing LVM Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/index
Chapter 9. TokenRequest [authentication.k8s.io/v1]
Chapter 9. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenRequestSpec contains client provided parameters of a token request. status object TokenRequestStatus is the result of a token request. 9.1.1. .spec Description TokenRequestSpec contains client provided parameters of a token request. Type object Required audiences Property Type Description audiences array (string) Audiences are the intendend audiences of the token. A recipient of a token must identify themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences. boundObjectRef object BoundObjectReference is a reference to an object that a token is bound to. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response. 9.1.2. .spec.boundObjectRef Description BoundObjectReference is a reference to an object that a token is bound to. Type object Property Type Description apiVersion string API version of the referent. kind string Kind of the referent. Valid kinds are 'Pod' and 'Secret'. name string Name of the referent. uid string UID of the referent. 9.1.3. .status Description TokenRequestStatus is the result of a token request. Type object Required token expirationTimestamp Property Type Description expirationTimestamp Time ExpirationTimestamp is the time of expiration of the returned token. token string Token is the opaque bearer token. 9.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token POST : create token of a ServiceAccount 9.2.1. /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token Table 9.1. Global path parameters Parameter Type Description name string name of the TokenRequest Table 9.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create token of a ServiceAccount Table 9.3. Body parameters Parameter Type Description body TokenRequest schema Table 9.4. HTTP responses HTTP code Response body 200 - OK TokenRequest schema 201 - Created TokenRequest schema 202 - Accepted TokenRequest schema 401 - Unauthorized Empty
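Because the endpoint above follows standard Kubernetes REST conventions, a TokenRequest can be created with any HTTP client. The following Java sketch, which uses only the JDK's java.net.http package, is illustrative: the API server URL, namespace, service account name, audience, and the bearer token used to authenticate the call are placeholder assumptions, and TLS trust configuration for the cluster's certificate authority is omitted.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenRequestExample {

    public static void main(String[] args) throws Exception {
        // Placeholder values; replace with real cluster details.
        String apiServer = "https://api.example.cluster:6443";
        String namespace = "default";
        String serviceAccount = "builder";
        String callerToken = System.getenv("K8S_TOKEN"); // credential used to call the API

        // Minimal TokenRequest body: spec.audiences is required, expirationSeconds is optional.
        String body = """
                {
                  "apiVersion": "authentication.k8s.io/v1",
                  "kind": "TokenRequest",
                  "spec": {
                    "audiences": ["https://kubernetes.default.svc"],
                    "expirationSeconds": 3600
                  }
                }
                """;

        String path = "/api/v1/namespaces/" + namespace
                + "/serviceaccounts/" + serviceAccount + "/token";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + path))
                .header("Authorization", "Bearer " + callerToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The 200/201 response body is a TokenRequest whose status.token holds the issued
        // bearer token and status.expirationTimestamp its expiry.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```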
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/tokenrequest-authentication-k8s-io-v1
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, open a Jira issue that describes your concerns. Provide as much detail as possible so that your request can be addressed quickly. Prerequisites You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure To provide your feedback, use the following steps: Click the following link: Create Issue In the Summary text box, enter a brief description of the issue. In the Description text box, provide more details about the issue. Include the URL where you found the issue. Provide information for any other required fields. Allow fields that contain default information to remain at the defaults. Click Create to create the Jira issue for the documentation team. A documentation issue will be created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/proc-providing-feedback-on-redhat-documentation
Chapter 11. Enabling RT-KVM for NFV Workloads
Chapter 11. Enabling RT-KVM for NFV Workloads To facilitate installing and configuring Red Hat Enterprise Linux Real Time KVM (RT-KVM), Red Hat OpenStack Platform provides the following features: A real-time Compute node role that provisions Red Hat Enterprise Linux for real-time. The additional RT-KVM kernel module. Automatic configuration of the Compute node. 11.1. Planning for your RT-KVM Compute nodes When planning for RT-KVM Compute nodes, ensure that the following tasks are completed: You must use Red Hat certified servers for your RT-KVM Compute nodes. For more information, see Red Hat Enterprise Linux for Real Time certified servers . Register your undercloud and attach a valid Red Hat OpenStack Platform subscription. For more information, see Registering the undercloud and attaching subscriptions in the Director Installation and Usage guide. Enable the repositories that are required for the undercloud, such as the rhel-9-server-nfv-rpms repository for RT-KVM, and update the system packages to the latest versions. Note You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository. For more information, see Enabling repositories for the undercloud in the Director Installation and Usage guide. Building the real-time image Install the libguestfs-tools package on the undercloud to get the virt-customize tool: Important If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud: Extract the images: Copy the default image: Register your image to enable Red Hat repositories relevant to your customizations. Replace [username] and [password] with valid credentials in the following example. Note For security, you can remove credentials from the history file if they are used on the command prompt. You can delete individual lines in history using the history -d command followed by the line number. Find a list of pool IDs from your account's subscriptions, and attach the appropriate pool ID to your image. Add the repositories necessary for Red Hat OpenStack Platform with NFV. Create a script to configure real-time capabilities on the image. Run the script to configure the real-time image: Note If you see the following line in the rt.sh script output, "grubby fatal error: unable to find a suitable template" , you can ignore this error. Examine the virt-customize.log file that resulted from the command, to check that the packages installed correctly using the rt.sh script . Relabel SELinux: Extract vmlinuz and initrd: Note The software version in the vmlinuz and initramfs filenames vary with the kernel version. Upload the image: You now have a real-time image you can use with the ComputeOvsDpdkRT composable role on your selected Compute nodes. Modifying BIOS settings on RT-KVM Compute nodes To reduce latency on your RT-KVM Compute nodes, disable all options for the following parameters in your Compute node BIOS settings: Power Management Hyper-Threading CPU sleep states Logical processors 11.2. Configuring OVS-DPDK with RT-KVM Note You must determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK. For more details, see Deriving DPDK parameters with workflows . 11.2.1. Generating the ComputeOvsDpdk composable role Use the ComputeOvsDpdkRT role to specify Compute nodes for the real-time compute image. 
Generate roles_data.yaml for the ComputeOvsDpdkRT role. 11.2.2. Configuring the OVS-DPDK parameters Important Determine the best values for the OVS-DPDK parameters in the network-environment.yaml file to optimize your deployment. For more information, see Section 9.1, "Deriving DPDK parameters with workflows" . Add the NIC configuration for the OVS-DPDK role you use under resource_registry : resource_registry: # Specify the relative/absolute path to the config files you want to use for override the default. OS::TripleO::ComputeOvsDpdkRT::Net::SoftwareConfig: nic-configs/compute-ovs-dpdk.yaml OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml Under parameter_defaults , set the OVS-DPDK, and RT-KVM parameters: # DPDK compute node. ComputeOvsDpdkRTParameters: KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-7,17-23,9-15,25-31" TunedProfileName: "realtime-virtual-host" IsolCpusList: "1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31" NovaComputeCpuDedicatedSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: "1024,1024" OvsDpdkMemoryChannels: "4" OvsPmdCoreList: "1,17,9,25" VhostuserSocketGroup: "hugetlbfs" ComputeOvsDpdkRTImage: "overcloud-realtime-compute" 11.2.3. Deploying the overcloud Deploy the overcloud for ML2-OVS: (undercloud) [stack@undercloud-0 ~]USD openstack overcloud deploy \ --templates \ -r /home/stack/ospd-16-vlan-dpdk-ctlplane-bonding-rt/roles_data.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml \ -e /home/stack/ospd-16-vxlan-dpdk-data-bonding-rt-hybrid/containers-prepare-parameter.yaml \ -e /home/stack/ospd-16-vxlan-dpdk-data-bonding-rt-hybrid/network-environment.yaml 11.3. Launching an RT-KVM instance Perform the following steps to launch an RT-KVM instance on a real-time enabled Compute node: Create an RT-KVM flavor on the overcloud: Launch an RT-KVM instance: To verify that the instance uses the assigned emulator threads, run the following command:
[ "(undercloud) [stack@undercloud-0 ~]USD sudo dnf install libguestfs-tools", "sudo systemctl disable --now iscsid.socket", "(undercloud) [stack@undercloud-0 ~]USD tar -xf /usr/share/rhosp-director-images/overcloud-full.tar (undercloud) [stack@undercloud-0 ~]USD tar -xf /usr/share/rhosp-director-images/ironic-python-agent.tar", "(undercloud) [stack@undercloud-0 ~]USD cp overcloud-hardened-uefi-full.qcow2 overcloud-realtime-compute.qcow2", "virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager register --username=[username] --password=[password]' subscription-manager release --set 8.4", "sudo subscription-manager list --all --available | less virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager attach --pool [pool-ID]'", "virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=openstack-16.2-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-nfv-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms'", "(undercloud) [stack@undercloud-0 ~]USD cat <<'EOF' > rt.sh #!/bin/bash set -eux dnf -v -y --setopt=protected_packages= erase kernel.USD(uname -m) dnf -v -y install kernel-rt kernel-rt-kvm tuned-profiles-nfv-host grubby --set-default /boot/vmlinuz*rt* EOF", "(undercloud) [stack@undercloud-0 ~]USD virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log", "(undercloud) [stack@undercloud-0 ~]USD cat virt-customize.log | grep Verifying Verifying : kernel-3.10.0-957.el7.x86_64 1/1 Verifying : 10:qemu-kvm-tools-rhev-2.12.0-18.el7_6.1.x86_64 1/8 Verifying : tuned-profiles-realtime-2.10.0-6.el7_6.3.noarch 2/8 Verifying : linux-firmware-20180911-69.git85c5d90.el7.noarch 3/8 Verifying : tuned-profiles-nfv-host-2.10.0-6.el7_6.3.noarch 4/8 Verifying : kernel-rt-kvm-3.10.0-957.10.1.rt56.921.el7.x86_64 5/8 Verifying : tuna-0.13-6.el7.noarch 6/8 Verifying : kernel-rt-3.10.0-957.10.1.rt56.921.el7.x86_64 7/8 Verifying : rt-setup-2.0-6.el7.x86_64 8/8", "(undercloud) [stack@undercloud-0 ~]USD virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel", "(undercloud) [stack@undercloud-0 ~]USD mkdir image (undercloud) [stack@undercloud-0 ~]USD guestmount -a overcloud-realtime-compute.qcow2 -i --ro image (undercloud) [stack@undercloud-0 ~]USD cp image/boot/vmlinuz-3.10.0-862.rt56.804.el7.x86_64 ./overcloud-realtime-compute.vmlinuz (undercloud) [stack@undercloud-0 ~]USD cp image/boot/initramfs-3.10.0-862.rt56.804.el7.x86_64.img ./overcloud-realtime-compute.initrd (undercloud) [stack@undercloud-0 ~]USD guestunmount image", "(undercloud) [stack@undercloud-0 ~]USD openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2", "(undercloud) [stack@undercloud-0 ~]USD openstack overcloud roles generate -o roles_data.yaml Controller ComputeOvsDpdkRT", "resource_registry: # Specify the relative/absolute path to the config files you want to use for override the default. OS::TripleO::ComputeOvsDpdkRT::Net::SoftwareConfig: nic-configs/compute-ovs-dpdk.yaml OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml", "DPDK compute node. 
ComputeOvsDpdkRTParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-7,17-23,9-15,25-31\" TunedProfileName: \"realtime-virtual-host\" IsolCpusList: \"1,2,3,4,5,6,7,9,10,17,18,19,20,21,22,23,11,12,13,14,15,25,26,27,28,29,30,31\" NovaComputeCpuDedicatedSet: ['2,3,4,5,6,7,18,19,20,21,22,23,10,11,12,13,14,15,26,27,28,29,30,31'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"1024,1024\" OvsDpdkMemoryChannels: \"4\" OvsPmdCoreList: \"1,17,9,25\" VhostuserSocketGroup: \"hugetlbfs\" ComputeOvsDpdkRTImage: \"overcloud-realtime-compute\"", "(undercloud) [stack@undercloud-0 ~]USD openstack overcloud deploy --templates -r /home/stack/ospd-16-vlan-dpdk-ctlplane-bonding-rt/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml -e /home/stack/ospd-16-vxlan-dpdk-data-bonding-rt-hybrid/containers-prepare-parameter.yaml -e /home/stack/ospd-16-vxlan-dpdk-data-bonding-rt-hybrid/network-environment.yaml", "openstack flavor create r1.small 99 4096 20 4 openstack flavor set --property hw:cpu_policy=dedicated 99 openstack flavor set --property hw:cpu_realtime=yes 99 openstack flavor set --property hw:mem_page_size=1GB 99 openstack flavor set --property hw:cpu_realtime_mask=\"^0-1\" 99 openstack flavor set --property hw:cpu_emulator_threads=isolate 99", "openstack server create --image <rhel> --flavor r1.small --nic net-id=<dpdk-net> test-rt", "virsh dumpxml <instance-id> | grep vcpu -A1 <vcpu placement='static'>4</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='3'/> <vcpupin vcpu='2' cpuset='5'/> <vcpupin vcpu='3' cpuset='7'/> <emulatorpin cpuset='0-1'/> <vcpusched vcpus='2-3' scheduler='fifo' priority='1'/> </cputune>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/enable-rtkvm-nfv-workload_rhosp-nfv
Chapter 23. Data sets authoring
Chapter 23. Data sets authoring A data set is a collection of related sets of information and can be stored in a database, in a Microsoft Excel file, or in memory. A data set definition instructs Business Central methods to access, read, and parse a data set. Business Central does not store data. It enables you to define access to a data set regardless of where the data is stored. For example, if data is stored in a database, a valid data set can contain the entire database or a subset of the database as a result of an SQL query. In both cases the data is used as input for the reporting components of Business Central which then displays the information. To access a data set, you must create and register a data set definition. The data set definition specifies the location of the data set, options to access it, read it, and parse it, and the columns that it contains. Note The Data Sets page is visible only to users with the admin role. 23.1. Adding data sets You can create a data set to fetch data from an external data source and use that data for the reporting components. Procedure In Business Central, go to Admin Data Sets . The Data Sets page opens. Click New Data Set and select one of the following provider types: Bean: Generates a data set from a Java class CSV: Generates a data set from a remote or local CSV file SQL: Generates a data set from an ANSI-SQL compliant database Elastic Search: Generates a data set from Elastic Search nodes Prometheus: Generates a data set using the Prometheus query Kafka: Generates a data set using metrics from Kafka broker, consumer, or producer Note You must configure KIE Server for Prometheus , Kafka , and Execution Server options. Complete the Data Set Creation Wizard and click Test . Note The configuration steps differ based on the provider you choose. Click Save . 23.2. Editing data sets You can edit existing data sets to ensure that the data fetched to the reporting components is up-to-date. Procedure In Business Central, go to Admin Data Sets . The Data Set Explorer page opens. In the Data Set Explorer pane, search for the data set you want to edit, select the data set, and click Edit . In the Data Set Editor pane, use the appropriate tab to edit the data as required. The tabs differ based on the data set provider type you chose. For example, the following changes are applicable for editing a CSV data provider: CSV Configuration: Enables you to change the name of the data set definition, the source file, the separator, and other properties. Preview: Enables you to preview the data. After you click Test in the CSV Configuration tab, the system executes the data set lookup call and if the data is available, a preview appears. Note that the Preview tab has two sub-tabs: Data columns: Enables you to specify what columns are part of your data set definition. Filter: Enables you to add a new filter. Advanced: Enables you to manage the following configurations: Caching: See Caching data for more information. Cache life-cycle Enables you to specify an interval of time after which a data set (or data) is refreshed. The Refresh on stale data feature refreshes the cached data when the back-end data changes. After making the required changes, click Validate . Click Save . 23.3. Data refresh The data refresh feature enables you to specify an interval of time after which a data set (or data) is refreshed. You can access the Data refresh every feature on the Advanced tab of the data set. 
The Refresh on stale data feature refreshes the cached data when the back-end data changes. 23.4. Caching data Business Central provides caching mechanisms for storing data sets and performing data operations using in-memory data. Caching data reduces network traffic, remote system payload, and processing time. To avoid performance issues, configure the cache settings in Business Central. For any data lookup call that results in a data set, the caching method determines where the data lookup call is executed and where the resulting data set is stored. An example of a data lookup call would be all the mortgage applications whose locale parameter is set as "Urban". Business Central data set functionality provides two cache levels: Client level Back-end level You can set the Client Cache and Backend Cache settings on the Advanced tab of the data set. Client cache When the cache is turned on, the data set is cached in a web browser during the lookup operation and further lookup operations do not perform requests to the back-end. Data set operations like grouping, aggregations, filtering, and sorting are processed in the web browser. Enable client caching only if the data set size is small, for example, for data sets with less than 10 MB of data. For large data sets, browser issues such as slow performance or intermittent freezing can occur. Client caching reduces the number of back-end requests including requests to the storage system. Back-end cache When the cache is enabled, the decision engine caches the data set. This reduces the number of back-end requests to the remote storage system. All data set operations are performed in the decision engine using in-memory data. Enable back-end caching only if the data set size is not updated frequently and it can be stored and processed in memory. Using back-end caching is also useful in cases with low latency connectivity issues with the remote storage. Note Back-end cache settings are not always visible in the Advanced tab of the Data Set Editor because Java and CSV data providers rely on back-end caching (data set must be in the memory) in order to resolve any data lookup operation using the in-memory decision engine.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/data-sets-authoring-con_configuring-central
Chapter 2. The Go compiler
Chapter 2. The Go compiler The Go compiler is a build tool and dependency manager for the Go programming language. It offers error checking and optimization of your code. 2.1. Prerequisites Go Toolset is installed. For more information, see Installing Go Toolset . 2.2. Setting up a Go workspace To compile a Go program, you need to set up a Go workspace. Procedure Create a workspace directory as a subdirectory of USDGOPATH/src . A common choice is USDHOME/go . Place your source files into your workspace directory. Set the location of your workspace directory as an environment variable to the USDHOME/.bashrc file by running: Replace < workspace_dir > with the name of your workspace directory. Additional resources The official Go workspaces documentation . 2.3. Compiling a Go program You can compile your Go program using the Go compiler. The Go compiler creates an executable binary file as a result of compiling. Prerequisites A set up Go workspace with configured modules. For information on how to set up a workspace, see Setting up a Go workspace . Procedure In your project directory, run: On Red Hat Enterprise Linux 8: Replace < output_file > with the desired name of your output file and < go_main_package > with the name of your main package. On Red Hat Enterprise Linux 9: Replace < output_file > with the desired name of your output file and < go_main_package > with the name of your main package. 2.4. Running a Go program The Go compiler creates an executable binary file as a result of compiling. Complete the following steps to execute this file and run your program. Prerequisites Your program is compiled. For more information on how to compile your program, see Compiling a Go program . Procedure To run your program, run in the directory containing the executable file: Replace < file_name > with the name of your executable file. 2.5. Installing compiled Go projects You can install already compiled Go projects to use their executable files and libraries in further Go projects. After installation, the executable files and libraries of the project are copied to according directories in the Go workspace. Its dependencies are installed as well. Prerequisites A Go workspace with configured modules. For more information, see Setting up a Go workspace . Procedure To install a Go project, run: On Red Hat Enterprise Linux 8: Replace < go_project > with the name of the Go project you want to install. On Red Hat Enterprise Linux 9: Replace < go_project > with the name of the Go project you want to install. 2.6. Downloading and installing Go projects You can download and install third-party Go projects from online resources to use their executable files and libraries in further Go projects. After installation, the executable files and libraries of the project are copied to according directories in the Go workspace. Its dependencies are installed as well. Prerequisites A Go workspace. For more information, see Setting up a Go workspace . Procedure To download and install a Go project, run: On Red Hat Enterprise Linux 8: Replace < third_party_go_project > with the name of the project you want to download. On Red Hat Enterprise Linux 9: Replace < third_party_go_project > with the name of the project you want to download. For information on possible values of third-party projects, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: 2.7. Additional resources For more information on the Go compiler, see the official Go documentation . 
To display the help index included in Go Toolset, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To display documentation for specific Go packages, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: See Go packages for an overview of Go packages.
[ "echo 'export GOPATH=< workspace_dir >' >> USDHOME/.bashrc source USDHOME/.bashrc", "go build -o < output_file > < go_main_package >", "go build -o < output_file > < go_main_package >", "./< file_name >", "go install < go_project >", "go install < go_project >", "go install < third_party_go_project >", "go install < third_party_go_project >", "go help importpath", "go help importpath", "go help", "go help", "go doc < package_name >", "go doc < package_name >" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.20.10_toolset/assembly_the-go-compiler_using-go-toolset
Chapter 11. Troubleshooting
Chapter 11. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 11.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.14 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 11.2. Migration Toolkit for Containers custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 11.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 11.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 11.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 11.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 11.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration 1 Optional: Returns the number of images. 2 Optional: Returns the number, kind, and API version of the Kubernetes resources. 3 Optional: Returns the PV capacity. 4 Returns a list of image names. The default is false so that the output is not excessively long. 5 Optional: Specify the maximum number of image names to return if listImages is true . 11.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 11.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 11.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a Migmigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster are checked and to return the names of pods that are not in a Running state. 11.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration. 
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 11.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 11.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 11.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 11.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 11.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 11.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Observe Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 11.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 11.3.4.1.1. 
11.3.5. Using the must-gather tool You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past hour, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 This command saves the data as the must-gather/must-gather.tar.gz file. You can upload this file to a support case on the Red Hat Customer Portal. To collect data for the past 24 hours, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump This operation can take a long time. This command saves the data as the must-gather/metrics/prom_data.tar.gz file. 11.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored. Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf
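Because every Velero CLI invocation in this section repeats the same oc exec prefix, you can optionally wrap it in a small shell function. This is a convenience sketch only, not part of MTC; the function name velero_cli is arbitrary, and it assumes the openshift-migration namespace and the deployment/velero container shown in the syntax above:

# Convenience wrapper around the Velero CLI running in the velero deployment
velero_cli() {
  oc -n openshift-migration exec deployment/velero -c velero -- ./velero "$@"
}

# Usage examples, reusing the CR names from this section:
velero_cli backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql
velero_cli restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf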
11.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail.
For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 11.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. 
MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes 
- events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 11.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 11.4.1. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 11.4.2. Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 11.4.2.1. CA certificate error displayed when accessing the MTC console for the first time If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters. To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser. If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page. 11.4.2.2. OAuth timeout error in the MTC console If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the causes are likely to be the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration that blocks access to the oauth-authorization-server URL. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause of the timeout: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 11.4.2.3. 
Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the MTC, certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 11.4.2.4. Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: USD oc logs <Velero_Pod> -n openshift-migration Example output level=error msg="Error checking repository for stale locks" error="error getting backup storage location: BackupStorageLocation.velero.io \"ts-dpa-1\" not found" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259" You can ignore these error messages. A missing BSL cannot cause a migration to fail. 11.4.2.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the following error is displayed in the Velero pod log. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 11.4.2.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR. Example output status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to identify the source of the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. 
Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. Example output completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 11.4.2.7. Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The following error is displayed in the Restic pod log. Example output backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 11.4.3. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 11.4.3.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 11.4.3.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false . 11.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 11.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. 
Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 11.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 11.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console
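The manual rollback steps in the preceding section can also be combined into a short script. The following is a minimal sketch rather than an MTC-provided tool: the namespace and deployment values are placeholders, the stage pods are deleted with the label selector form of the delete command, and the premigration replica count is read from the migration.openshift.io/preQuiesceReplicas annotation, falling back to 1 if the annotation is not present:

NAMESPACE=<namespace>        # a namespace listed in the MigPlan CR
DEPLOYMENT=<deployment>      # the workload that was quiesced during migration

# Delete the stage pods left over from the failed migration.
oc delete pod -l migration.openshift.io/is-stage-pod -n "$NAMESPACE"

# Read the premigration replica count recorded by MTC and scale the workload back up.
REPLICAS=$(oc get deployment "$DEPLOYMENT" -n "$NAMESPACE" -o jsonpath='{.metadata.annotations.migration\.openshift\.io/preQuiesceReplicas}')
oc scale deployment "$DEPLOYMENT" -n "$NAMESPACE" --replicas="${REPLICAS:-1}"

# Confirm that the application pods are running again.
oc get pod -n "$NAMESPACE"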
[ "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. 
exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore 
openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "oc logs <Velero_Pod> -n openshift-migration", "level=error msg=\"Error checking repository for stale locks\" 
error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migration_toolkit_for_containers/troubleshooting-mtc
Server Configuration Guide
Server Configuration Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services
[ "bin/kc.[sh|bat] start --db-url-host=mykeycloakdb", "export KC_DB_URL_HOST=mykeycloakdb", "db-url-host=mykeycloakdb", "bin/kc.[sh|bat] start --help", "db-url-host=USD{MY_DB_HOST}", "db-url-host=USD{MY_DB_HOST:mydb}", "bin/kc.[sh|bat] --config-file=/path/to/myconfig.conf start", "keytool -importpass -alias kc.db-password -keystore keystore.p12 -storepass keystorepass -storetype PKCS12 -v", "bin/kc.[sh|bat] start --config-keystore=/path/to/keystore.p12 --config-keystore-password=keystorepass --config-keystore-type=PKCS12", "bin/kc.[sh|bat] start-dev", "bin/kc.[sh|bat] start", "bin/kc.[sh|bat] build <build-options>", "bin/kc.[sh|bat] build --help", "bin/kc.[sh|bat] build --db=postgres", "bin/kc.[sh|bat] start --optimized <configuration-options>", "bin/kc.[sh|bat] build --db=postgres", "db-url-host=keycloak-postgres db-username=keycloak db-password=change_me hostname=mykeycloak.acme.com https-certificate-file", "bin/kc.[sh|bat] start --optimized", "bin/kc.[sh|bat] start --spi-admin-allowed-system-variables=FOO,BAR", "export JAVA_OPTS_APPEND=\"-Djava.net.preferIPv4Stack=true\"", "bin/kc.[sh|bat] start --bootstrap-admin-username tmpadm --bootstrap-admin-password pass", "bin/kc.[sh|bat] start-dev --bootstrap-admin-client-id tmpadm --bootstrap-admin-client-secret secret", "bin/kc.[sh|bat] bootstrap-admin user", "bin/kc.[sh|bat] bootstrap-admin user --username tmpadm --password:env PASS_VAR", "bin/kc.[sh|bat] bootstrap-admin service", "bin/kc.[sh|bat] bootstrap-admin service --client-id tmpclient --client-secret:env=SECRET_VAR", "bin/kcadm.[sh|bat] config credentials --server http://localhost:8080 --realm master --client <service_account_client_name> --secret <service_account_secret>", "bin/kcadm.[sh|bat] get users/{userId}/credentials -r {realm}", "bin/kcadm.[sh|bat] delete users/{userId}/credentials/{credentialId} -r {realm}", "bin/kc.[sh|bat] bootstrap-admin user --username tmpadm --no-prompt", "bin/kc.[sh|bat] bootstrap-admin user --password:env PASS_VAR --no-prompt", "bin/kc.[sh|bat] bootstrap-admin user --username:env <YourUsernameEnv> --password:env <YourPassEnv>", "bin/kc.[sh|bat] bootstrap-admin service --client-id:env <YourClientIdEnv> --client-secret:env <YourSecretEnv>", "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname \"CN=server\" -alias server -ext \"SAN:c=DNS:localhost,IP:127.0.0.1\" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:26 COPY --from=builder /opt/keycloak/ /opt/keycloak/ change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]", "A example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build", "FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt 
/etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki", "FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && dnf --installroot /mnt/rootfs clean all && rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs /", "build . -t mykeycloak", "run --name mykeycloak -p 8443:8443 -p 9000:9000 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname=localhost", "run --name mykeycloak -p 3000:8443 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname=https://localhost:3000", "run --name mykeycloak -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev", "run --name mykeycloak -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:26 start --db=postgres --features=token-exchange --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> --https-key-store-file=<file> --https-key-store-password=<password>", "setting the admin username -e KC_BOOTSTRAP_ADMIN_USERNAME=<admin-user-name> setting the initial password -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me", "run --name keycloak_unoptimized -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me -v /path/to/realm/data:/opt/keycloak/data/import registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev --import-realm", "run --name mykeycloak -p 8080:8080 -m 1g -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=change_me -e JAVA_OPTS_KC_HEAP=\"-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65\" registry.redhat.io/rhbk/keycloak-rhel9:26 start-dev", "bin/kc.[sh|bat] start --https-certificate-file=/path/to/certfile.pem --https-certificate-key-file=/path/to/keyfile.pem", "bin/kc.[sh|bat] start --https-key-store-file=/path/to/existing-keystore-file", "bin/kc.[sh|bat] start --https-key-store-password=<value>", "bin/kc.[sh|bat] start --https-protocols=<protocol>[,<protocol>]", "bin/kc.[sh|bat] start --https-port=<port>", "bin/kc.[sh|bat] start --hostname my.keycloak.org", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org:123/auth", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --http-enabled true", "bin/kc.[sh|bat] start --hostname-strict false --proxy-headers forwarded", "bin/kc.[sh|bat] start --hostname my.keycloak.org --proxy-headers xforwarded", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --proxy-headers xforwarded", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443", "bin/kc.[sh|bat] start --hostname my.keycloak.org", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443", "bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true", "bin/kc.[sh|bat] start --proxy-headers forwarded", 
"bin/kc.[sh|bat] start --spi-sticky-session-encoder-infinispan-should-attach-route=false", "bin/kc.[sh|bat] start --proxy-headers forwarded --proxy-trusted-addresses=192.168.0.32,127.0.0.0/8", "bin/kc.[sh|bat] start --proxy-protocol-enabled true", "bin/kc.[sh|bat] build --spi-x509cert-lookup-provider=<provider>", "bin/kc.[sh|bat] start --spi-x509cert-lookup-<provider>-ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup-<provider>-ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup-<provider>-certificate-chain-length=10", "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.5.0.24.07/ojdbc11-23.5.0.24.07.jar /opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.5.0.24.07/orai18n-23.5.0.24.07.jar /opt/keycloak/providers/orai18n.jar Setting the build parameter for the database: ENV KC_DB=oracle Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build", "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.8.1.jre11/mssql-jdbc-12.8.1.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar Setting the build parameter for the database: ENV KC_DB=mssql Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build", "bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me", "bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase", "bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver", "show server_encoding;", "create database keycloak with encoding 'UTF8';", "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 ADD --chmod=0666 https://github.com/awslabs/aws-advanced-jdbc-wrapper/releases/download/2.3.1/aws-advanced-jdbc-wrapper-2.3.1.jar /opt/keycloak/providers/aws-advanced-jdbc-wrapper.jar", "bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900", "bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=true", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-initialize-empty=false", "bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql>", "bin/kc.[sh|bat] start --cache=ispn", "<distributed-cache name=\"sessions\" owners=\"2\"> <expiration lifespan=\"-1\"/> </distributed-cache>", "bin/kc.sh start --features-disabled=persistent-user-sessions", "bin/kc.[sh|bat] start --cache-config-file=my-cache-file.xml", "bin/kc.[sh|bat] start --cache-stack=<stack>", "bin/kc.[sh|bat] start --cache-stack=<ec2|google|azure>", "<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> <SSL_KEY_EXCHANGE keystore_name=\"server.jks\" keystore_password=\"password\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" 
change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"keycloak\"> <transport lock-timeout=\"60000\" stack=\"my-encrypt-udp\"/> </cache-container>", "bin/kc.[sh|bat] start --metrics-enabled=true --cache-metrics-histograms-enabled=true", "bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value>", "HTTPS_PROXY=https://www-proxy.acme.com:8080 NO_PROXY=google.com,login.facebook.com", ".*\\.(google|googleapis)\\.com", "bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings='.*\\\\.(google|googleapis)\\\\.com;http://www-proxy.acme.com:8080'", ".*\\.(google|googleapis)\\.com;http://proxyuser:[email protected]:8080", "All requests to Google APIs use http://www-proxy.acme.com:8080 as proxy .*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080 All requests to internal systems use no proxy .*\\.acme\\.com;NO_PROXY All other requests use http://fallback:8080 as proxy .*;http://fallback:8080", "bin/kc.[sh|bat] start --truststore-paths=/opt/truststore/myTrustStore.pfx,/opt/other-truststore/myOtherTrustStore.pem", "bin/kc.[sh|bat] start --https-client-auth=<none|request|required>", "bin/kc.[sh|bat] start --https-trust-store-file=/path/to/file --https-trust-store-password=<value>", "bin/kc.[sh|bat] build --features=\"<name>[,<name>]\"", "bin/kc.[sh|bat] build --features=\"docker,token-exchange\"", "bin/kc.[sh|bat] build --features=\"preview\"", "bin/kc.[sh|bat] build --features-disabled=\"<name>[,<name>]\"", "bin/kc.[sh|bat] build --features-disabled=\"impersonation\"", "spi-<spi-id>-<provider-id>-<property>=<value>", "spi-connections-http-client-default-connection-pool-size=10", "bin/kc.[sh|bat] start --spi-connections-http-client-default-connection-pool-size=10", "bin/kc.[sh|bat] build --spi-email-template-provider=mycustomprovider", "bin/kc.[sh|bat] build --spi-password-hashing-provider-default=mycustomprovider", "bin/kc.[sh|bat] build --spi-email-template-mycustomprovider-enabled=true", "bin/kc.[sh|bat] start --log-level=<root-level>", "bin/kc.[sh|bat] start --log-level=\"<root-level>,<org.category1>:<org.category1-level>\"", "bin/kc.[sh|bat] start --log-level=\"INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info\"", "bin/kc.[sh|bat] start --log=\"<handler1>,<handler2>\"", "bin/kc.[sh|bat] start --log=console,file --log-level=debug --log-console-level=info", "bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug --log-console-level=warn --log-syslog-level=warn", "bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug,org.keycloak.events:trace, --log-syslog-level=trace --log-console-level=info --log-file-level=info", "bin/kc.[sh|bat] start --log-console-format=\"'<format>'\"", "bin/kc.[sh|bat] start --log-console-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"", "bin/kc.[sh|bat] start --log-console-output=json", "{\"timestamp\":\"2022-02-25T10:31:32.452+01:00\",\"sequence\":8442,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. 
Listening on: http://0.0.0.0:8080\",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host-name\",\"processName\":\"QuarkusEntryPoint\",\"processId\":36946}", "bin/kc.[sh|bat] start --log-console-output=default", "2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080", "bin/kc.[sh|bat] start --log-console-color=<false|true>", "bin/kc.[sh|bat] start --log-console-level=warn", "bin/kc.[sh|bat] start --log=\"console,file\"", "bin/kc.[sh|bat] start --log=\"console,file\" --log-file=<path-to>/<your-file.log>", "bin/kc.[sh|bat] start --log-file-format=\"<pattern>\"", "bin/kc.[sh|bat] start --log-file-level=warn", "bin/kc.[sh|bat] start --log=\"console,syslog\"", "bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-app-name=kc-p-itadmins", "bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-endpoint=myhost:12345", "bin/kc.[sh|bat] start --log-syslog-level=warn", "bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-protocol=udp", "bin/kc.[sh|bat] start --log-syslog-format=\"'<format>'\"", "bin/kc.[sh|bat] start --log-syslog-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"", "bin/kc.[sh|bat] start --log-syslog-type=rfc3164", "bin/kc.[sh|bat] start --log-syslog-max-length=1536", "bin/kc.[sh|bat] start --log-syslog-output=json", "2024-04-05T12:32:20.616+02:00 host keycloak 2788276 io.quarkus - {\"timestamp\":\"2024-04-05T12:32:20.616208533+02:00\",\"sequence\":9948,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Profile prod activated. \",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host\",\"processName\":\"QuarkusEntryPoint\",\"processId\":2788276}", "bin/kc.[sh|bat] start --log-syslog-output=default", "2024-04-05T12:31:38.473+02:00 host keycloak 2787568 io.quarkus - 2024-04-05 12:31:38,473 INFO [io.quarkus] (main) Profile prod activated.", "fips-mode-setup --check", "fips-mode-setup --enable", "keytool -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -keystore USDKEYCLOAK_HOME/conf/server.keystore -alias localhost -dname CN=localhost -keypass passwordpassword", "securerandom.strongAlgorithms=PKCS11:SunPKCS11-NSS-FIPS", "keytool -keystore USDKEYCLOAK_HOME/conf/server.keystore -storetype bcfks -providername BCFIPS -providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -providerpath USDKEYCLOAK_HOME/providers/bc-fips-*.jar -alias localhost -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -dname CN=localhost -keypass passwordpassword -J-Djava.security.properties=/tmp/kc.keystore-create.java.security", "bin/kc.[sh|bat] start --features=fips --hostname=localhost --https-key-store-password=passwordpassword --log-level=INFO,org.keycloak.common.crypto:TRACE,org.keycloak.crypto:TRACE", "KC(BCFIPS version 2.0 Approved Mode, FIPS-JVM: enabled) version 1.0 - class org.keycloak.crypto.fips.KeycloakFipsSecurityProvider,", "--spi-password-hashing-pbkdf2-sha512-max-padding-length=14", "fips.provider.7=XMLDSig", "-Djava.security.properties=/location/to/your/file/kc.java.security", "cp USDKEYCLOAK_HOME/providers/bc-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/ cp USDKEYCLOAK_HOME/providers/bctls-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/ cp USDKEYCLOAK_HOME/providers/bcutil-fips-*.jar 
USDKEYCLOAK_HOME/bin/client/lib/", "echo \"keystore.type=bcfks fips.keystore.type=bcfks\" > /tmp/kcadm.java.security export KC_OPTS=\"-Djava.security.properties=/tmp/kcadm.java.security\"", "FROM registry.redhat.io/rhbk/keycloak-rhel9:26 as builder ADD files /tmp/files/ WORKDIR /opt/keycloak RUN cp /tmp/files/*.jar /opt/keycloak/providers/ RUN cp /tmp/files/keycloak-fips.keystore.* /opt/keycloak/conf/server.keystore RUN cp /tmp/files/kc.java.security /opt/keycloak/conf/ RUN /opt/keycloak/bin/kc.sh build --features=fips --fips-mode=strict FROM registry.redhat.io/rhbk/keycloak-rhel9:26 COPY --from=builder /opt/keycloak/ /opt/keycloak/ ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]", "{ \"status\": \"UP\", \"checks\": [] }", "{ \"status\": \"UP\", \"checks\": [ { \"name\": \"Keycloak database connections health check\", \"status\": \"UP\" } ] }", "bin/kc.[sh|bat] build --health-enabled=true", "curl --head -fsS http://localhost:9000/health/ready", "bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true", "bin/kc.[sh|bat] start --metrics-enabled=true", "HELP base_gc_total Displays the total number of collections that have occurred. This attribute lists -1 if the collection count is undefined for this collector. TYPE base_gc_total counter base_gc_total{name=\"G1 Young Generation\",} 14.0 HELP jvm_memory_usage_after_gc_percent The percentage of long-lived heap pool used after the last GC event, in the range [0..1] TYPE jvm_memory_usage_after_gc_percent gauge jvm_memory_usage_after_gc_percent{area=\"heap\",pool=\"long-lived\",} 0.0 HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset TYPE jvm_threads_peak_threads gauge jvm_threads_peak_threads 113.0 HELP agroal_active_count Number of active connections. These connections are in use and not available to be acquired. TYPE agroal_active_count gauge agroal_active_count{datasource=\"default\",} 0.0 HELP base_memory_maxHeap_bytes Displays the maximum amount of memory, in bytes, that can be used for memory management. TYPE base_memory_maxHeap_bytes gauge base_memory_maxHeap_bytes 1.6781410304E10 HELP process_start_time_seconds Start time of the process since unix epoch. 
TYPE process_start_time_seconds gauge process_start_time_seconds 1.675188449054E9 HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time TYPE system_load_average_1m gauge system_load_average_1m 4.005859375", "bin/kc.[sh|bat] start --tracing-enabled=true --features=opentelemetry", "run --name jaeger -p 16686:16686 -p 4317:4317 -p 4318:4318 jaegertracing/all-in-one", "2024-08-05 15:27:07,144 traceId=b636ac4c665ceb901f7fdc3fc7e80154, parentId=d59cea113d0c2549, spanId=d59cea113d0c2549, sampled=true WARN [org.keycloak.events]", "bin/kc.[sh|bat] start --tracing-enabled=true --features=opentelemetry --log=console --log-console-include-trace=false", "bin/kc.[sh|bat] export --help", "bin/kc.[sh|bat] export --dir <dir>", "bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100", "bin/kc.[sh|bat] export --file <file>", "bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm", "bin/kc.[sh|bat] import --help", "bin/kc.[sh|bat] import --dir <dir>", "bin/kc.[sh|bat] import --dir <dir> --override false", "bin/kc.[sh|bat] import --file <file>", "{ \"realm\": \"USD{MY_REALM_NAME}\", \"enabled\": true, }", "bin/kc.[sh|bat] start --import-realm", "bin/kc.[sh|bat] build --vault=file", "bin/kc.[sh|bat] build --vault=keystore", "bin/kc.[sh|bat] start --vault-dir=/my/path", "USD{vault.<realmname>_<secretname>}", "keytool -importpass -alias <realm-name>_<alias> -keystore keystore.p12 -storepass keystorepassword", "bin/kc.[sh|bat] start --vault-file=/path/to/keystore.p12 --vault-pass=<value> --vault-type=<value>", "sso__realm_ldap__credential" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html-single/server_configuration_guide/
Chapter 8. Configuring certificates issued by ADCS for smart card authentication in IdM
Chapter 8. Configuring certificates issued by ADCS for smart card authentication in IdM To configure smart card authentication in IdM for users whose certificates are issued by Active Directory (AD) certificate services: Your deployment is based on cross-forest trust between Identity Management (IdM) and Active Directory (AD). You want to allow smart card authentication for users whose accounts are stored in AD. Certificates are created and stored in Active Directory Certificate Services (ADCS). For an overview of smart card authentication, see Understanding smart card authentication . Configuration is accomplished in the following steps: Copying CA and user certificates from Active Directory to the IdM server and client Configuring the IdM server and clients for smart card authentication using ADCS certificates Converting a PFX (PKCS#12) file to be able to store the certificate and private key into the smart card Configuring timeouts in the sssd.conf file Creating certificate mapping rules for smart card authentication Prerequisites Identity Management (IdM) and Active Directory (AD) trust is installed For details, see Installing trust between IdM and AD . Active Directory Certificate Services (ADCS) is installed and certificates for users are generated 8.1. Windows Server settings required for trust configuration and certificate usage You must configure the following on the Windows Server: Active Directory Certificate Services (ADCS) is installed Certificate Authority is created Optional: If you are using Certificate Authority Web Enrollment, the Internet Information Services (IIS) must be configured Export the certificate: Key must have 2048 bits or more Include a private key You will need a certificate in the following format: Personal Information Exchange - PKCS #12(.PFX) Enable certificate privacy 8.2. Copying certificates from Active Directory using sftp To be able to use smart card authentication, you need to copy the following certificate files: A root CA certificate in the CER format: adcs-winserver-ca.cer on your IdM server. A user certificate with a private key in the PFX format: aduser1.pfx on an IdM client. Note This procedure expects that SSH access is allowed. If SSH is unavailable, the user must copy the file from the AD Server to the IdM server and client. Procedure Connect from the IdM server and copy the adcs-winserver-ca.cer root certificate to the IdM server: Connect from the IdM client and copy the aduser1.pfx user certificate to the client: Now the CA certificate is stored on the IdM server and the user certificate is stored on the client machine. 8.3. Configuring the IdM server and clients for smart card authentication using ADCS certificates You must configure the IdM (Identity Management) server and clients to be able to use smart card authentication in the IdM environment. IdM includes the ipa-advise scripts, which make all necessary changes: Install necessary packages Configure IdM server and clients Copy the CA certificates into the expected locations You can run ipa-advise on your IdM server. Follow this procedure to configure your server and clients for smart card authentication: On an IdM server: Preparing the ipa-advise script to configure your IdM server for smart card authentication. On an IdM server: Preparing the ipa-advise script to configure your IdM client for smart card authentication. On an IdM server: Applying the ipa-advise server script on the IdM server using the AD certificate. Moving the client script to the IdM client machine. 
On an IdM client: Applying the ipa-advise client script on the IdM client using the AD certificate. Prerequisites The certificate has been copied to the IdM server. Obtain the Kerberos ticket. Log in as a user with administration rights. Procedure On the IdM server, use the ipa-advise script for configuring a client: On the IdM server, use the ipa-advise script for configuring a server: On the IdM server, execute the script: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Copy the sc_client.sh script to the client system: Copy the Windows certificate to the client system: On the client system, run the client script: The CA certificate is installed in the correct format on the IdM server and client systems and the next step is to copy the user certificates onto the smart card itself. 8.4. Converting the PFX file Before you store the PFX (PKCS#12) file into the smart card, you must: Convert the file to the PEM format Extract the private key and the certificate to two different files Prerequisites The PFX file is copied into the IdM client machine. Procedure On the IdM client, convert the PFX file into the PEM format: Extract the key into a separate file: Extract the public certificate into a separate file: At this point, you can store the aduser1.key and aduser1.crt into the smart card. 8.5. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates, and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running. 8.6. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . 
Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 8.7. Configuring timeouts in sssd.conf Authentication with a smart card certificate might take longer than the default timeouts used by SSSD. Timeout expiration can be caused by: Slow reader Forwarding from a physical device into a virtual environment Too many certificates stored on the smart card Slow response from the OCSP (Online Certificate Status Protocol) responder if OCSP is used to verify the certificates In this case, you can prolong the following timeouts in the sssd.conf file, for example, to 60 seconds: p11_child_timeout krb5_auth_timeout Prerequisites You must be logged in as root. Procedure Open the sssd.conf file: Change the value of p11_child_timeout : Change the value of krb5_auth_timeout : Save the settings. Now, the interaction with the smart card is allowed to run for 1 minute (60 seconds) before authentication fails with a timeout. 8.8. Creating certificate mapping rules for smart card authentication If you want to use one certificate for a user who has accounts in AD (Active Directory) and in IdM (Identity Management), you can create a certificate mapping rule on the IdM server. After creating such a rule, the user is able to authenticate with their smart card in both domains. For details about certificate mapping rules, see Certificate mapping rules for configuring authentication .
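To make section 8.8 more concrete, the following is a minimal sketch of creating a certificate mapping rule with the ipa certmaprule-add command; the rule name, issuer match, and mapping expression are hypothetical placeholders and must be adapted to the issuer and subject layout of your ADCS certificates:
kinit admin
# Hypothetical rule: match certificates issued by the ADCS CA and map them by issuer and subject
ipa certmaprule-add ad_smartcard_rule \
    --matchrule '<ISSUER>CN=adcs-winserver-ca.*' \
    --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'
# List the configured rules to confirm the new rule is present and enabled
ipa certmaprule-find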
[ "root@idmserver ~]# sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd <Path to certificates> sftp> ls adcs-winserver-ca.cer aduser1.pfx sftp> sftp> get adcs-winserver-ca.cer Fetching <Path to certificates>/adcs-winserver-ca.cer to adcs-winserver-ca.cer <Path to certificates>/adcs-winserver-ca.cer 100% 1254 15KB/s 00:00 sftp quit", "sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd /<Path to certificates> sftp> get aduser1.pfx Fetching <Path to certificates>/aduser1.pfx to aduser1.pfx <Path to certificates>/aduser1.pfx 100% 1254 15KB/s 00:00 sftp quit", "ipa-advise config-client-for-smart-card-auth > sc_client.sh", "ipa-advise config-server-for-smart-card-auth > sc_server.sh", "sh -x sc_server.sh adcs-winserver-ca.cer", "scp sc_client.sh [email protected]:/root Password: sc_client.sh 100% 2857 1.6MB/s 00:00", "scp adcs-winserver-ca.cer [email protected]:/root Password: adcs-winserver-ca.cer 100% 1254 952.0KB/s 00:00", "sh -x sc_client.sh adcs-winserver-ca.cer", "openssl pkcs12 -in aduser1.pfx -out aduser1_cert_only.pem -clcerts -nodes Enter Import Password:", "openssl pkcs12 -in adduser1.pfx -nocerts -out adduser1.pem > aduser1.key", "openssl pkcs12 -in adduser1.pfx -clcerts -nokeys -out aduser1_cert_only.pem > aduser1.crt", "yum -y install opensc gnutls-utils", "systemctl start pcscd", "systemctl status pcscd", "pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:", "pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name", "pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name", "pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init -F", "vim /etc/sssd/sssd.conf", "[pam] p11_child_timeout = 60", "[domain/IDM.EXAMPLE.COM] krb5_auth_timeout = 60" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/configuring-certificates-issued-by-adcs-for-smart-card-authentication-in-idm_working-with-idm-certificates
4.4. Diagnosing and Correcting Problems in a Cluster
4.4. Diagnosing and Correcting Problems in a Cluster For information about diagnosing and correcting problems in a cluster, contact an authorized Red Hat support representative.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-admin-problems-conga-ca
7.221. samba4
7.221. samba4 7.221.1. RHSA-2013:0506 - Moderate: samba4 security, bug fix and enhancement update Updated samba4 packages that fix one security issue, multiple bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Samba is an open-source implementation of the Server Message Block (SMB) or Common Internet File System (CIFS) protocol, which allows PC-compatible machines to share files, printers, and other information. Note The samba4 packages have been upgraded to upstream version 4.0.0, which provides a number of bug fixes and enhancements over the previous version. In particular, it improves interoperability with Active Directory (AD) domains. SSSD now uses the libndr-krb5pac library to parse the Privilege Attribute Certificate (PAC) issued by an AD Key Distribution Center (KDC). The Cross Realm Kerberos Trust functionality provided by Identity Management, which relies on the capabilities of the samba4 client and server libraries, is included as a Technology Preview. This functionality uses the libndr-nbt library to prepare Connection-less Lightweight Directory Access Protocol (CLDAP) messages. Additionally, various improvements have been made to the Local Security Authority (LSA) and Net Logon services to allow verification of trust from a Windows system. Because the Cross Realm Kerberos Trust functionality is considered a Technology Preview, selected samba4 components are considered to be a Technology Preview. For more information on which Samba packages are considered a Technology Preview, refer to Table 5.1, "Samba4 Package Support" in the Release Notes . (BZ# 766333 , BZ# 882188 ) Security Fix CVE-2012-1182 A flaw was found in the Samba suite's Perl-based DCE/RPC IDL (PIDL) compiler, used to generate code to handle RPC calls. This could result in code generated by the PIDL compiler failing to sufficiently protect against buffer overflows. Bug Fix BZ# 878564 Prior to this update, if the Active Directory (AD) server was rebooted, Winbind sometimes failed to reconnect when requested by "wbinfo -n" or "wbinfo -s" commands. Consequently, looking up users using the wbinfo tool failed. This update applies upstream patches to fix this problem and now looking up a Security Identifier (SID) for a username, or a username for a given SID, works as expected after a domain controller is rebooted. All users of samba4 are advised to upgrade to these updated packages, which fix these issues and add these enhancements. Warning: If you upgrade from Red Hat Enterprise Linux 6.3 to Red Hat Enterprise Linux 6.4 and you have Samba in use, you should make sure that you uninstall the package named "samba4" to avoid conflicts during the upgrade.
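As an illustrative sketch of the warning above, the package check and removal on a Red Hat Enterprise Linux 6.3 system could look like this (run as root; the glob assumes the default samba4 package naming):
# Check which samba4 packages are installed before starting the upgrade to 6.4
rpm -qa 'samba4*'
# Remove them to avoid conflicts during the upgrade
yum remove 'samba4*'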
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/samba4
Chapter 3. Differences between OpenShift Container Platform 3 and 4
Chapter 3. Differences between OpenShift Container Platform 3 and 4 OpenShift Container Platform 4.17 introduces architectural changes and enhancements. The procedures that you used to manage your OpenShift Container Platform 3 cluster might not apply to OpenShift Container Platform 4. For information on configuring your OpenShift Container Platform 4 cluster, review the appropriate sections of the OpenShift Container Platform documentation. For information on new features and other notable technical changes, review the OpenShift Container Platform 4.17 release notes . It is not possible to upgrade your existing OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. You must start with a new OpenShift Container Platform 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. 3.1. Architecture With OpenShift Container Platform 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed OpenShift Container Platform on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates. OpenShift Container Platform 4 represents a significant change in the way that OpenShift Container Platform clusters are deployed and managed. OpenShift Container Platform 4 includes new technologies and functionality, such as Operators, machine sets, and Red Hat Enterprise Linux CoreOS (RHCOS), which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling. Beginning with OpenShift Container Platform 4.13, RHCOS now uses Red Hat Enterprise Linux (RHEL) 9.2 packages. This enhancement enables the latest fixes and features as well as the latest hardware support and driver updates. For more information about how this upgrade to RHEL 9.2 might affect your options configuration and services as well as driver and container support, see the RHCOS now uses RHEL 9.2 in the OpenShift Container Platform 4.13 release notes . For more information, see OpenShift Container Platform architecture . Immutable infrastructure OpenShift Container Platform 4 uses Red Hat Enterprise Linux CoreOS (RHCOS), which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. RHCOS is an immutable container host, rather than a customizable operating system like RHEL. RHCOS enables OpenShift Container Platform 4 to manage and automate the deployment of the underlying container host. RHCOS is a part of OpenShift Container Platform, which means that everything runs inside a container and is deployed using OpenShift Container Platform. In OpenShift Container Platform 4, control plane nodes must run RHCOS, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in OpenShift Container Platform 3. For more information, see Red Hat Enterprise Linux CoreOS (RHCOS) . Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically. 
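As a hypothetical illustration of this Operator-driven self-management, you can list the cluster Operators and their status from the OpenShift CLI (assumes an authenticated oc session with sufficient privileges):
# List all cluster Operators with their version, availability, and degraded status
oc get clusteroperators
# Inspect one Operator in detail, for example the authentication Operator
oc describe clusteroperator authentication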
For more information, see Understanding Operators . 3.2. Installation and upgrade Installation process To install OpenShift Container Platform 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster. In OpenShift Container Platform 4.17, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, Red Hat Enterprise Linux CoreOS (RHCOS) systems are managed by the Machine Config Operator (MCO) that runs in the OpenShift Container Platform cluster. For more information, see Installation process . If you want to add Red Hat Enterprise Linux (RHEL) worker machines to your OpenShift Container Platform 4.17 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see Adding RHEL compute machines to an OpenShift Container Platform cluster . Infrastructure options In OpenShift Container Platform 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, OpenShift Container Platform 4 offers an option to deploy a cluster on infrastructure that the OpenShift Container Platform installation program provisions and the cluster maintains. For more information, see OpenShift Container Platform installation overview . Upgrading your cluster In OpenShift Container Platform 3.11, you upgraded your cluster by running Ansible playbooks. In OpenShift Container Platform 4.17, the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes. You can easily upgrade your cluster by using the web console or by using the oc adm upgrade command from the OpenShift CLI and the Operators will automatically upgrade themselves. If your OpenShift Container Platform 4.17 cluster has RHEL worker machines, then you will still need to run an Ansible playbook to upgrade those worker machines. For more information, see Updating clusters . 3.3. Migration considerations Review the changes and other considerations that might affect your transition from OpenShift Container Platform 3.11 to OpenShift Container Platform 4. 3.3.1. Storage considerations Review the following storage changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.17. Local volume persistent storage Local storage is only supported by using the Local Storage Operator in OpenShift Container Platform 4.17. It is not supported to use the local provisioner method from OpenShift Container Platform 3.11. For more information, see Persistent storage using local volumes . FlexVolume persistent storage The FlexVolume plugin location changed from OpenShift Container Platform 3.11. The new location in OpenShift Container Platform 4.17 is /etc/kubernetes/kubelet-plugins/volume/exec . Attachable FlexVolume plugins are no longer supported. For more information, see Persistent storage using FlexVolume . Container Storage Interface (CSI) persistent storage Persistent storage using the Container Storage Interface (CSI) was Technology Preview in OpenShift Container Platform 3.11. OpenShift Container Platform 4.17 ships with several CSI drivers . You can also install your own driver. 
For more information, see Persistent storage using the Container Storage Interface (CSI) . Red Hat OpenShift Data Foundation OpenShift Container Storage 3, which is available for use with OpenShift Container Platform 3.11, uses Red Hat Gluster Storage as the backing storage. Red Hat OpenShift Data Foundation 4, which is available for use with OpenShift Container Platform 4, uses Red Hat Ceph Storage as the backing storage. For more information, see Persistent storage using Red Hat OpenShift Data Foundation and the interoperability matrix article. Unsupported persistent storage options Support for the following persistent storage options from OpenShift Container Platform 3.11 has changed in OpenShift Container Platform 4.17: GlusterFS is no longer supported. CephFS as a standalone product is no longer supported. Ceph RBD as a standalone product is no longer supported. If you used one of these in OpenShift Container Platform 3.11, you must choose a different persistent storage option for full support in OpenShift Container Platform 4.17. For more information, see Understanding persistent storage . Migration of in-tree volumes to CSI drivers OpenShift Container Platform 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In OpenShift Container Platform 4.17, CSI drivers are the new default for the following in-tree volume types: Amazon Web Services (AWS) Elastic Block Storage (EBS) Azure Disk Azure File Google Cloud Platform Persistent Disk (GCP PD) OpenStack Cinder VMware vSphere Note As of OpenShift Container Platform 4.13, VMware vSphere is not available by default. However, you can opt into VMware vSphere. All aspects of volume lifecycle, such as creation, deletion, mounting, and unmounting, are handled by the CSI driver. For more information, see CSI automatic migration . 3.3.2. Networking considerations Review the following networking changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.17. Network isolation mode The default network isolation mode for OpenShift Container Platform 3.11 was ovs-subnet , though users frequently switched to use ovs-multitenant . The default network isolation mode for OpenShift Container Platform 4.17 is controlled by a network policy. If your OpenShift Container Platform 3.11 cluster used the ovs-subnet or ovs-multitenant mode, it is recommended to switch to a network policy for your OpenShift Container Platform 4.17 cluster. Network policies are supported upstream, are more flexible, and they provide the functionality that ovs-multitenant does. If you want to maintain the ovs-multitenant behavior while using a network policy in OpenShift Container Platform 4.17, follow the steps to configure multitenant isolation using network policy . For more information, see About network policy . OVN-Kubernetes as the default networking plugin in Red Hat OpenShift Networking In OpenShift Container Platform 3.11, OpenShift SDN was the default networking plugin in Red Hat OpenShift Networking. In OpenShift Container Platform 4.17, OVN-Kubernetes is now the default networking plugin. For more information on the removal of the OpenShift SDN network plugin and why it has been removed, see OpenShiftSDN CNI removal in OCP 4.17 . 
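To check which network plugin an existing OpenShift Container Platform 4 cluster is using, for example to confirm that it is not still running the removed OpenShift SDN plugin, an illustrative query is (assumes an authenticated oc session):
# Print the configured cluster network plugin; OVNKubernetes is expected on 4.17
oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'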
For information on OVN-Kubernetes features that are similar to features in the OpenShift SDN plugin, see: Configuring an egress IP address Configuring an egress firewall for a project Enabling multicast for a project Deploying an egress router pod in redirect mode Configuring multitenant isolation with network policy Warning You should install OpenShift Container Platform 4 with the OVN-Kubernetes network plugin because it is not possible to upgrade a cluster to OpenShift Container Platform 4.17 if it is using the OpenShift SDN network plugin. 3.3.3. Logging considerations Review the following logging changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.17. Deploying OpenShift Logging OpenShift Container Platform 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource. Aggregated logging data You cannot transition your aggregate logging data from OpenShift Container Platform 3.11 into your new OpenShift Container Platform 4 cluster. Unsupported logging configurations Some logging configurations that were available in OpenShift Container Platform 3.11 are no longer supported in OpenShift Container Platform 4.17. 3.3.4. Security considerations Review the following security changes to consider when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.17. Unauthenticated access to discovery endpoints In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/* ). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.17. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network. Identity providers Configuration for identity providers has changed for OpenShift Container Platform 4, including the following notable changes: The request header identity provider in OpenShift Container Platform 4.17 requires mutual TLS, where in OpenShift Container Platform 3.11 it did not. The configuration of the OpenID Connect identity provider was simplified in OpenShift Container Platform 4.17. It now obtains data, which previously had to be specified in OpenShift Container Platform 3.11, from the provider's /.well-known/openid-configuration endpoint. For more information, see Understanding identity provider configuration . OAuth token storage format Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information. Default security context constraints The restricted security context constraints (SCC) in OpenShift Container Platform 4 can no longer be accessed by any authenticated user as the restricted SCC in OpenShift Container Platform 3.11. The broad authenticated access is now granted to the restricted-v2 SCC, which is more restrictive than the old restricted SCC. The restricted SCC still exists; users that want to use it must be specifically given permissions to do so. For more information, see Managing security context constraints . 3.3.5. 
Monitoring considerations Review the following monitoring changes when transitioning from OpenShift Container Platform 3.11 to OpenShift Container Platform 4.17. You cannot migrate Hawkular configurations and metrics to Prometheus. Alert for monitoring infrastructure availability The default alert that triggers to ensure the availability of the monitoring infrastructure was called DeadMansSwitch in OpenShift Container Platform 3.11. This was renamed to Watchdog in OpenShift Container Platform 4. If you had PagerDuty integration set up with this alert in OpenShift Container Platform 3.11, you must set up the PagerDuty integration for the Watchdog alert in OpenShift Container Platform 4. For more information, see Configuring alert routing for default platform alerts .
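As a hedged illustration of the cluster-managed update flow described in the upgrade section above (assumes cluster-admin access and a recent oc client):
# Show the current version, the update channel, and any available updates
oc adm upgrade
# Start an update to the latest version available in the current channel
oc adm upgrade --to-latest=true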
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/planning-migration-3-4
Chapter 2. Understanding API compatibility guidelines
Chapter 2. Understanding API compatibility guidelines Important This guidance does not cover layered OpenShift Container Platform offerings. 2.1. API compatibility guidelines Red Hat recommends that application developers adopt the following principles in order to improve compatibility with OpenShift Container Platform: Use APIs and components with support tiers that match the application's need. Build applications using the published client libraries where possible. Applications are only guaranteed to run correctly if they execute in an environment that is as new as the environment it was built to execute against. An application that was built for OpenShift Container Platform 4.14 is not guaranteed to function properly on OpenShift Container Platform 4.13. Do not design applications that rely on configuration files provided by system packages or other components. These files can change between versions unless the upstream community is explicitly committed to preserving them. Where appropriate, depend on any Red Hat provided interface abstraction over those configuration files in order to maintain forward compatibility. Direct file system modification of configuration files is discouraged, and users are strongly encouraged to integrate with an Operator provided API where available to avoid dual-writer conflicts. Do not depend on API fields prefixed with unsupported<FieldName> or annotations that are not explicitly mentioned in product documentation. Do not depend on components with shorter compatibility guarantees than your application. Do not perform direct storage operations on the etcd server. All etcd access must be performed via the api-server or through documented backup and restore procedures. Red Hat recommends that application developers follow the compatibility guidelines defined by Red Hat Enterprise Linux (RHEL). OpenShift Container Platform strongly recommends the following guidelines when building an application or hosting an application on the platform: Do not depend on a specific Linux kernel or OpenShift Container Platform version. Avoid reading from proc , sys , and debug file systems, or any other pseudo file system. Avoid using ioctls to directly interact with hardware. Avoid direct interaction with cgroups in order to not conflict with OpenShift Container Platform host-agents that provide the container execution environment. Note During the lifecycle of a release, Red Hat makes commercially reasonable efforts to maintain API and application operating environment (AOE) compatibility across all minor releases and z-stream releases. If necessary, Red Hat might make exceptions to this compatibility goal for critical impact security or other significant issues. 2.2. API compatibility exceptions The following are exceptions to compatibility in OpenShift Container Platform: RHEL CoreOS file system modifications not made with a supported Operator No assurances are made at this time that a modification made to the host operating file system is preserved across minor releases except for where that modification is made through the public interface exposed via a supported Operator, such as the Machine Config Operator or Node Tuning Operator. Modifications to cluster infrastructure in cloud or virtualized environments No assurances are made at this time that a modification to the cloud hosting environment that supports the cluster is preserved except for where that modification is made through a public interface exposed in the product or is documented as a supported configuration. 
Cluster infrastructure providers are responsible for preserving their cloud or virtualized infrastructure except for where they delegate that authority to the product through an API. Functional defaults between an upgraded cluster and a new installation No assurances are made at this time that a new installation of a product minor release will have the same functional defaults as a version of the product that was installed with a prior minor release and upgraded to the equivalent version. For example, future versions of the product may provision cloud infrastructure with different defaults than prior minor versions. In addition, different default security choices may be made in future versions of the product than those made in past versions of the product. Past versions of the product will forward upgrade, but preserve legacy choices where appropriate specifically to maintain backwards compatibility. Usage of API fields that have the prefix "unsupported" or undocumented annotations Select APIs in the product expose fields with the prefix unsupported<FieldName> . No assurances are made at this time that usage of this field is supported across releases or within a release. Product support can request a customer to specify a value in this field when debugging specific problems, but its usage is not supported outside of that interaction. Usage of annotations on objects that are not explicitly documented are not assured support across minor releases. API availability per product installation topology The OpenShift distribution will continue to evolve its supported installation topology, and not all APIs in one install topology will necessarily be included in another. For example, certain topologies may restrict read/write access to particular APIs if they are in conflict with the product installation topology or not include a particular API at all if not pertinent to that topology. APIs that exist in a given topology will be supported in accordance with the compatibility tiers defined above. 2.3. API compatibility common terminology 2.3.1. Application Programming Interface (API) An API is a public interface implemented by a software program that enables it to interact with other software. In OpenShift Container Platform, the API is served from a centralized API server and is used as the hub for all system interaction. 2.3.2. Application Operating Environment (AOE) An AOE is the integrated environment that executes the end-user application program. The AOE is a containerized environment that provides isolation from the host operating system (OS). At a minimum, AOE allows the application to run in an isolated manner from the host OS libraries and binaries, but still share the same OS kernel as all other containers on the host. The AOE is enforced at runtime and it describes the interface between an application and its operating environment. It includes intersection points between the platform, operating system and environment, with the user application including projection of downward API, DNS, resource accounting, device access, platform workload identity, isolation among containers, isolation between containers and host OS. The AOE does not include components that might vary by installation, such as Container Network Interface (CNI) plugin selection or extensions to the product such as admission hooks. Components that integrate with the cluster at a level below the container environment might be subjected to additional variation between versions. 2.3.3. 
Compatibility in a virtualized environment Virtual environments emulate bare-metal environments such that unprivileged applications that run on bare-metal environments will run, unmodified, in corresponding virtual environments. Virtual environments present simplified abstracted views of physical resources, so some differences might exist. 2.3.4. Compatibility in a cloud environment OpenShift Container Platform might choose to offer integration points with a hosting cloud environment via cloud provider specific integrations. The compatibility of these integration points is specific to the guarantee provided by the native cloud vendor and its intersection with the OpenShift Container Platform compatibility window. Where OpenShift Container Platform provides an integration with a cloud environment natively as part of the default installation, Red Hat develops against stable cloud API endpoints to provide commercially reasonable support with forward looking compatibility that includes stable deprecation policies. Example areas of integration between the cloud provider and OpenShift Container Platform include, but are not limited to, dynamic volume provisioning, service load balancer integration, pod workload identity, dynamic management of compute, and infrastructure provisioned as part of initial installation. 2.3.5. Major, minor, and z-stream releases A Red Hat major release represents a significant step in the development of a product. Minor releases appear more frequently within the scope of a major release and represent deprecation boundaries that might impact future application compatibility. A z-stream release is an update to a minor release which provides a stream of continuous fixes to an associated minor release. API and AOE compatibility is never broken in a z-stream release except when this policy is explicitly overridden in order to respond to an unforeseen security impact. For example, in the release 4.13.2: 4 is the major release version 13 is the minor release version 2 is the z-stream release version 2.3.6. Extended user support (EUS) A minor release in an OpenShift Container Platform major release that has an extended support window for critical bug fixes. Users are able to migrate between EUS releases by incrementally adopting minor versions between EUS releases. It is important to note that the deprecation policy is defined across minor releases and not EUS releases. As a result, an EUS user might have to respond to a deprecation when migrating to a future EUS while sequentially upgrading through each minor release. 2.3.7. Developer Preview An optional product capability that is not officially supported by Red Hat, but is intended to provide a mechanism to explore early phase technology. By default, Developer Preview functionality is opt-in, and subject to removal at any time. Enabling a Developer Preview feature might render a cluster unsupportable dependent upon the scope of the feature. If you are a Red Hat customer or partner and have feedback about these developer preview versions, file an issue by using the OpenShift Bugs tracker . Do not use the formal Red Hat support service ticket process. You can read more about support handling in the following knowledge article . 2.3.8. Technology Preview An optional product capability that provides early access to upcoming product innovations to test functionality and provide feedback during the development process. 
The feature is not fully supported, might not be functionally complete, and is not intended for production use. Usage of a Technology Preview function requires explicit opt-in. Learn more about the Technology Preview Features Support Scope .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/api_overview/compatibility-guidelines
Using Eclipse 4.17
Using Eclipse 4.17 Red Hat Developer Tools 1 Installing Eclipse 4.17 and the first steps with the application Eva-Lotte Gebhardt [email protected] Olga Tikhomirova [email protected] Peter Macko Kevin Owen Yana Hontyk Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_eclipse_4.17/index
Chapter 4. Reviewing the results of JBoss Server Migration Tool execution
Chapter 4. Reviewing the results of JBoss Server Migration Tool execution 4.1. Review the migrated configuration files When the migration is complete, review the migrated server configuration files in the EAP_NEW_HOME /standalone/configuration/ and EAP_NEW_HOME /domain/configuration/ directories. Note that any original EAP_NEW_HOME target server configuration file names selected for migration are backed up and are now appended with .beforeMigration . The EAP_NEW_HOME target server configuration file names not appended with .beforeMigration are now updated with the content migrated from the EAP_PREVIOUS_HOME source server configuration. The original configuration files located in the EAP_PREVIOUS_HOME source server configuration directories remain untouched. The logging.properties and standalone-load-balancer.xml files in the target configuration directories remain untouched. If you choose to migrate all of the available configurations, you should see the following configuration files in the target server directories. Example: List of configuration files on the target server 4.2. Tracking migration task execution The JBoss Server Migration Tool begins each target server migration by executing a root task, which can then execute subtasks. Those subtasks can then also execute additional tasks and subtasks. As it executes, the tool tracks each migration task, along with any subtasks, and saves the results in a tree structure that is later used to build the reports. Each migration task is given a name, which consists of a task name concatenated with optional attributes using the following syntax. The name defines the task subject or type, and the attributes are used to distinguish between subtasks and sibling tasks. For example, all of the following are names to distinguish Jakarta Enterprise Beans subsystem update tasks. Since a migration task can be executed multiple times under different parent tasks, each task is stored in the tree using each of its parent task names, starting with root, separated by a > character. The task execution tree is used to build the migration reports. A task execution can result in one of the following statuses. Table 4.1. Server migration task execution statuses Status Description Success The task executed successfully. Skipped The task skipped the execution, either because it was not needed or because it was configured to be skipped. Fail The task execution failed. 4.3. Review the Task Summary log The Task Summary is generated and printed to the migration console and to the JBoss Server Migration Tool log file. It provides a high-level overview of the migration results, by component and subtask, as a hierarchical list. Additional resources See the appendix of this guide for an example Task Summary report . For more information about options to configure the task summary report, see Configuring the Task Summary log . 4.4. Review the JBoss Server Migration Tool reports The JBoss Server Migration Tool generates nicely formatted HTML and XML reports in the MIGRATION_TOOL_HOME/reports/ directory. These reports provide a detailed analysis of the migration process and how the target server was configured during the migration. The default names for these reports are migration-report.html and migration-report.xml . Each of these names is configurable. This section provides a brief overview of the content of these reports. The JBoss Server Migration Tool HTML report file. The JBoss Server Migration Tool XML report file. 
Additional resources For information about how to configure the reports, see Configuring reporting for JBoss Server Migration Tool . 4.4.1. JBoss Server Migration Tool HTML report The HTML report consists of three sections. Summary This section provides the execution start time, information about the source and target servers, and the result of the migration. Environment This section lists the environment properties that were used for the migration. Tasks This section, which includes collapsible subsections, provides statistics and a map of the executed migration tasks. Each task is listed by its name and is color-coded according to the status of the completion of the task: Green if it was successful. Red if it failed. Gray if it was skipped. Additional resources For an example HTML report , see the appendix of this guide. For configuration options for the HTML report, see Configuring the HTML Report . 4.4.2. JBoss Server Migration Tool XML report The XML Report is a low level report that provides all of the migration data gathered by the tool. It is formatted in a way that it can be imported into and manipulated by third-party spreadsheet or other data manipulation tools. Additional resources For an example XML report , see the appendix of this guide. For configuration options for the XML report, see Configuring the XML Report .
[ "ls EAP_NEW_HOME /standalone/configuration/ application-roles.properties application-roles.properties.beforeMigration application-users.properties application-users.properties.beforeMigration logging.properties mgmt-groups.properties mgmt-groups.properties.beforeMigration mgmt-users.properties mgmt-users.properties.beforeMigration standalone-full-ha.xml standalone-full-ha.xml.beforeMigration standalone-full.xml standalone-full.xml.beforeMigration standalone-ha.xml standalone-ha.xml.beforeMigration standalone-load-balancer.xml standalone-osgi.xml standalone-osgi.xml.beforeMigration standalone.xml standalone.xml.beforeMigration ls EAP_NEW_HOME /domain/configuration/ application-roles.properties application-roles.properties.beforeMigration application-users.properties application-users.properties.beforeMigration domain.xml domain.xml.beforeMigration host-master.xml host-master.xml.beforeMigration host-slave.xml host-slave.xml.beforeMigration host.xml host.xml.beforeMigration logging.properties mgmt-groups.properties mgmt-groups.properties.beforeMigration mgmt-users.properties mgmt-users.properties.beforeMigration", "TASK_NAME ( ATTRIBUTE_1_NAME = ATTRIBUTE_1_VALUE , ATTRIBUTE_2_NAME = ATTRIBUTE_2_VALUE , ... )", "subsystem.ejb3.update subsystem.ejb3.update.activate-ejb3-remoting-http-connector(resource=/subsystem=ejb3) subsystem.ejb3.update.setup-default-sfsb-passivation-disabled-cache(resource=/subsystem=ejb3) subsystem.ejb3.update.add-infinispan-passivation-store-and-distributable-cache(resource=/subsystem=ejb3)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_the_jboss_server_migration_tool/assembly_review-results-server-migration-tool-execution_server-migration-tool
Chapter 8. Using Fernet keys for encryption in the overcloud
Chapter 8. Using Fernet keys for encryption in the overcloud Fernet is the default token provider, which replaces uuid . You can review your Fernet deployment and test that tokens are working correctly. Fernet uses three types of keys, which are stored in /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys . The highest-numbered directory contains the primary key, which generates new tokens and decrypts existing tokens. 8.1. Reviewing the Fernet deployment To test that Fernet tokens are working correctly, retrieve the IP address of the Controller node, SSH into the Controller node, and review the settings of the token driver and provider. Procedure Retrieve the IP address of the Controller node: SSH into the Controller node: Retrieve the values of the token driver and provider settings: Test the Fernet provider: The result includes the long Fernet token.
[ "[stack@director ~]USD source ~/stackrc [stack@director ~]USD openstack server list -------------------------------------- ------------------------- -------- ---------------------+ | ID | Name | Status | Networks | -------------------------------------- ------------------------- -------- ---------------------+ | 756fbd73-e47b-46e6-959c-e24d7fb71328 | overcloud-controller-0 | ACTIVE | ctlplane=192.0.2.16 | | 62b869df-1203-4d58-8e45-fac6cd4cfbee | overcloud-novacompute-0 | ACTIVE | ctlplane=192.0.2.8 | -------------------------------------- ------------------------- -------- ---------------------+", "[tripleo-admin@overcloud-controller-0 ~]USD ssh [email protected]", "[tripleo-admin@overcloud-controller-0 ~]USD sudo crudini --get /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf token driver sql [tripleo-admin@overcloud-controller-0 ~]USD sudo crudini --get /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf token provider fernet", "[tripleo-admin@overcloud-controller-0 ~]USD exit [stack@director ~]USD source ~/overcloudrc [stack@director ~]USD openstack token issue ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2016-09-20 05:26:17+00:00 | | id | gAAAAABX4LppE8vaiFZ992eah2i3edpO1aDFxlKZq6a_RJzxUx56QVKORrmW0-oZK3-Xuu2wcnpYq_eek2SGLz250eLpZOzxKBR0GsoMfxJU8mEFF8NzfLNcbuS-iz7SV-N1re3XEywSDG90JcgwjQfXW-8jtCm-n3LL5IaZexAYIw059T_-cd8 | | project_id | 26156621d0d54fc39bf3adb98e63b63d | | user_id | 397daf32cadd490a8f3ac23a626ac06c | ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly-using-fernet-keys-for-encryption-in-the-overcloud_security_and_hardening
Chapter 13. Networking for hosted control planes
Chapter 13. Networking for hosted control planes For standalone OpenShift Container Platform, proxy support is mainly about ensuring that workloads in the cluster are configured to use the HTTP or HTTPS proxy to access external services, honoring the NO_PROXY setting if one is configured, and accepting any trust bundle that is configured for the proxy. For hosted control planes, proxy support involves the following additional use cases. 13.1. Control plane workloads that need to access external services Operators that run in the control plane need to access external services through the proxy that is configured for the hosted cluster. The proxy is usually accessible only through the data plane. The control plane workloads are as follows: The Control Plane Operator needs to validate and obtain endpoints from certain identity providers when it creates the OAuth server configuration. The OAuth server needs non-LDAP identity provider access. The OpenShift API server handles image registry metadata import. The Ingress Operator needs access to validate external canary routes. In a hosted cluster, you must send traffic that originates from the Control Plane Operator, Ingress Operator, OAuth server, and OpenShift API server pods through the data plane to the configured proxy and then to its final destination. Note Some operations are not possible when a hosted cluster is reduced to zero compute nodes; for example, when you import OpenShift image streams from a registry that requires proxy access. 13.2. Compute nodes that need to access an ignition endpoint When compute nodes need a proxy to access the ignition endpoint, you must configure the proxy in the user-data stub that is configured on the compute node when it is created. For cases where machines need a proxy to access the ignition URL, the proxy configuration is included in the stub. The stub resembles the following example: --- {"ignition":{"config":{"merge":[{"httpHeaders":[{"name":"Authorization","value":"Bearer ..."},{"name":"TargetConfigVersionHash","value":"a4c1b0dd"}],"source":"https://ignition.controlplanehost.example.com/ignition","verification":{}}],"replace":{"verification":{}}},"proxy":{"httpProxy":"http://proxy.example.org:3128", "httpsProxy":"https://proxy.example.org:3129", "noProxy":"host.example.org"},"security":{"tls":{"certificateAuthorities":[{"source":"...","verification":{}}]}},"timeouts":{},"version":"3.2.0"},"passwd":{},"storage":{},"systemd":{}} --- 13.3. Compute nodes that need to access the API server This use case is relevant to self-managed hosted control planes, not to Red Hat OpenShift Service on AWS with hosted control planes. For communication with the control plane, hosted control planes uses a local proxy in every compute node that listens on IP address 172.20.0.1 and forwards traffic to the API server. If an external proxy is required to access the API server, that local proxy needs to use the external proxy to send traffic out. When a proxy is not needed, hosted control planes uses haproxy for the local proxy, which only forwards packets via TCP. When a proxy is needed, hosted control planes uses a custom proxy, control-plane-operator-kubernetes-default-proxy , to send traffic through the external proxy. 13.4. Management clusters that need external access The HyperShift Operator has a controller that monitors the OpenShift global proxy configuration of the management cluster and sets the proxy environment variables on its own deployment. 
Control plane deployments that need external access are configured with the proxy environment variables of the management cluster. 13.5. Additional resources Configuring the cluster-wide proxy
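If you want to confirm that the HyperShift Operator picked up the proxy configuration of the management cluster, you can list the environment variables that are set on its deployment. This is a minimal sketch that assumes the Operator runs as a deployment named operator in the hypershift namespace; adjust both names to match your installation.
# List the environment variables on the HyperShift Operator deployment
# and filter for the proxy-related entries.
oc -n hypershift set env deployment/operator --list | grep -i proxy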
[ "--- {\"ignition\":{\"config\":{\"merge\":[{\"httpHeaders\":[{\"name\":\"Authorization\",\"value\":\"Bearer ...\"},{\"name\":\"TargetConfigVersionHash\",\"value\":\"a4c1b0dd\"}],\"source\":\"https://ignition.controlplanehost.example.com/ignition\",\"verification\":{}}],\"replace\":{\"verification\":{}}},\"proxy\":{\"httpProxy\":\"http://proxy.example.org:3128\", \"httpsProxy\":\"https://proxy.example.org:3129\", \"noProxy\":\"host.example.org\"},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"...\",\"verification\":{}}]}},\"timeouts\":{},\"version\":\"3.2.0\"},\"passwd\":{},\"storage\":{},\"systemd\":{}} ---" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/networking-for-hosted-control-planes
Managing resource use
Managing resource use Red Hat OpenShift GitOps 1.11 Configuring resource requests and limits for Argo CD workloads Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/managing_resource_use/index
8.4.7. System Upgrades and pacemaker_remote
8.4.7. System Upgrades and pacemaker_remote As of Red Hat Enterprise Linux 6.8, if the pacemaker_remote service is stopped on an active Pacemaker Remote node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource's monitor timeout, the cluster will consider the monitor operation as failed. If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote . Warning For Red Hat Enterprise Linux release 6.7 and earlier, if pacemaker_remote stops on a node that is currently integrated into a cluster, the cluster will fence that node. If the stop happens automatically as part of a yum update process, the system could be left in an unusable state (particularly if the kernel is also being upgraded at the same time as pacemaker_remote ). For Red Hat Enterprise Linux release 6.7 and earlier, you must use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote . Use the following procedure to take a node out of a cluster when performing maintenance on a node running pacemaker_remote : Stop the node's connection resource with the pcs resource disable resourcename command, which will move all services off the node. For guest nodes, this will also stop the VM, so the VM must be started outside the cluster (for example, using virsh ) to perform any maintenance. Perform the desired maintenance. When ready to return the node to the cluster, re-enable the resource with the pcs resource enable command.
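The procedure maps to a short command sequence. This is a minimal sketch, assuming the Pacemaker Remote node is managed through a connection resource named remote-node1 ; replace the name with your own resource:
# Move all services off the node and stop the connection resource.
pcs resource disable remote-node1
# ...perform the software upgrade or other maintenance on the node...
# Return the node to the cluster once maintenance is complete.
pcs resource enable remote-node1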
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/remotenode_upgrade
Chapter 10. Managing Activation Keys
Chapter 10. Managing Activation Keys Activation keys provide a method to automate system registration and subscription attachment. You can create multiple keys and associate them with different environments and Content Views. For example, you might create a basic activation key with a subscription for Red Hat Enterprise Linux workstations and associate it with Content Views from a particular environment. You can use activation keys during content host registration to improve the speed, simplicity and consistency of the process. Note that activation keys are used only when hosts are registered. If changes are made to an activation key, it is applicable only to hosts that are registered with the amended activation key in the future. The changes are not made to existing hosts. Activation keys can define the following properties for content hosts: Associated subscriptions and subscription attachment behavior Available products and repositories A life cycle environment and a Content View Host collection membership System purpose Content View Conflicts between Host Creation and Registration When you provision a host, Satellite uses provisioning templates and other content from the Content View that you set in the host group or host settings. When the host is registered, the Content View from the activation key overwrites the original Content View from the host group or host settings. Then Satellite uses the Content View from the activation key for every future task, for example, rebuilding a host. When you rebuild a host, ensure that you set the Content View that you want to use in the activation key and not in the host group or host settings. Using the Same Activation Key with Multiple Content Hosts You can apply the same activation key to multiple content hosts if it contains enough subscriptions. However, activation keys set only the initial configuration for a content host. When the content host is registered to an organization, the organization's content can be attached to the content host manually. Using Multiple Activation Keys with a Content Host A content host can be associated with multiple activation keys that are combined to define the host settings. In case of conflicting settings, the last specified activation key takes precedence. You can specify the order of precedence by setting a host group parameter as follows: 10.1. Creating an Activation Key You can use activation keys to define a specific set of subscriptions to attach to hosts during registration. The subscriptions that you add to an activation key must be available within the associated Content View. Subscription Manager attaches subscriptions differently depending on the following factors: Are there any subscriptions associated with the activation key? Is the auto-attach option enabled? For Red Hat Enterprise Linux 8 hosts: Is there system purpose set on the activation key? Note that Satellite automatically attaches subscriptions only for the products installed on a host. For subscriptions that do not list products installed on Red Hat Enterprise Linux by default, such as the Extended Update Support (EUS) subscription, use an activation key specifying the required subscriptions and with the auto-attach disabled. Based on the factors, there are three possible scenarios for subscribing with activation keys: Activation key that attaches subscriptions automatically. 
With no subscriptions specified and auto-attach enabled, hosts using the activation key search for the best fitting subscription from the ones provided by the Content View associated with the activation key. This is similar to entering the subscription-manager --auto-attach command. For Red Hat Enterprise Linux 8 hosts, you can configure the activation key to set system purpose on hosts during registration to enhance the automatic subscriptions attachment. Activation key providing a custom set of subscription for auto-attach. If there are subscriptions specified and auto-attach is enabled, hosts using the activation key select the best fitting subscription from the list specified in the activation key. Setting system purpose on the activation key does not affect this scenario. Activation key with the exact set of subscriptions. If there are subscriptions specified and auto-attach is disabled, hosts using the activation key are associated with all subscriptions specified in the activation key. Setting system purpose on the activation key does not affect this scenario. Custom Products If a custom product, typically containing content not provided by Red Hat, is assigned to an activation key, this product is always enabled for the registered content host regardless of the auto-attach setting. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Activation keys and click Create Activation Key . In the Name field, enter the name of the activation key. If you want to set a limit, clear the Unlimited hosts checkbox, and in the Limit field, enter the maximum number of systems you can register with the activation key. If you want unlimited hosts to register with the activation key, ensure the Unlimited Hosts checkbox is selected. Optional: In the Description field, enter a description for the activation key. From the Environment list, select the environment to use. From the Content View list, select a Content View to use. If you intend to use the deprecated Katello Agent instead of Remote Execution , the Content View must contain the Satellite Client 6 repository because it contains the katello-agent package. If Simple Content Access (SCA) is enabled: In the Repository Sets tab, ensure only your named repository is enabled. If SCA is not enabled: Click the Subscriptions tab, then click the Add submenu. Click the checkbox under the subscription you created before. Click Add Selected . Click Save . Optional: For Red Hat Enterprise Linux 8 hosts, in the System Purpose section, you can configure the activation key with system purpose to set on hosts during registration to enhance subscriptions auto attachment. CLI procedure Create the activation key: Optional: For Red Hat Enterprise Linux 8 hosts, enter the following command to configure the activation key with system purpose to set on hosts during registration to enhance subscriptions auto attachment. Obtain a list of your subscription IDs: Attach the Red Hat Enterprise Linux subscription UUID to the activation key: List the product content associated with the activation key: If Simple Content Access (SCA) is enabled: If SCA is not enabled: Override the default auto-enable status for the Satellite Client 6 repository. The default status is set to disabled. To enable, enter the following command: 10.2. Updating Subscriptions Associated with an Activation Key Use this procedure to change the subscriptions associated with an activation key. 
To use the CLI instead of the Satellite web UI, see the CLI procedure . Note that changes to an activation key apply only to machines provisioned after the change. To update subscriptions on existing content hosts, see Section 5.7, "Updating Red Hat Subscriptions on Multiple Hosts" . Procedure In the Satellite web UI, navigate to Content > Activation keys and click the name of the activation key. Click the Subscriptions tab. To remove subscriptions, select List/Remove , and then select the checkboxes to the left of the subscriptions to be removed and then click Remove Selected . To add subscriptions, select Add , and then select the checkboxes to the left of the subscriptions to be added and then click Add Selected . Click the Repository Sets tab and review the repositories' status settings. To enable or disable a repository, select the checkbox for a repository and then change the status using the Select Action list. Click the Details tab, select a Content View for this activation key, and then click Save . CLI procedure List the subscriptions that the activation key currently contains: Remove the required subscription from the activation key: For the --subscription-id option, you can use either the UUID or the ID of the subscription. Attach the new subscription to the activation key: For the --subscription-id option, you can use either the UUID or the ID of the subscription. List the product content associated with the activation key: Override the default auto-enable status for the required repository: For the --value option, enter 1 for enable, 0 for disable. 10.3. Using Activation Keys for Host Registration You can use activation keys to complete the following tasks: Registering new hosts during provisioning through Red Hat Satellite. The kickstart provisioning templates in Red Hat Satellite contain commands to register the host using an activation key that is defined when creating a host. Registering existing Red Hat Enterprise Linux hosts. Configure Subscription Manager to use Satellite Server for registration and specify the activation key when running the subscription-manager register command. You can register hosts with Satellite using the host registration feature, the Satellite API, or Hammer CLI. Procedure In the Satellite web UI, navigate to Hosts > Register Host . Click Generate to create the registration command. Click on the files icon to copy the command to your clipboard. Log in to the host you want to register and run the previously generated command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. CLI procedure Generate the host registration command using the Hammer CLI: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Log in to the host you want to register and run the previously generated command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. API procedure Generate the host registration command using the Satellite API: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in the Content Management guide. To enter a password as a command-line argument, use username:password syntax.
Keep in mind this can save the password in the shell history. For more information about registration, see Registering a Host to Red Hat Satellite in Managing Hosts . Log in to the host you want to register and run the previously generated command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. Multiple Activation Keys You can use multiple activation keys when registering a content host. You can then create activation keys for specific subscription sets and combine them according to content host requirements. For example, the following command registers a content host to your organization with both VDC and OpenShift subscriptions: Settings Conflicts If there are conflicting settings in activation keys, the rightmost key takes precedence. Settings that conflict: Service Level , Release Version , Environment , Content View , and Product Content . Settings that do not conflict and the host gets the union of them: Subscriptions and Host Collections . Settings that influence the behavior of the key itself and not the host configuration: Content Host Limit and Auto-Attach . 10.4. Enabling Auto-Attach When auto-attach is enabled on an activation key and there are subscriptions associated with the key, the subscription management service selects and attaches the best-matched associated subscriptions based on a set of criteria like currently installed products, architecture, and preferences like service level. You can enable auto-attach and have no subscriptions associated with the key. This type of key is commonly used to register virtual machines when you do not want the virtual machine to consume a physical subscription, but to inherit a host-based subscription from the hypervisor. For more information, see Configuring Virtual Machine Subscriptions in Red Hat Satellite . Auto-attach is enabled by default. Disable the option if you want to force attach all subscriptions associated with the activation key. Procedure In the Satellite web UI, navigate to Content > Activation Keys . Click the activation key name that you want to edit. Click the Subscriptions tab. Click the edit icon next to Auto-Attach . Select or clear the checkbox to enable or disable auto-attach. Click Save . CLI procedure Enter the following command to enable auto-attach on the activation key: 10.5. Setting the Service Level You can configure an activation key to define a default service level for the new host created with the activation key. Setting a default service level selects only the matching subscriptions to be attached to the host. For example, if the default service level on an activation key is set to Premium, only subscriptions with premium service levels are attached to the host upon registration. Procedure In the Satellite web UI, navigate to Content > Activation Keys . Click the activation key name you want to edit. Click the edit icon next to Service Level . Select the required service level from the list. The list only contains service levels available to the activation key. Click Save . CLI procedure Enter the following command to set a default service level to Premium on the activation key:
[ "hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name \" My_Activation_Key \" --value \" name_of_first_key \", \" name_of_second_key \",", "hammer activation-key create --name \" My_Activation_Key \" --unlimited-hosts --description \" Example Stack in the Development Environment \" --lifecycle-environment \" Development \" --content-view \" Stack \" --organization \" My_Organization \"", "hammer activation-key update --organization \" My_Organization \" --name \" My_Activation_Key \" --service-level \" Standard \" --purpose-usage \" Development/Test \" --purpose-role \" Red Hat Enterprise Linux Server \" --purpose-addons \" addons \"", "hammer subscription list --organization \" My_Organization \"", "hammer activation-key add-subscription --name \" My_Activation_Key \" --subscription-id My_Subscription_ID --organization \" My_Organization \"", "hammer activation-key product-content --content-access-mode-all true --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key product-content --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key content-override --name \" My_Activation_Key \" --content-label rhel-7-server-satellite-client-6-rpms --value 1 --organization \" My_Organization \"", "hammer activation-key subscriptions --name My_Activation_Key --organization \" My_Organization \"", "hammer activation-key remove-subscription --name \" My_Activation_Key \" --subscription-id ff808181533518d50152354246e901aa --organization \" My_Organization \"", "hammer activation-key add-subscription --name \" My_Activation_Key \" --subscription-id ff808181533518d50152354246e901aa --organization \" My_Organization \"", "hammer activation-key product-content --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key content-override --name \" My_Activation_Key \" --content-label content_label --value 1 --organization \" My_Organization \"", "hammer host-registration generate-command --activation-keys \" My_Activation_Key \"", "hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'", "subscription-manager register --org=\" My_Organization \" --activationkey=\"ak-VDC,ak-OpenShift\"", "hammer activation-key update --name \" My_Activation_Key \" --organization \" My_Organization \" --auto-attach true", "hammer activation-key update --name \" My_Activation_Key \" --organization \" My_Organization \" --service-level premium" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/managing_activation_keys_content-management
Configuring and managing networking
Configuring and managing networking Red Hat Enterprise Linux 9 Managing network interfaces and advanced networking features Red Hat Customer Content Services
[ "NamePolicy=keep kernel database onboard slot path", "ip link show 2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1' ID_NET_NAMING_SCHEME=rhel-9.0", "grubby --update-kernel=ALL --args=net.naming-scheme= rhel-9.4", "reboot", "ip link show 2: eno1np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli -f device,name connection show DEVICE NAME eno1 example_profile", "nmcli connection modify example_profile connection.interface-name \"eno1np0\"", "nmcli connection up example_profile", "udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1np0' ID_NET_NAMING_SCHEME=_rhel-9.4", "ip link show 2: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "cat /sys/class/net/enp1s0/type 1", "SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" <MAC_address> \",ATTR{type}==\" <device_type_id> \",NAME=\" <new_interface_name> \"", "SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" 00:00:5e:00:53:1a \",ATTR{type}==\" 1 \",NAME=\" provider0 \"", "dracut -f", "nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile", "nmcli connection modify example_profile connection.interface-name \"\"", "nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"", "reboot", "ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli connection modify example_profile match.interface-name \"provider0\"", "nmcli connection up example_profile", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "mkdir -p /etc/systemd/network/", "[Match] MACAddress= <MAC_address> [Link] Name= <new_interface_name>", "[Match] MACAddress=00:00:5e:00:53:1a [Link] Name=provider0", "dracut -f", "nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile", "nmcli connection modify example_profile connection.interface-name \"\"", "nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"", "reboot", "ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "nmcli connection modify example_profile match.interface-name \"provider0\"", "nmcli connection up example_profile", "ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff", "mkdir -p /etc/systemd/network/", "cp /usr/lib/systemd/network/99-default.link /etc/systemd/network/98-lan.link", "[Match] MACAddress= <MAC_address> [Link] AlternativeName= <alternative_interface_name_1> AlternativeName= <alternative_interface_name_2> AlternativeName= <alternative_interface_name_n>", "[Match] MACAddress=00:00:5e:00:53:1a [Link] 
NamePolicy=keep kernel database onboard slot path AlternativeNamesPolicy=database onboard slot path MACAddressPolicy=none AlternativeName=provider", "dracut -f", "reboot", "ip address show provider 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff altname provider", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0", "nmcli connection add con-name <connection-name> ifname <device-name> type ethernet", "nmcli connection modify \"Wired connection 1\" connection.id \"Internal-LAN\"", "nmcli connection show Internal-LAN connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto", "nmcli connection modify Internal-LAN ipv4.method auto", "nmcli connection modify Internal-LAN ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com", "nmcli connection modify Internal-LAN ipv6.method auto", "nmcli connection modify Internal-LAN ipv6.method manual ipv6.addresses 2001:db8:1::fffe/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com", "nmcli connection modify <connection-name> <setting> <value>", "nmcli connection up Internal-LAN", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0", "nmcli connection edit type ethernet con-name \" <connection-name> \"", "nmcli connection edit con-name \" <connection-name> \"", "nmcli> set connection.id Internal-LAN", "nmcli> print connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto", "nmcli> set connection.interface-name enp1s0", "nmcli> set ipv4.method auto", "nmcli> ipv4.addresses 192.0.2.1/24 Do you also want to set 'ipv4.method' to 'manual'? [yes]: yes nmcli> ipv4.gateway 192.0.2.254 nmcli> ipv4.dns 192.0.2.200 nmcli> ipv4.dns-search example.com", "nmcli> set ipv6.method auto", "nmcli> ipv6.addresses 2001:db8:1::fffe/64 Do you also want to set 'ipv6.method' to 'manual'? 
[yes]: yes nmcli> ipv6.gateway 2001:db8:1::fffe nmcli> ipv6.dns 2001:db8:1::ffbb nmcli> ipv6.dns-search example.com", "nmcli> save persistent", "nmcli> quit", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --", "nmtui", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "nm-connection-editor", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: enp1s0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: enp1s0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", 
"--- interfaces: - name: <profile_name> type: ethernet identifier: mac-address mac-address: <mac_address>", "nmstatectl apply ~/create-ethernet-profile.yml", "nmstatectl show enp1s0", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe", "--- - name: Configure the network hosts: managed-node-01.example.com,managed-node-02.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: \"{{ interface }}\" interface_name: \"{{ interface }}\" type: ethernet autoconnect: yes ip: address: - \"{{ ip_v4 }}\" - \"{{ ip_v6 }}\" gateway4: \"{{ gateway_v4 }}\" gateway6: \"{{ gateway_v6 }}\" dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { 
\"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true auto-dns: true auto-gateway: true auto-routes: true dhcp: true ipv6: enabled: true auto-dns: true auto-gateway: true auto-routes: true autoconf: true dhcp: true", "--- interfaces: - name: <profile_name> type: ethernet identifier: mac-address mac-address: <mac_address>", "nmstatectl apply ~/create-ethernet-profile.yml", "nmstatectl show enp1s0", "ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever", "ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102", "ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium", "cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb", "ping <host-name-or-IP-address>", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": 
\"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "nmcli connection add con-name \"Wired connection 1\" connection.multi-connect multiple match.interface-name enp* type ethernet", "nmcli connection show \"Wired connection 1\" connection.id: Wired connection 1 connection.multi-connect: 3 (multiple) match.interface-name: enp*", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp7s0 Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp8s0 Wired connection 1 6f22402e-c0cc-49cf-b702-eaf0cd5ea7d1 ethernet enp9s0", "udevadm info /sys/class/net/enp* | grep ID_PATH= E: ID_PATH=pci-0000:07:00.0 E: ID_PATH=pci-0000:08:00.0", "nmcli connection add type ethernet connection.multi-connect multiple match.path \"pci-0000:07:00.0 pci-0000:08:00.0\" con-name \"Wired connection 1\"", "nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 9cee0958-512f-4203-9d3d-b57af1d88466 ethernet enp7s0 Wired connection 1 9cee0958-512f-4203-9d3d-b57af1d88466 ethernet enp8s0", "nmcli connection show \"Wired connection 1\" connection.id: Wired connection 1 connection.multi-connect: 3 (multiple) match.path: pci-0000:07:00.0,pci-0000:08:00.0", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup\"", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup,miimon=1000\"", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bridge0 bridge connected bridge0 bridge1 bridge connected bridge1", "nmcli connection add type ethernet port-type bond con-name bond0-port1 ifname enp7s0 controller bond0 nmcli connection add type ethernet port-type bond con-name bond0-port2 ifname enp8s0 controller bond0", "nmcli connection modify bridge0 controller bond0 nmcli connection modify bridge1 controller bond0", "nmcli connection up bridge0 nmcli connection up bridge1", "nmcli connection modify bond0 ipv4.method disabled", "nmcli connection modify bond0 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.dns-search 'example.com' ipv4.method manual", "nmcli connection modify bond0 ipv6.method disabled", "nmcli connection modify bond0 ipv6.addresses '2001:db8:1::1/64' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.dns-search 'example.com' ipv6.method manual", "nmcli connection modify bond0-port1 bond-port. 
<parameter> <value>", "nmcli connection up bond0", "nmcli device DEVICE TYPE STATE CONNECTION enp7s0 ethernet connected bond0-port1 enp8s0 ethernet connected bond0-port2", "nmcli connection modify bond0 connection.autoconnect-ports 1", "nmcli connection up bond0", "cat /proc/net/bonding/bond0", "cat /proc/net/bonding/bond0", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet unavailable -- enp8s0 ethernet unavailable --", "nmtui", "cat /proc/net/bonding/bond0", "nm-connection-editor", "cat /proc/net/bonding/ bond0", "--- interfaces: - name: bond0 type: bond state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false link-aggregation: mode: active-backup port: - enp1s0 - enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bond0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bond0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-bond.yml", "nmcli device status DEVICE TYPE STATE CONNECTION bond0 bond connected bond0", "nmcli connection show bond0 connection.id: bond0 connection.uuid: 79cbc3bd-302e-4b1f-ad89-f12533b818ee connection.stable-id: -- connection.type: bond connection.interface-name: bond0", "nmstatectl show bond0", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bond connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bond profile - name: bond0 type: bond interface_name: bond0 ip: dhcp4: yes auto6: yes bond: mode: active-backup state: up # Port profile for the 1st Ethernet device - name: bond0-port1 interface_name: enp7s0 type: ethernet controller: bond0 state: up # Port profile for the 2nd Ethernet device - name: bond0-port2 interface_name: enp8s0 type: ethernet controller: bond0 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup\"", "nmcli connection modify bond0 ipv4.addresses ' 192.0.2.1/24 ' nmcli connection modify bond0 ipv4.gateway ' 192.0.2.254 ' nmcli connection modify bond0 ipv4.dns ' 192.0.2.253 ' nmcli connection modify bond0 ipv4.dns-search ' example.com ' nmcli connection modify bond0 ipv4.method manual", "nmcli connection modify bond0 ipv6.addresses ' 2001:db8:1::1/64 ' nmcli connection modify bond0 ipv6.gateway ' 2001:db8:1::fffe ' nmcli connection modify bond0 ipv6.dns ' 2001:db8:1::fffd ' nmcli connection modify bond0 ipv6.dns-search ' example.com ' nmcli connection modify bond0 ipv6.method manual", "nmcli connection show NAME UUID TYPE DEVICE Docking_station 256dd073-fecc-339d-91ae-9834a00407f9 ethernet enp11s0u1 Wi-Fi 1f1531c7-8737-4c60-91af-2d21164417e8 wifi wlp1s0", "nmcli connection modify Docking_station controller bond0", "nmcli connection modify Wi-Fi controller bond0", "nmcli connection modify bond0 +bond.options fail_over_mac=1", "nmcli con modify bond0 +bond.options \"primary=enp11s0u1\"", "nmcli connection modify bond0 connection.autoconnect-ports 1", "nmcli connection up bond0", "cat /proc/net/bonding/bond0 Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active) Primary Slave: 
enp11s0u1 (primary_reselect always) Currently Active Slave: enp11s0u1 MII Status: up MII Polling Interval (ms): 1 Up Delay (ms): 0 Down Delay (ms): 0 Peer Notification Delay (ms): 0 Slave Interface: enp11s0u1 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 00:53:00:59:da:b7 Slave queue ID: 0 Slave Interface: wlp1s0 MII Status: up Speed: Unknown Duplex: Unknown Link Failure Count: 2 Permanent HW addr: 00:53:00:b3:22:ba Slave queue ID: 0", "nmcli connection show team-team0 | grep -E \"^ip\" ipv4.method: manual ipv4.dns: 192.0.2.253 ipv4.dns-search: example.com ipv4.addresses: 192.0.2.1/24 ipv4.gateway: 192.0.2.254 ipv6.method: manual ipv6.dns: 2001:db8:1::fffd ipv6.dns-search: example.com ipv6.addresses: 2001:db8:1::1/64 ipv6.gateway: 2001:db8:1::fffe", "teamdctl team0 config dump actual > /tmp/team0.json", "nmcli connection delete team-team0 nmcli connection delete team-team0-port1 nmcli connection delete team-team0-port2", "team2bond --config= /tmp/team0.json --rename= bond0 nmcli con add type bond ifname bond0 bond.options \" mode=active-backup,num_grat_arp=1,num_unsol_na=1,resend_igmp=1,miimon=100,miimon=100 \" nmcli con add type ethernet ifname enp7s0 controller bond0 nmcli con add type ethernet ifname enp8s0 controller bond0", "team2bond --config= /tmp/team0.json --rename= bond0 --exec-cmd Connection ' bond-bond0 ' ( 0241a531-0c72-4202-80df-73eadfc126b5 ) successfully added. Connection ' bond-port-enp7s0 ' ( 38489729-b624-4606-a784-1ccf01e2f6d6 ) successfully added. Connection ' bond-port-enp8s0 ' ( de97ec06-7daa-4298-9a71-9d4c7909daa1 ) successfully added.", "nmcli connection modify bond-bond0 ipv4.addresses ' 192.0.2.1/24 ' nmcli connection modify bond-bond0 ipv4.gateway ' 192.0.2.254 ' nmcli connection modify bond-bond0 ipv4.dns ' 192.0.2.253 ' nmcli connection modify bond-bond0 ipv4.dns-search ' example.com ' nmcli connection modify bond-bond0 ipv4.method manual", "nmcli connection modify bond-bond0 ipv6.addresses ' 2001:db8:1::1/64 ' nmcli connection modify bond-bond0 ipv6.gateway ' 2001:db8:1::fffe ' nmcli connection modify bond-bond0 ipv6.dns ' 2001:db8:1::fffd ' nmcli connection modify bond-bond0 ipv6.dns-search ' example.com ' nmcli connection modify bond-bond0 ipv6.method manual", "nmcli connection up bond-bond0", "nmcli connection show bond-bond0 | grep -E \"^ip\" ipv4.method: manual ipv4.dns: 192.0.2.253 ipv4.dns-search: example.com ipv4.addresses: 192.0.2.1/24 ipv4.gateway: 192.0.2.254 ipv6.method: manual ipv6.dns: 2001:db8:1::fffd ipv6.dns-search: example.com ipv6.addresses: 2001:db8:1::1/64 ipv6.gateway: 2001:db8:1::fffe", "cat /proc/net/bonding/bond0 Ethernet Channel Bonding Driver: v5.13.0-0.rc7.51.el9.x86_64 Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: enp7s0 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Peer Notification Delay (ms): 0 Slave Interface: enp7s0 MII Status: up Speed: Unknown Duplex: Unknown Link Failure Count: 0 Permanent HW addr: 52:54:00:bf:b1:a9 Slave queue ID: 0 Slave Interface: enp8s0 MII Status: up Speed: Unknown Duplex: Unknown Link Failure Count: 0 Permanent HW addr: 52:54:00:04:36:0f Slave queue ID: 0", "cat /proc/net/bonding/ bond0", "nmcli connection add type team con-name team0 ifname team0 team.runner activebackup", "nmcli connection modify team0 team.link-watchers \"name= ethtool \"", "nmcli connection modify team0 team.link-watchers \"name= ethtool delay-up= 2500 \"", "nmcli connection modify team0 team.link-watchers 
\"name= ethtool delay-up= 2 , name= arp_ping source-host= 192.0.2.1 target-host= 192.0.2.2 \"", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bond0 bond connected bond0 bond1 bond connected bond1", "nmcli connection add type ethernet port-type team con-name team0-port1 ifname enp7s0 controller team0 nmcli connection add type ethernet port--type team con-name team0-port2 ifname enp8s0 controller team0", "nmcli connection modify bond0 controller team0 nmcli connection modify bond1 controller team0", "nmcli connection up bond0 nmcli connection up bond1", "nmcli connection modify team0 ipv4.method disabled", "nmcli connection modify team0 ipv4.addresses ' 192.0.2.1/24 ' ipv4.gateway ' 192.0.2.254 ' ipv4.dns ' 192.0.2.253 ' ipv4.dns-search ' example.com ' ipv4.method manual", "nmcli connection modify team0 ipv6.method disabled", "nmcli connection modify team0 ipv6.addresses ' 2001:db8:1::1/64 ' ipv6.gateway ' 2001:db8:1::fffe ' ipv6.dns ' 2001:db8:1::fffd ' ipv6.dns-search ' example.com ' ipv6.method manual", "nmcli connection up team0", "teamdctl team0 state setup: runner: activebackup ports: enp7s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp8s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp7s0", "teamdctl team0 state setup: runner: activebackup ports: enp7s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp8s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp7s0", "nm-connection-editor", "teamdctl team0 state setup: runner: activebackup ports: enp7s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp8s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp7s0", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet disconnected enp1s0 bridge0 bridge connected bridge0 bond0 bond connected bond0", "nmcli connection add type vlan con-name vlan10 ifname vlan10 vlan.parent enp1s0 vlan.id 10", "nmcli connection modify vlan10 ethernet.mtu 2000", "nmcli connection modify vlan10 ipv4.method disabled", "nmcli connection modify vlan10 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.method manual", "nmcli connection modify vlan10 ipv6.method disabled", "nmcli connection modify vlan10 ipv6.addresses '2001:db8:1::1/32' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.method manual", "nmcli connection up vlan10", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet connected enp1s0", "nmcli connection add type vlan con-name vlan10 dev enp1s0 vlan.id 10", "nmcli connection modify vlan10 ethernet.mtu 2000", "nmcli connection add type vlan 
con-name vlan10.20 dev enp1s0.10 id 20 vlan.protocol 802.1ad", "nmcli connection modify vlan10.20 ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200", "nmcli connection modify vlan10 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.method manual", "nmcli connection up vlan10.20", "ip -d addr show enp1s0.10.20 10: enp1s0.10.20@enp1s0.10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:d2:74:3e brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 0 maxmtu 65535 vlan protocol 802.1ad id 20 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0.10.20 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::ce3b:84c5:9ef8:d0e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --", "nmtui", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nm-connection-editor", "ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:d5:e0:fb brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever", "--- interfaces: - name: vlan10 type: vlan state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false vlan: base-iface: enp1s0 id: 10 - name: enp1s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: vlan10 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: vlan10 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-vlan.yml", "nmcli device status DEVICE TYPE STATE CONNECTION vlan10 vlan connected vlan10", "nmcli connection show vlan10 connection.id: vlan10 connection.uuid: 1722970f-788e-4f81-bd7d-a86bf21c9df5 connection.stable-id: -- connection.type: vlan connection.interface-name: vlan10", "nmstatectl show vlan10", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet 
interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip -d addr show enp1s0.10' managed-node-01.example.com | CHANGED | rc=0 >> 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535", "nmcli connection add type bridge con-name bridge0 ifname bridge0", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bond0 bond connected bond0 bond1 bond connected bond1", "nmcli connection add type ethernet port-type bridge con-name bridge0-port1 ifname enp7s0 controller bridge0 nmcli connection add type ethernet port-type bridge con-name bridge0-port2 ifname enp8s0 controller bridge0", "nmcli connection modify bond0 controller bridge0 nmcli connection modify bond1 controller bridge0", "nmcli connection up bond0 nmcli connection up bond1", "nmcli connection modify bridge0 ipv4.method disabled", "nmcli connection modify bridge0 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.dns-search 'example.com' ipv4.method manual", "nmcli connection modify bridge0 ipv6.method disabled", "nmcli connection modify bridge0 ipv6.addresses '2001:db8:1::1/64' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.dns-search 'example.com' ipv6.method manual", "nmcli connection modify bridge0 bridge.priority '16384'", "nmcli connection up bridge0", "nmcli device DEVICE TYPE STATE CONNECTION enp7s0 ethernet connected bridge0-port1 enp8s0 ethernet connected bridge0-port2", "nmcli connection modify bridge0 connection.autoconnect-ports 1", "nmcli connection up bridge0", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet unavailable -- enp8s0 ethernet unavailable --", "nmtui", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: 
<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100", "nm-connection-editor", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100", "--- interfaces: - name: bridge0 type: linux-bridge state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false bridge: options: stp: enabled: true port: - name: enp1s0 - name: enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bridge0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bridge0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-bridge.yml", "nmcli device status DEVICE TYPE STATE CONNECTION bridge0 bridge connected bridge0", "nmcli connection show bridge0 connection.id: bridge0_ connection.uuid: e2cc9206-75a2-4622-89cf-1252926060a9 connection.stable-id: -- connection.type: bridge connection.interface-name: bridge0", "nmstatectl show bridge0", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip link show master bridge0' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "ansible managed-node-01.example.com -m command -a 'bridge link show' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100", "@west @east : PPKS \"user1\" 
\"thestringismeanttobearandomstr\"", "dnf install libreswan", "systemctl stop ipsec rm /var/lib/ipsec/nss/*db ipsec initnss", "systemctl enable ipsec --now", "firewall-cmd --add-service=\"ipsec\" firewall-cmd --runtime-to-permanent", "ipsec newhostkey", "ipsec showhostkey --left --ckaid 2d3ea57b61c9419dfd6cf43a1eb6cb306c0e857d", "ipsec showhostkey --right --ckaid a9e1f6ce9ecd3608c24e8f701318383f41798f03", "conn mytunnel leftid=@west left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== rightid=@east right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig", "systemctl restart ipsec", "ipsec auto --add mytunnel", "ipsec auto --up mytunnel", "auto=start", "cp /etc/ipsec.d/ my_host-to-host.conf /etc/ipsec.d/ my_site-to-site .conf", "conn mysubnet also=mytunnel leftsubnet=192.0.1.0/24 rightsubnet=192.0.2.0/24 auto=start conn mysubnet6 also=mytunnel leftsubnet=2001:db8:0:1::/64 rightsubnet=2001:db8:0:2::/64 auto=start the following part of the configuration file is the same for both host-to-host and site-to-site connections: conn mytunnel leftid=@west left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== rightid=@east right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig", "conn roadwarriors ikev2=insist # support (roaming) MOBIKE clients (RFC 4555) mobike=yes fragmentation=yes left=1.2.3.4 # if access to the LAN is given, enable this, otherwise use 0.0.0.0/0 # leftsubnet=10.10.0.0/16 leftsubnet=0.0.0.0/0 leftcert=gw.example.com leftid=%fromcert leftxauthserver=yes leftmodecfgserver=yes right=%any # trust our own Certificate Agency rightca=%same # pick an IP address pool to assign to remote users # 100.64.0.0/16 prevents RFC1918 clashes when remote users are behind NAT rightaddresspool=100.64.13.100-100.64.13.254 # if you want remote clients to use some local DNS zones and servers modecfgdns=\"1.2.3.4, 5.6.7.8\" modecfgdomains=\"internal.company.com, corp\" rightxauthclient=yes rightmodecfgclient=yes authby=rsasig # optionally, run the client X.509 ID through pam to allow or deny client # pam-authorize=yes # load connection, do not initiate auto=add # kill vanished roadwarriors dpddelay=1m dpdtimeout=5m dpdaction=clear", "conn to-vpn-server ikev2=insist # pick up our dynamic IP left=%defaultroute leftsubnet=0.0.0.0/0 leftcert=myname.example.com leftid=%fromcert leftmodecfgclient=yes # right can also be a DNS hostname right=1.2.3.4 # if access to the remote LAN is required, enable this, otherwise use 0.0.0.0/0 # rightsubnet=10.10.0.0/16 rightsubnet=0.0.0.0/0 fragmentation=yes # trust our own Certificate Agency rightca=%same authby=rsasig # allow narrowing to the server's suggested assigned IP and remote subnet narrowing=yes # support (roaming) MOBIKE clients (RFC 4555) mobike=yes # initiate connection auto=start", "systemctl stop ipsec rm /var/lib/ipsec/nss/*db", "ipsec initnss", "ipsec import nodeXXX.p12", "cat /etc/ipsec.d/mesh.conf conn clear auto=ondemand 1 type=passthrough authby=never left=%defaultroute right=%group conn private auto=ondemand type=transport authby=rsasig failureshunt=drop negotiationshunt=drop ikev2=insist left=%defaultroute leftcert= nodeXXXX leftid=%fromcert 2 rightid=%fromcert right=%opportunisticgroup conn private-or-clear auto=ondemand type=transport authby=rsasig failureshunt=passthrough negotiationshunt=passthrough # left left=%defaultroute leftcert= nodeXXXX 3 leftid=%fromcert leftrsasigkey=%cert # right 
rightrsasigkey=%cert rightid=%fromcert right=%opportunisticgroup", "echo \"10.15.0.0/16\" >> /etc/ipsec.d/policies/private", "echo \"10.15.34.0/24\" >> /etc/ipsec.d/policies/private-or-clear", "echo \"10.15.1.2/32\" >> /etc/ipsec.d/policies/clear", "systemctl restart ipsec", "ping <nodeYYY>", "certutil -L -d sql:/etc/ipsec.d Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI west u,u,u ca CT,,", "ipsec trafficstatus 006 #2: \"private#10.15.0.0/16\"[1] ... <nodeYYY> , type=ESP, add_time=1691399301, inBytes=512, outBytes=512, maxBytes=2^63B, id='C=US, ST=NC, O=Example Organization, CN=east'", "dnf install libreswan", "systemctl stop ipsec rm /var/lib/ipsec/nss/*db", "systemctl enable ipsec --now", "firewall-cmd --add-service=\"ipsec\" firewall-cmd --runtime-to-permanent", "fips-mode-setup --enable", "reboot", "ipsec whack --fipsstatus 000 FIPS mode enabled", "journalctl -u ipsec Jan 22 11:26:50 localhost.localdomain pluto[3076]: FIPS Mode: YES", "ipsec pluto --selftest 2>&1 | head -6 Initializing NSS using read-write database \"sql:/var/lib/ipsec/nss\" FIPS Mode: YES NSS crypto library initialized FIPS mode enabled for pluto daemon NSS library is running in FIPS mode FIPS HMAC integrity support [disabled]", "ipsec pluto --selftest 2>&1 | grep disabled Encryption algorithm CAMELLIA_CTR disabled; not FIPS compliant Encryption algorithm CAMELLIA_CBC disabled; not FIPS compliant Encryption algorithm NULL disabled; not FIPS compliant Encryption algorithm CHACHA20_POLY1305 disabled; not FIPS compliant Hash algorithm MD5 disabled; not FIPS compliant PRF algorithm HMAC_MD5 disabled; not FIPS compliant PRF algorithm AES_XCBC disabled; not FIPS compliant Integrity algorithm HMAC_MD5_96 disabled; not FIPS compliant Integrity algorithm HMAC_SHA2_256_TRUNCBUG disabled; not FIPS compliant Integrity algorithm AES_XCBC_96 disabled; not FIPS compliant DH algorithm MODP1536 disabled; not FIPS compliant DH algorithm DH31 disabled; not FIPS compliant", "ipsec pluto --selftest 2>&1 | grep ESP | grep FIPS | sed \"s/^.*FIPS//\" aes_ccm, aes_ccm_c aes_ccm_b aes_ccm_a NSS(CBC) 3des NSS(GCM) aes_gcm, aes_gcm_c NSS(GCM) aes_gcm_b NSS(GCM) aes_gcm_a NSS(CTR) aesctr NSS(CBC) aes aes_gmac NSS sha, sha1, sha1_96, hmac_sha1 NSS sha512, sha2_512, sha2_512_256, hmac_sha2_512 NSS sha384, sha2_384, sha2_384_192, hmac_sha2_384 NSS sha2, sha256, sha2_256, sha2_256_128, hmac_sha2_256 aes_cmac null NSS(MODP) null, dh0 NSS(MODP) dh14 NSS(MODP) dh15 NSS(MODP) dh16 NSS(MODP) dh17 NSS(MODP) dh18 NSS(ECP) ecp_256, ecp256 NSS(ECP) ecp_384, ecp384 NSS(ECP) ecp_521, ecp521", "certutil -N -d sql:/var/lib/ipsec/nss Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. 
Enter new password:", "cat /etc/ipsec.d/nsspassword NSS Certificate DB:_<password>_", "<token_1> : <password1> <token_2> : <password2>", "systemctl restart ipsec", "systemctl status ipsec ● ipsec.service - Internet Key Exchange (IKE) Protocol Daemon for IPsec Loaded: loaded (/usr/lib/systemd/system/ipsec.service; enabled; vendor preset: disable> Active: active (running)", "journalctl -u ipsec pluto[6214]: Initializing NSS using read-write database \"sql:/var/lib/ipsec/nss\" pluto[6214]: NSS Password from file \"/etc/ipsec.d/nsspassword\" for token \"NSS Certificate DB\" with length 20 passed to NSS pluto[6214]: NSS crypto library initialized", "listen-tcp=yes", "enable-tcp=fallback tcp-remoteport=4500", "enable-tcp=yes tcp-remoteport=4500", "systemctl restart ipsec", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 10 rx_ipsec: 10", "ping -c 5 remote_ip_address", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 15 rx_ipsec: 15", "nmcli connection modify bond0 ethtool.feature-esp-hw-offload on", "nmcli connection up bond0", "conn example nic-offload=yes", "systemctl restart ipsec", "grep \"Currently Active Slave\" /proc/net/bonding/ bond0 Currently Active Slave: enp1s0", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 10 rx_ipsec: 10", "ping -c 5 remote_ip_address", "ethtool -S enp1s0 | grep -E \"_ipsec\" tx_ipsec: 15 rx_ipsec: 15", "- name: Host to host VPN hosts: managed-node-01.example.com, managed-node-02.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - hosts: managed-node-01.example.com: managed-node-02.example.com: vpn_manage_firewall: true vpn_manage_selinux: true", "vpn_connections: - hosts: managed-node-01.example.com: <external_node> : hostname: <IP_address_or_hostname>", "- name: Multiple VPN hosts: managed-node-01.example.com, managed-node-02.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - name: control_plane_vpn hosts: managed-node-01.example.com: hostname: 192.0.2.0 # IP for the control plane managed-node-02.example.com: hostname: 192.0.2.1 - name: data_plane_vpn hosts: managed-node-01.example.com: hostname: 10.0.0.1 # IP for the data plane managed-node-02.example.com: hostname: 10.0.0.2", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ipsec status | grep <connection_name>", "ipsec trafficstatus | grep <connection_name>", "ipsec auto --add <connection_name>", "- name: Mesh VPN hosts: managed-node-01.example.com, managed-node-02.example.com, managed-node-03.example.com roles: - rhel-system-roles.vpn vars: vpn_connections: - opportunistic: true auth_method: cert policies: - policy: private cidr: default - policy: private-or-clear cidr: 198.51.100.0/24 - policy: private cidr: 192.0.2.0/24 - policy: clear cidr: 192.0.2.7/32 vpn_manage_firewall: true vpn_manage_selinux: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "conn MyExample ikev2=never ike=aes-sha2,aes-sha1;modp2048 esp=aes_gcm,aes-sha2,aes-sha1", "include /etc/crypto-policies/back-ends/libreswan.config", "ipsec trafficstatus 006 #8: \"vpn.example.com\"[1] 192.0.2.1, type=ESP, add_time=1595296930, inBytes=5999, outBytes=3231, id='@vpn.example.com', lease=100.64.13.5/32", "ipsec auto --add vpn.example.com 002 added connection description \"vpn.example.com\"", "ipsec auto --up vpn.example.com", "ipsec auto --up vpn.example.com 181 \"vpn.example.com\"[1] 192.0.2.2 #15: initiating IKEv2 IKE SA 181 \"vpn.example.com\"[1] 192.0.2.2 #15: STATE_PARENT_I1: sent v2I1, expected v2R1 010 \"vpn.example.com\"[1] 
192.0.2.2 #15: STATE_PARENT_I1: retransmission; will wait 0.5 seconds for response 010 \"vpn.example.com\"[1] 192.0.2.2 #15: STATE_PARENT_I1: retransmission; will wait 1 seconds for response 010 \"vpn.example.com\"[1] 192.0.2.2 #15: STATE_PARENT_I1: retransmission; will wait 2 seconds for", "ipsec auto --up vpn.example.com 002 \"vpn.example.com\" #9: initiating Main Mode 102 \"vpn.example.com\" #9: STATE_MAIN_I1: sent MI1, expecting MR1 010 \"vpn.example.com\" #9: STATE_MAIN_I1: retransmission; will wait 0.5 seconds for response 010 \"vpn.example.com\" #9: STATE_MAIN_I1: retransmission; will wait 1 seconds for response 010 \"vpn.example.com\" #9: STATE_MAIN_I1: retransmission; will wait 2 seconds for response", "tcpdump -i eth0 -n -n esp or udp port 500 or udp port 4500 or tcp port 4500", "ipsec auto --up vpn.example.com 000 \"vpn.example.com\"[1] 192.0.2.2 #16: ERROR: asynchronous network error report on wlp2s0 (192.0.2.2:500), complainant 198.51.100.1: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]", "ipsec auto --up vpn.example.com 003 \"vpn.example.com\"[1] 193.110.157.148 #3: dropping unexpected IKE_SA_INIT message containing NO_PROPOSAL_CHOSEN notification; message payloads: N; missing payloads: SA,KE,Ni", "ipsec auto --up vpn.example.com 182 \"vpn.example.com\"[1] 193.110.157.148 #5: STATE_PARENT_I2: sent v2I2, expected v2R2 {auth=IKEv2 cipher=AES_GCM_16_256 integ=n/a prf=HMAC_SHA2_256 group=MODP2048} 002 \"vpn.example.com\"[1] 193.110.157.148 #6: IKE_AUTH response contained the error notification NO_PROPOSAL_CHOSEN", "ipsec auto --up vpn.example.com 1v2 \"vpn.example.com\" #1: STATE_PARENT_I2: sent v2I2, expected v2R2 {auth=IKEv2 cipher=AES_GCM_16_256 integ=n/a prf=HMAC_SHA2_512 group=MODP2048} 002 \"vpn.example.com\" #2: IKE_AUTH response contained the error notification TS_UNACCEPTABLE", "ipsec auto --up vpn.example.com 031 \"vpn.example.com\" #2: STATE_QUICK_I1: 60 second timeout exceeded after 0 retransmits. 
No acceptable response to our first Quick Mode message: perhaps peer likes no proposal", "ipsec auto --up vpn.example.com 003 \"vpn.example.com\" #1: received Hash Payload does not match computed value 223 \"vpn.example.com\" #1: sending notification INVALID_HASH_INFORMATION to 192.0.2.23:500", "ipsec auto --up vpn.example.com 002 \"vpn.example.com\" #1: IKE SA authentication request rejected by peer: AUTHENTICATION_FAILED", "iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu", "iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1380", "conn myvpn left=172.16.0.1 leftsubnet=10.0.2.0/24 right=172.16.0.2 rightsubnet=192.168.0.0/16 ...", "iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE", "iptables -t nat -I POSTROUTING -s 10.0.2.0/24 -d 192.168.0.0/16 -j RETURN", "cat /proc/net/xfrm_stat XfrmInError 0 XfrmInBufferError 0", "journalctl -b | grep pluto", "journalctl -f -u ipsec", "nm-connection-editor", "dnf install nmstate libreswan NetworkManager-libreswan", "systemctl restart NetworkManager", "systemctl stop ipsec rm /etc/ipsec.d/*db ipsec initnss", "systemctl enable --now ipsec", "ipsec import node-example.p12", "--- interfaces: - name: 'example_ipsec_conn1' 1 type: ipsec ipv4: enabled: true dhcp: true libreswan: ipsec-interface: 'yes' 2 left: '192.0.2.250' 3 leftid: '%fromcert' 4 leftcert: 'local-host.example.com' 5 right: '192.0.2.150' 6 rightid: '%fromcert' 7 ikev2: 'insist' 8 ikelifetime: '24h' 9 salifetime: '24h' 10", "nmstatectl apply ~/create-pki-authentication.yml", "ip xfrm status", "ip xfrm policy", "dnf install nmstate libreswan NetworkManager-libreswan", "systemctl restart NetworkManager", "systemctl stop ipsec rm /etc/ipsec.d/*db ipsec initnss", "ipsec newhostkey --output", "ipsec showhostkey --list", "ipsec showhostkey --left --ckaid <0sAwEAAesFfVZqFzRA9F>", "ipsec showhostkey --right --ckaid <0sAwEAAesFfVZqFzRA9E>", "systemctl enable --now ipsec", "--- interfaces: - name: 'example_ipsec_conn1' 1 type: ipsec 2 ipv4: enabled: true dhcp: true libreswan: ipsec-interface: '99' 3 leftrsasigkey: '0sAwEAAesFfVZqFzRA9F' 4 left: '192.0.2.250' 5 leftid: 'local-host-rsa.example.com' 6 right: '192.0.2.150' 7 rightrsasigkey: '0sAwEAAesFfVZqFzRA9E' 8 rightid: 'remote-host-rsa.example.com' 9 ikev2: 'insist' 10", "nmstatectl apply ~/create-rsa-authentication.yml", "ip addr show example_ipsec_conn1", "ip xfrm status", "ip xfrm policy", "dnf install nmstate libreswan NetworkManager-libreswan", "systemctl restart NetworkManager", "systemctl stop ipsec rm /etc/ipsec.d/*db ipsec initnss", "systemctl enable --now ipsec", "--- interfaces: - name: 'example_ipsec_conn1' 1 type: ipsec ipv4: enabled: true dhcp: true libreswan: ipsec-interface: 'no' 2 right: '192.0.2.250' 3 rightid: 'remote-host.example.org' 4 left: '192.0.2.150' 5 leftid: 'local-host.example.org' 6 psk: \"example_password\" ikev2: 'insist' 7", "nmstatectl apply ~/create-pks-authentication.yml", "ip addr show example_ipsec_conn1", "ip xfrm status", "ip xfrm policy", "dnf install nmstate libreswan NetworkManager-libreswan", "systemctl restart NetworkManager", "systemctl stop ipsec rm /etc/ipsec.d/*db ipsec initnss", "ipsec import node-example.p12", "systemctl enable --now ipsec", "--- interfaces: - name: 'example_ipsec_conn1' 1 type: ipsec libreswan: left: '192.0.2.250' 2 leftid: 'local-host.example.com' 3 leftcert: 'local-host.example.com' 4 leftmodecfgclient: 'no' 5 right: '192.0.2.150' 6 rightid: 'remote-host.example.com' 7 rightsubnet: '192.0.2.150/32' 8 ikev2: 'insist' 9", 
"nmstatectl apply ~/create-p2p-vpn-authentication.yml", "ip xfrm policy", "ip xfrm status", "dnf install nmstate libreswan NetworkManager-libreswan", "systemctl restart NetworkManager", "systemctl stop ipsec rm /etc/ipsec.d/*db ipsec initnss", "ipsec import node-example.p12", "systemctl enable --now ipsec", "--- interfaces: - name: 'example_ipsec_conn1' 1 type: ipsec libreswan: type: 'transport' 2 ipsec-interface: '99' 3 left: '192.0.2.250' 4 leftid: '%fromcert' 5 leftcert: 'local-host.example.org' 6 right: '192.0.2.150' 7 prefix-length: '32' 8 rightid: '%fromcert' 9 ikev2: 'insist' 10 ikelifetime: '24h' 11 salifetime: '24h' 12", "nmstatectl apply ~/create-p2p-transport-authentication.yml", "ip xfrm status", "ip xfrm policy", "dnf install wireguard-tools", "wg genkey | tee /etc/wireguard/USDHOSTNAME.private.key | wg pubkey > /etc/wireguard/USDHOSTNAME.public.key", "chmod 600 /etc/wireguard/USDHOSTNAME.private.key /etc/wireguard/USDHOSTNAME.public.key", "cat /etc/wireguard/USDHOSTNAME.private.key YFAnE0psgIdiAF7XR4abxiwVRnlMfeltxu10s/c4JXg=", "cat /etc/wireguard/USDHOSTNAME.public.key UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE=", "nmcli connection add type wireguard con-name server-wg0 ifname wg0 autoconnect no", "nmcli connection modify server-wg0 ipv4.method manual ipv4.addresses 192.0.2.1/24", "nmcli connection modify server-wg0 ipv6.method manual ipv6.addresses 2001:db8:1::1/32", "nmcli connection modify server-wg0 wireguard.private-key \"YFAnE0psgIdiAF7XR4abxiwVRnlMfeltxu10s/c4JXg=\"", "nmcli connection modify server-wg0 wireguard.listen-port 51820", "[wireguard-peer.bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM=] allowed-ips=192.0.2.2;2001:db8:1::2;", "nmcli connection load /etc/NetworkManager/system-connections/server-wg0.nmconnection", "nmcli connection modify server-wg0 autoconnect yes", "nmcli connection up server-wg0", "wg show wg0 interface: wg0 public key: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= private key: (hidden) listening port: 51820 peer: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= allowed ips: 192.0.2.2/32, 2001:db8:1::2/128", "ip address show wg0 20: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::3ef:8863:1ce2:844/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nmtui", "wg show wg0 interface: wg0 public key: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= private key: (hidden) listening port: 51820 peer: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= allowed ips: 192.0.2.2/32, 2001:db8:1::2/128", "ip address show wg0 20: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 _2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::3ef:8863:1ce2:844/64 scope link noprefixroute valid_lft forever preferred_lft forever", "wg show wg0 interface: wg0 public key: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= private key: (hidden) listening port: 51820 peer: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= allowed ips: 192.0.2.2/32, 2001:db8:1::2/128", "ip address show wg0 20: wg0: <POINTOPOINT,NOARP,UP,LOWERUP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 
192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::3ef:8863:1ce2:844/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nm-connection-editor", "wg show wg0 interface: wg0 public key: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= private key: (hidden) listening port: 51820 peer: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= allowed ips: 192.0.2.2/32, 2001:db8:1::2/128", "ip address show wg0 20 : wg0 : <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::3ef:8863:1ce2:844/64 scope link noprefixroute valid_lft forever preferred_lft forever", "dnf install wireguard-tools", "[Interface] Address = 192.0.2.1/24, 2001:db8:1::1/32 ListenPort = 51820 PrivateKey = YFAnE0psgIdiAF7XR4abxiwVRnlMfeltxu10s/c4JXg= [Peer] PublicKey = bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= AllowedIPs = 192.0.2.2, 2001:db8:1::2", "systemctl enable --now wg-quick@wg0", "wg show wg0 interface: wg0 public key: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= private key: (hidden) listening port: 51820 peer: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= allowed ips: 192.0.2.2/32, 2001:db8:1::2/128", "ip address show wg0 20: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.1/24 scope global wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global valid_lft forever preferred_lft forever", "firewall-cmd --permanent --add-port=51820/udp --zone=public", "firewall-cmd --permanent --zone=public --add-masquerade", "firewall-cmd --reload", "firewall-cmd --list-all public (active) ports: 51820/udp masquerade: yes", "firewall-cmd --permanent --zone=public --add-masquerade firewall-cmd --reload", "firewall-cmd --list-all --zone=public public (active) ports: 51820/udp masquerade: yes", "firewall-cmd --list-all public (active) ports: 51820/udp masquerade: yes", "nmcli connection add type wireguard con-name client-wg0 ifname wg0 autoconnect no", "nmcli connection modify client-wg0 autoconnect no", "nmcli connection modify client-wg0 ipv4.method manual ipv4.addresses 192.0.2.2/24", "nmcli connection modify client-wg0 ipv6.method manual ipv6.addresses 2001:db8:1::2/32", "nmcli connection modify client-wg0 ipv4.gateway 192.0.2.1 ipv6.gateway 2001:db8:1::1", "nmcli connection modify client-wg0 wireguard.private-key \"aPUcp5vHz8yMLrzk8SsDyYnV33IhE/k20e52iKJFV0A=\"", "[wireguard-peer.UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE=] endpoint=server.example.com:51820 allowed-ips=192.0.2.1;2001:db8:1::1; persistent-keepalive=20", "nmcli connection load /etc/NetworkManager/system-connections/client-wg0.nmconnection", "nmcli connection up client-wg0", "ping 192.0.2.1 ping6 2001:db8:1::1", "wg show wg0 interface: wg0 public key: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= private key: (hidden) listening port: 51820 peer: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= endpoint: server.example.com:51820 allowed ips: 192.0.2.1/32, 2001:db8:1::1/128 latest handshake: 1 minute, 41 seconds ago transfer: 824 B received, 1.01 KiB sent persistent keepalive: every 20 seconds", "ip address show wg0 10: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc 
noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.2/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::2/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::73d9:6f51:ea6f:863e/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nmtui", "ping 192.0.2.1 ping6 2001:db8:1::1", "wg show wg0 interface: wg0 public key: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= private key: (hidden) listening port: 51820 peer: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= endpoint: server.example.com:51820_ allowed ips: 192.0.2.1/32, 2001:db8:1::1/128 latest handshake: 1 minute, 41 seconds ago transfer: 824 B received, 1.01 KiB sent persistent keepalive: every 20 seconds", "ip address show wg0 10: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.2/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::2/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::73d9:6f51:ea6f:863e/64 scope link noprefixroute valid_lft forever preferred_lft forever", "ping 192.0.2.1", "wg show wg0 interface: wg0 public key: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= private key: (hidden) listening port: 45513 peer: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= endpoint: server.example.com:51820 allowed ips: 192.0.2.1/32, 2001:db8:1::1/128 latest handshake: 1 minute, 41 seconds ago transfer: 824 B received, 1.01 KiB sent persistent keepalive: every 20 seconds", "ip address show wg0 10: wg0: <POINTOPOINT,NOARP,UP,LOWERUP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.2/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::2/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::73d9:6f51:ea6f:863e/64 scope link noprefixroute valid_lft forever preferred_lft forever", "nm-connection-editor", "ping 192.0.2.1 ping6 2001:db8:1::1", "wg show wg0 interface: wg0 public key: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= private key: (hidden) listening port: 51820 peer: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= endpoint: server.example.com:51820 allowed ips: 192.0.2.1/32, 2001:db8:1::1/128 latest handshake: 1 minute, 41 seconds ago transfer: 824 B received, 1.01 KiB sent persistent keepalive: every 20 seconds", "ip address show wg0 10: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.2/24 brd 192.0.2.255 scope global noprefixroute wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::2/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::73d9:6f51:ea6f:863e/64 scope link noprefixroute valid_lft forever preferred_lft forever", "dnf install wireguard-tools", "[Interface] Address = 192.0.2.2/24, 2001:db8:1::2/32 PrivateKey = aPUcp5vHz8yMLrzk8SsDyYnV33IhE/k20e52iKJFV0A= [Peer] PublicKey = UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= AllowedIPs = 192.0.2.1, 2001:db8:1::1 Endpoint = server.example.com:51820 PersistentKeepalive = 20", "systemctl enable --now wg-quick@wg0", "ping 192.0.2.1 ping6 2001:db8:1::1", "wg show wg0 interface: wg0 public key: bnwfQcC8/g2i4vvEqcRUM2e6Hi3Nskk6G9t4r26nFVM= private key: (hidden) listening port: 51820 peer: UtjqCJ57DeAscYKRfp7cFGiQqdONRn69u249Fa4O6BE= endpoint: server.example.com:51820 allowed ips: 192.0.2.1/32, 
2001:db8:1::1/128 latest handshake: 1 minute, 41 seconds ago transfer: 824 B received, 1.01 KiB sent persistent keepalive: every 20 seconds", "ip address show wg0 10: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 192.0.2.2/24 scope global wg0 valid_lft forever preferred_lft forever inet6 2001:db8:1::2/32 scope global valid_lft forever preferred_lft forever", "nmcli connection add type ip-tunnel ip-tunnel.mode ipip con-name tun0 ifname tun0 remote 198.51.100.5 local 203.0.113.10", "nmcli connection modify tun0 ipv4.addresses '10.0.1.1/30'", "nmcli connection modify tun0 ipv4.method manual", "nmcli connection modify tun0 +ipv4.routes \"172.16.0.0/24 10.0.1.2\"", "nmcli connection up tun0", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nmcli connection add type ip-tunnel ip-tunnel.mode ipip con-name tun0 ifname tun0 remote 203.0.113.10 local 198.51.100.5", "nmcli connection modify tun0 ipv4.addresses '10.0.1.2/30'", "nmcli connection modify tun0 ipv4.method manual", "nmcli connection modify tun0 +ipv4.routes \"192.0.2.0/24 10.0.1.1\"", "nmcli connection up tun0", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "ping 172.16.0.1", "ping 192.0.2.1", "nmcli connection add type ip-tunnel ip-tunnel.mode gre con-name gre1 ifname gre1 remote 198.51.100.5 local 203.0.113.10", "nmcli connection modify gre1 ipv4.addresses '10.0.1.1/30'", "nmcli connection modify gre1 ipv4.method manual", "nmcli connection modify gre1 +ipv4.routes \"172.16.0.0/24 10.0.1.2\"", "nmcli connection up gre1", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nmcli connection add type ip-tunnel ip-tunnel.mode gre con-name gre1 ifname gre1 remote 203.0.113.10 local 198.51.100.5", "nmcli connection modify gre1 ipv4.addresses '10.0.1.2/30'", "nmcli connection modify gre1 ipv4.method manual", "nmcli connection modify gre1 +ipv4.routes \"192.0.2.0/24 10.0.1.1\"", "nmcli connection up gre1", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "ping 172.16.0.1", "ping 192.0.2.1", "nmcli connection add type bridge con-name bridge0 ifname bridge0", "nmcli connection modify bridge0 ipv4.addresses '192.0.2.1/24' nmcli connection modify bridge0 ipv4.method manual", "nmcli connection add type ethernet port-type bridge con-name bridge0-port1 ifname enp1s0 controller bridge0", "nmcli connection add type ip-tunnel ip-tunnel.mode gretap port-type bridge con-name bridge0-port2 ifname gretap1 remote 198.51.100.5 local 203.0.113.10 controller bridge0", "nmcli connection modify bridge0 bridge.stp no", "nmcli connection modify bridge0 connection.autoconnect-ports 1", "nmcli connection up bridge0", "nmcli connection add type bridge con-name bridge0 ifname bridge0", "nmcli connection modify bridge0 ipv4.addresses '192.0.2.2/24' nmcli connection modify bridge0 ipv4.method manual", "nmcli connection add type ethernet port-type bridge con-name bridge0-port1 ifname enp1s0 controller bridge0", "nmcli connection add type ip-tunnel ip-tunnel.mode gretap port-type bridge con-name bridge0-port2 ifname gretap1 remote 203.0.113.10 local 198.51.100.5 controller bridge0", "nmcli connection modify bridge0 bridge.stp no", "nmcli connection modify bridge0 connection.autoconnect-ports 1", "nmcli connection up 
bridge0", "nmcli device nmcli device DEVICE TYPE STATE CONNECTION bridge0 bridge connected bridge0 enp1s0 ethernet connected bridge0-port1 gretap1 iptunnel connected bridge0-port2", "ping 192.0.2.2", "ping 192.0.2.1", "nmcli connection add con-name Example ifname enp1s0 type ethernet", "nmcli connection modify Example ipv4.addresses 198.51.100.2/24 ipv4.method manual ipv4.gateway 198.51.100.254 ipv4.dns 198.51.100.200 ipv4.dns-search example.com", "nmcli connection up Example", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet connected Example", "ping RHEL-host-B.example.com", "nmcli connection add type bridge con-name br0 ifname br0 ipv4.method disabled ipv6.method disabled", "nmcli connection add type vxlan port-type bridge con-name br0-vxlan10 ifname vxlan10 id 10 local 198.51.100.2 remote 203.0.113.1 controller br0", "nmcli connection up br0", "firewall-cmd --permanent --add-port=8472/udp firewall-cmd --reload", "bridge fdb show dev vxlan10 2a:53:bd:d5:b3:0a master br0 permanent 00:00:00:00:00:00 dst 203.0.113.1 self permanent", "<network> <name>vxlan10-bridge</name> <forward mode=\"bridge\" /> <bridge name=\"br0\" /> </network>", "virsh net-define ~/vxlan10-bridge.xml", "rm ~/vxlan10-bridge.xml", "virsh net-start vxlan10-bridge", "virsh net-autostart vxlan10-bridge", "virsh net-list Name State Autostart Persistent ---------------------------------------------------- vxlan10-bridge active yes yes", "virt-install ... --network network: vxlan10-bridge", "virt-xml VM_name --edit --network network= vxlan10-bridge", "virsh shutdown VM_name virsh start VM_name", "virsh domiflist VM_name Interface Type Source Model MAC ------------------------------------------------------------------- vnet1 bridge vxlan10-bridge virtio 52:54:00:c5:98:1c", "ip link show master vxlan10-bridge 18: vxlan10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 2a:53:bd:d5:b3:0a brd ff:ff:ff:ff:ff:ff 19: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 52:54:00:c5:98:1c brd ff:ff:ff:ff:ff:ff", "arping -c 1 192.0.2.2 ARPING 192.0.2.2 from 192.0.2.1 enp1s0 Unicast reply from 192.0.2.2 [ 52:54:00:c5:98:1c ] 1.450ms Sent 1 probe(s) (0 broadcast(s)) Received 1 response(s) (0 request(s), 0 broadcast(s))", "nmcli radio wifi on", "nmcli device wifi list IN-USE BSSID SSID MODE CHAN RATE SIGNAL BARS SECURITY 00:53:00:2F:3B:08 Office Infra 44 270 Mbit/s 57 ▂▄▆_ WPA2 WPA3 00:53:00:15:03:BF -- Infra 1 130 Mbit/s 48 ▂▄__ WPA2 WPA3", "nmcli device wifi connect Office --ask Password: wifi-password", "nmcli device wifi connect Office password <wifi_password>", "nmcli connection modify Office ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com", "nmcli connection modify Office ipv6.method manual ipv6.addresses 2001:db8:1::1/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com", "nmcli connection up Office", "nmcli connection show --active NAME ID TYPE DEVICE Office 2501eb7e-7b16-4dc6-97ef-7cc460139a58 wifi wlp0s20f3", "*ping -c 3 example.com", "ping -c 3 example.com", "ping -c 3 example.com", "nmtui", "nmcli connection show --active NAME ID TYPE DEVICE Office 2501eb7e-7b16-4dc6-97ef-7cc460139a58 wifi wlp0s20f3", "ping -c 3 example.com", "nm-connection-editor", "ping -c 3 example.com", "ansible-vault create vault.yml New Vault password: <vault_password> 
Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configure a wifi connection with 802.1X authentication hosts: managed-node-01.example.com tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0400 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Wifi connection profile with dynamic IP address settings and 802.1X ansible.builtin.import_role: name: rhel-system-roles.network vars: network_connections: - name: Wifi connection profile with dynamic IP address settings and 802.1X interface_name: wlp1s0 state: up type: wireless autoconnect: yes ip: dhcp4: true auto6: true wireless: ssid: \"Example-wifi\" key_mgmt: \"wpa-eap\" ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/client.key\" private_key_password: \"{{ pwd }}\" private_key_password_flags: none client_cert: \"/etc/pki/tls/client.pem\" ca_cert: \"/etc/pki/tls/cacert.pem\" domain_suffix_match: \"example.com\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "nmcli connection modify wlp1s0 wireless-security.key-mgmt wpa-eap 802-1x.eap peap 802-1x.phase2-auth mschapv2 802-1x.identity user_name", "nmcli connection modify wlp1s0 802-1x.password password", "nmcli connection modify wlp1s0 802-1x.ca-cert /etc/pki/ca-trust/source/anchors/ca.crt", "nmcli connection up wlp1s0", "iw reg get global country US: DFS-FCC", "COUNTRY= <country_code>", "setregdomain", "iw reg get global country DE: DFS-ETSI", "nmcli device status | grep wifi wlp0s20f3 wifi disconnected --", "nmcli -f WIFI-PROPERTIES.AP device show wlp0s20f3 WIFI-PROPERTIES.AP: yes", "dnf install dnsmasq NetworkManager-wifi", "nmcli device wifi hotspot ifname wlp0s20f3 con-name Example-Hotspot ssid Example-Hotspot password \" password \"", "nmcli connection modify Example-Hotspot 802-11-wireless-security.key-mgmt sae", "nmcli connection modify Example-Hotspot ipv4.addresses 192.0.2.254/24", "nmcli connection up Example-Hotspot", "ss -tulpn | grep -E \":53|:67\" udp UNCONN 0 0 10.42.0.1 :53 0.0.0.0:* users:((\"dnsmasq\",pid= 55905 ,fd= 6 )) udp UNCONN 0 0 0.0.0.0:67 0.0.0.0:* users:((\"dnsmasq\",pid= 55905 ,fd= 4 )) tcp LISTEN 0 32 10.42.0.1 :53 0.0.0.0:* users:((\"dnsmasq\",pid= 55905 ,fd= 7 ))", "nft list ruleset table ip nm-shared-wlp0s20f3 { chain nat_postrouting { type nat hook postrouting priority srcnat; policy accept; ip saddr 10.42.0.0/24 ip daddr != 10.42.0.0/24 masquerade } chain filter_forward { type filter hook forward priority filter; policy accept; ip daddr 10.42.0.0/24 oifname \" wlp0s20f3 \" ct state { established, related } accept ip saddr 10.42.0.0/24 iifname \" wlp0s20f3 \" accept iifname \" wlp0s20f3 \" oifname \" wlp0s20f3 \" accept iifname \" wlp0s20f3 \" reject oifname \" wlp0s20f3 \" reject } }", "nmcli device wifi IN-USE BSSID SSID MODE CHAN RATE SIGNAL BARS SECURITY 00:53:00:88:29:04 Example-Hotspot Infra 11 130 Mbit/s 62 ▂▄▆_ WPA3", "ping -c 3 www.redhat.com", "dd if=/dev/urandom count=16 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' 50b71a8ef0bd5751ea76de6d6c98c03a", "dd if=/dev/urandom count=32 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' 
f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550", "nmcli connection add type macsec con-name macsec0 ifname macsec0 connection.autoconnect yes macsec.parent enp1s0 macsec.mode psk macsec.mka-cak 50b71a8ef0bd5751ea76de6d6c98c03a macsec.mka-ckn f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550", "nmcli connection modify macsec0 ipv4.method manual ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253'", "nmcli connection modify macsec0 ipv6.method manual ipv6.addresses '2001:db8:1::1/32' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd'", "nmcli connection up macsec0", "tcpdump -nn -i enp1s0", "tcpdump -nn -i macsec0", "ip macsec show", "ip -s macsec show", "dd if=/dev/urandom count=16 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' 50b71a8ef0bd5751ea76de6d6c98c03a", "dd if=/dev/urandom count=32 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550", "--- routes: config: - destination: 0.0.0.0/0 next-hop-interface: macsec0 next-hop-address: 192.0.2.2 table-id: 254 - destination: 192.0.2.2/32 next-hop-interface: macsec0 next-hop-address: 0.0.0.0 table-id: 254 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb interfaces: - name: macsec0 type: macsec state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 32 ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 macsec: encrypt: true base-iface: enp0s1 mka-cak: 50b71a8ef0bd5751ea76de6d6c98c03a mka-ckn: f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550 port: 0 validation: strict send-sci: true", "nmstatectl apply create-macsec-connection.yml", "nmstatectl show macsec0", "tcpdump -nn -i enp0s1", "tcpdump -nn -i macsec0", "ip macsec show", "ip -s macsec show", "ip link add link real_NIC_device name IPVLAN_device type ipvlan mode l2", "ip link add link enp0s31f6 name my_ipvlan type ipvlan mode l2 ip link 47: my_ipvlan@enp0s31f6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether e8:6a:6e:8a:a2:44 brd ff:ff:ff:ff:ff:ff", "ip addr add dev IPVLAN_device IP_address/subnet_mask_prefix", "ip neigh add dev peer_device IPVLAN_device_IP_address lladdr MAC_address", "ip route add dev <real_NIC_device> <peer_IP_address/32>", "ip route add dev real_NIC_device peer_IP_address/32", "ip link set dev IPVLAN_device up", "ping IP_address", "ip link show 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:74:79:56 brd ff:ff:ff:ff:ff:ff", "[keyfile] unmanaged-devices=interface-name:enp1s0", "[keyfile] unmanaged-devices=mac:52:54:00:74:79:56", "[keyfile] unmanaged-devices=type:ethernet", "[keyfile] unmanaged-devices=interface-name:enp1s0;interface-name:enp7s0", "systemctl reload NetworkManager", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unmanaged --", "NetworkManager --print-config [keyfile] unmanaged-devices=interface-name:enp1s0", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet disconnected --", "nmcli device set enp1s0 managed no", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unmanaged --", "nmcli connection add con-name example-loopback type loopback", "nmcli connection modify example-loopback +ipv4.addresses 192.0.2.1/24", "nmcli con mod example-loopback loopback.mtu 16384", "nmcli connection modify example-loopback ipv4.dns 192.0.2.0", "nmcli connection up example-loopback", 
"ip address show lo 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16384 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 192.0.2.1/24 brd 192.0.2.255 scope global lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever", "cat /etc/resolv.conf nameserver 192.0.2.0", "nmcli connection add type dummy ifname dummy0 ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv6.method manual ipv6.addresses 2001:db8:2::1/64", "nmcli connection show NAME UUID TYPE DEVICE dummy-dummy0 aaf6eb56-73e5-4746-9037-eed42caa8a65 dummy dummy0", "nmcli connection show NAME UUID TYPE DEVICE Example 7a7e0151-9c18-4e6f-89ee-65bb2d64d365 ethernet enp1s0", "nmcli connection modify Example ipv6.method \"disabled\"", "nmcli connection up Example", "ip address show enp1s0 2: enp1s0 : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:6b:74:be brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.10.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever", "cat /proc/sys/net/ipv6/conf/ enp1s0 /disable_ipv6 1", "nmcli general hostname old-hostname.example.com", "nmcli general hostname new-hostname.example.com", "reboot", "systemctl restart <service_name>", "nmcli general hostname new-hostname.example.com", "hostnamectl status --static old-hostname.example.com", "hostnamectl set-hostname new-hostname.example.com", "reboot", "systemctl restart <service_name>", "hostnamectl status --static new-hostname.example.com", "[main] dhcp=dhclient", "dnf install dhcp-client", "systemctl restart NetworkManager", "Apr 26 09:54:19 server NetworkManager[27748]: <info> [1650959659.8483] dhcp-init: Using DHCP client 'dhclient'", "nmcli connection modify <connection_name> ipv4.dhcp-timeout 30 ipv6.dhcp-timeout 30", "nmcli connection modify <connection_name> ipv4.may-fail <value>", "nmcli connection modify <connection_name> ipv6.may-fail <value>", "#!/bin/bash Run dhclient.exit-hooks.d scripts if [ -n \"USDDHCP4_DHCP_LEASE_TIME\" ] ; then if [ \"USD2\" = \"dhcp4-change\" ] || [ \"USD2\" = \"up\" ] ; then if [ -d /etc/dhcp/dhclient-exit-hooks.d ] ; then for f in /etc/dhcp/dhclient-exit-hooks.d/*.sh ; do if [ -x \"USD{f}\" ]; then . 
\"USD{f}\" fi done fi fi fi", "chown root:root /etc/NetworkManager/dispatcher.d/12-dhclient-down", "chmod 0700 /etc/NetworkManager/dispatcher.d/12-dhclient-down", "restorecon /etc/NetworkManager/dispatcher.d/12-dhclient-down", "[main] dns=none", "systemctl reload NetworkManager", "systemctl reload NetworkManager", "cat /etc/resolv.conf", "NetworkManager --print-config dns=none", "rm /etc/resolv.conf", "ln -s /etc/resolv.conf.manually-configured /etc/resolv.conf", "[connection]", "ipv4.dns-priority=200 ipv6.dns-priority=200", "systemctl reload NetworkManager", "nmcli connection show NAME UUID TYPE DEVICE Example_con_1 d17ee488-4665-4de2-b28a-48befab0cd43 ethernet enp1s0 Example_con_2 916e4f67-7145-3ffa-9f7b-e7cada8f6bf7 ethernet enp7s0", "nmcli connection modify <connection_name> ipv4.dns-priority 10 ipv6.dns-priority 10", "nmcli connection up <connection_name>", "cat /etc/resolv.conf", "dnf install dnsmasq", "dns=dnsmasq", "systemctl reload NetworkManager", "journalctl -xeu NetworkManager Jun 02 13:30:17 <client_hostname>_ dnsmasq[5298]: using nameserver 198.51.100.7#53 for domain example.com", "dnf install tcpdump", "tcpdump -i any port 53", "host -t A www.example.com host -t A www.redhat.com", "13:52:42.234533 tun0 Out IP server .43534 > 198.51.100.7 .domain: 50121+ A? www.example.com. (33) 13:52:57.753235 enp1s0 Out IP server .40864 > 192.0.2.1 .domain: 6906+ A? www.redhat.com. (33)", "cat /etc/resolv.conf nameserver 127.0.0.1", "ss -tulpn | grep \"127.0.0.1:53\" udp UNCONN 0 0 127.0.0.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=7340,fd=18)) tcp LISTEN 0 32 127.0.0.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=7340,fd=19))", "journalctl -u NetworkManager", "systemctl --now enable systemd-resolved", "dns=systemd-resolved", "systemctl reload NetworkManager", "resolvectl Link 2 ( enp1s0 ) Current Scopes: DNS Protocols: +DefaultRoute Current DNS Server: 192.0.2.1 DNS Servers: 192.0.2.1 Link 3 ( tun0 ) Current Scopes: DNS Protocols: -DefaultRoute Current DNS Server: 198.51.100.7 DNS Servers: 198.51.100.7 203.0.113.19 DNS Domain: example.com", "dnf install tcpdump", "tcpdump -i any port 53", "host -t A www.example.com host -t A www.redhat.com", "13:52:42.234533 tun0 Out IP server .43534 > 198.51.100.7 .domain: 50121+ A? www.example.com. (33) 13:52:57.753235 enp1s0 Out IP server .40864 > 192.0.2.1 .domain: 6906+ A? www.redhat.com. 
(33)", "cat /etc/resolv.conf nameserver 127.0.0.53", "ss -tulpn | grep \"127.0.0.53\" udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* users:((\"systemd-resolve\",pid=1050,fd=12)) tcp LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:((\"systemd-resolve\",pid=1050,fd=13))", "nmcli connection modify <connection_name> ipv4.gateway \" <IPv4_gateway_address> \"", "nmcli connection modify <connection_name> ipv6.gateway \" <IPv6_gateway_address> \"", "nmcli connection up <connection_name>", "ip -4 route default via 192.0.2.1 dev example proto static metric 100", "ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium", "nmcli connection edit <connection_name>", "nmcli> set ipv4.gateway \" <IPv4_gateway_address> \"", "nmcli> set ipv6.gateway \" <IPv6_gateway_address> \"", "nmcli> print ipv4.gateway: <IPv4_gateway_address> ipv6.gateway: <IPv6_gateway_address>", "nmcli> save persistent", "nmcli> activate <connection_name>", "nmcli> quit", "ip -4 route default via 192.0.2.1 dev example proto static metric 100", "ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium", "nm-connection-editor", "nmcli connection up example", "ip -4 route default via 192.0.2.1 dev example proto static metric 100", "ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium", "ip -4 route default via 192.0.2.1 dev example proto static metric 100", "ip -6 route default via 2001:db8:1::1 dev example proto static metric 100 pref medium", "--- routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.1 next-hop-interface: enp1s0", "nmstatectl apply ~/set-default-gateway.yml", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 198.51.100.20/24 - 2001:db8:1::1/64 gateway4: 198.51.100.254 gateway6: 2001:db8:1::fffe dns: - 198.51.100.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"gateway\": \"198.51.100.254\", \"interface\": \"enp1s0\", }, \"ansible_default_ipv6\": { \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", }", "nmcli connection modify <connection_name> ipv4.route-metric <value> ipv6.route-metric <value>", "nmcli connection modify <connection_name> ipv4.never-default yes ipv6.never-default yes", "nmcli connection up <connection_name>", "ip -4 route default via 192.0.2.1 dev enp1s0 proto static metric 101 default via 198.51.100.1 dev enp7s0 proto static metric 102", "ip -6 route default via 2001:db8:1::1 dev enp1s0 proto static metric 101 pref medium default via 2001:db8:2::1 dev enp7s0 proto static metric 102 pref medium", "nmcli -f GENERAL.CONNECTION,IP4.GATEWAY,IP6.GATEWAY device show enp1s0 GENERAL.CONNECTION: Corporate-LAN IP4.GATEWAY: 192.0.2.1 IP6.GATEWAY: 2001:db8:1::1 nmcli -f GENERAL.CONNECTION,IP4.GATEWAY,IP6.GATEWAY device show enp7s0 GENERAL.CONNECTION: Internet-Provider IP4.GATEWAY: 198.51.100.1 IP6.GATEWAY: 2001:db8:2::1", "nmcli connection modify Corporate-LAN ipv4.never-default yes ipv6.never-default yes", "nmcli connection up Corporate-LAN", "ip -4 route default via 198.51.100.1 dev enp7s0 proto static metric 102", "ip -6 route default via 2001:db8:2::1 dev enp7s0 proto static 
metric 102 pref medium", "nmcli connection modify connection_name ipv4.routes \" ip [/ prefix ] [ next_hop ] [ metric ] [ attribute = value ] [ attribute = value ] ...\"", "nmcli connection modify connection_name +ipv4.routes \" <route> \"", "nmcli connection modify connection_name -ipv4.routes \" <route> \"", "nmcli connection modify LAN +ipv4.routes \"198.51.100.0/24 192.0.2.10\"", "nmcli connection modify <connection_profile> +ipv4.routes \" <remote_network_1> / <subnet_mask_1> <gateway_1> , <remote_network_n> / <subnet_mask_n> <gateway_n> , ...\"", "nmcli connection modify LAN +ipv6.routes \"2001:db8:2::/64 2001:db8:1::10\"", "nmcli connection up LAN", "ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0", "ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium", "nmtui", "ip route 192.0.2.0/24 via 198.51.100.1 dev example proto static metric 100", "ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0", "ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium", "nm-connection-editor", "nmcli connection up example", "ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0", "ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium", "nmcli connection edit example", "nmcli> set ipv4.routes 198.51.100.0/24 192.0.2.10", "nmcli> set ipv6.routes 2001:db8:2::/64 2001:db8:1::10", "nmcli> print ipv4.routes: { ip = 198.51.100.0/24 , nh = 192.0.2.10 } ipv6.routes: { ip = 2001:db8:2::/64 , nh = 2001:db8:1::10 }", "nmcli> save persistent", "nmcli> activate example", "nmcli> quit", "ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0", "ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium", "--- routes: config: - destination: 198.51.100.0/24 next-hop-address: 192.0.2.10 next-hop-interface: enp1s0 - destination: 2001:db8:2::/64 next-hop-address: 2001:db8:1::10 next-hop-interface: enp1s0", "nmstatectl apply ~/add-static-route-to-enp1s0.yml", "ip -4 route 198.51.100.0/24 via 192.0.2.10 dev enp1s0", "ip -6 route 2001:db8:2::/64 via 2001:db8:1::10 dev enp1s0 metric 1024 pref medium", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp7s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com route: - network: 198.51.100.0 prefix: 24 gateway: 192.0.2.10 - network: 2001:db8:2:: prefix: 64 gateway: 2001:db8:1::10 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip -4 route' managed-node-01.example.com | CHANGED | rc=0 >> 198.51.100.0/24 via 192.0.2.10 dev enp7s0", "ansible managed-node-01.example.com -m command -a 'ip -6 route' managed-node-01.example.com | CHANGED | rc=0 >> 2001:db8:2::/64 via 2001:db8:1::10 dev enp7s0 metric 1024 pref medium", "nmcli connection add type ethernet con-name Provider-A ifname enp7s0 ipv4.method manual ipv4.addresses 198.51.100.1/30 ipv4.gateway 198.51.100.2 ipv4.dns 198.51.100.200 connection.zone external", "nmcli connection add type ethernet con-name Provider-B ifname enp1s0 ipv4.method manual ipv4.addresses 192.0.2.1/30 ipv4.routes \"0.0.0.0/0 192.0.2.2 table=5000\" connection.zone external", "nmcli connection add type ethernet con-name 
Internal-Workstations ifname enp8s0 ipv4.method manual ipv4.addresses 10.0.0.1/24 ipv4.routes \"10.0.0.0/24 table=5000\" ipv4.routing-rules \"priority 5 from 10.0.0.0/24 table 5000\" connection.zone trusted", "nmcli connection add type ethernet con-name Servers ifname enp9s0 ipv4.method manual ipv4.addresses 203.0.113.1/24 connection.zone trusted", "dnf install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms", "dnf install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms", "ip rule list 0: from all lookup local 5 : from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default", "ip route list table 5000 0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102", "firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0", "firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes", "--- - name: Configuring policy-based routing hosts: managed-node-01.example.com tasks: - name: Routing traffic from a specific subnet to a different default gateway ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: Provider-A interface_name: enp7s0 type: ethernet autoconnect: True ip: address: - 198.51.100.1/30 gateway4: 198.51.100.2 dns: - 198.51.100.200 state: up zone: external - name: Provider-B interface_name: enp1s0 type: ethernet autoconnect: True ip: address: - 192.0.2.1/30 route: - network: 0.0.0.0 prefix: 0 gateway: 192.0.2.2 table: 5000 state: up zone: external - name: Internal-Workstations interface_name: enp8s0 type: ethernet autoconnect: True ip: address: - 10.0.0.1/24 route: - network: 10.0.0.0 prefix: 24 table: 5000 routing_rule: - priority: 5 from: 10.0.0.0/24 table: 5000 state: up zone: trusted - name: Servers interface_name: enp9s0 type: ethernet autoconnect: True ip: address: - 203.0.113.1/24 state: up zone: trusted", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "dnf install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms", "dnf install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms", "ip rule list 0: from all lookup local 5 : from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default", "ip route list table 5000 0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102", "firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0", "firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes", "nmcli connection add type vrf ifname vrf0 
con-name vrf0 table 1001 ipv4.method disabled ipv6.method disabled", "nmcli connection up vrf0", "nmcli connection add type ethernet con-name vrf.enp1s0 ifname enp1s0 controller vrf0 ipv4.method manual ipv4.address 192.0.2.1/24", "nmcli connection up vrf.enp1s0", "nmcli connection add type vrf ifname vrf1 con-name vrf1 table 1002 ipv4.method disabled ipv6.method disabled", "nmcli connection up vrf1", "nmcli connection add type ethernet con-name vrf.enp7s0 ifname enp7s0 controller vrf1 ipv4.method manual ipv4.address 192.0.2.1/24", "nmcli connection up vrf.enp7s0", "ip link add dev blue type vrf table 1001", "ip link set dev blue up", "ip link set dev enp1s0 master blue", "ip link set dev enp1s0 up", "ip addr add dev enp1s0 192.0.2.1/24", "ip link add dev red type vrf table 1002", "ip link set dev red up", "ip link set dev enp7s0 master red", "ip link set dev enp7s0 up", "ip addr add dev enp7s0 192.0.2.1/24", "nmcli connection add type vrf ifname vrf0 con-name vrf0 table 1000 ipv4.method disabled ipv6.method disabled", "nmcli connection add type ethernet con-name enp1s0 ifname enp1s0 controller vrf0 ipv4.method manual ipv4.address 192.0.2.1/24 ipv4.gateway 192.0.2.254", "nmcli connection modify enp1s0 +ipv4.routes \" 198.51.100.0/24 192.0.2.2 \"", "nmcli connection up enp1s0", "ip -br addr show vrf vrf0 enp1s0 UP 192.0.2.1/24", "ip vrf show Name Table ----------------------- vrf0 1000", "ip route show default via 203.0.113.0/24 dev enp7s0 proto static metric 100", "ip route show table 1000 default via 192.0.2.254 dev enp1s0 proto static metric 101 broadcast 192.0.2.0 dev enp1s0 proto kernel scope link src 192.0.2.1 192.0.2.0 /24 dev enp1s0 proto kernel scope link src 192.0.2.1 metric 101 local 192.0.2.1 dev enp1s0 proto kernel scope host src 192.0.2.1 broadcast 192.0.2.255 dev enp1s0 proto kernel scope link src 192.0.2.1 198.51.100.0/24 via 192.0.2.2 dev enp1s0 proto static metric 101", "ip vrf exec vrf0 traceroute 203.0.113.1 traceroute to 203.0.113.1 ( 203.0.113.1 ), 30 hops max, 60 byte packets 1 192.0.2.254 ( 192.0.2.254 ) 0.516 ms 0.459 ms 0.430 ms", "systemctl cat httpd [Service] ExecStart=/usr/sbin/httpd USDOPTIONS -DFOREGROUND", "mkdir /etc/systemd/system/httpd.service.d/", "[Service] ExecStart= ExecStart=/usr/sbin/ip vrf exec vrf0 /usr/sbin/httpd USDOPTIONS -DFOREGROUND", "systemctl daemon-reload", "systemctl restart httpd", "pidof -c httpd 1904", "ip vrf identify 1904 vrf0", "ip vrf pids vrf0 1904 httpd", "nmcli con modify enp1s0 ethtool.feature-rx on ethtool.feature-tx off", "nmcli con modify enp1s0 ethtool.feature-tx \"\"", "nmcli connection up enp1s0", "ethtool -k network_device", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_enp1s0\": { \"active\": true, \"device\": \"enp1s0\", \"features\": { \"rx_gro_hw\": \"off, \"tx_gso_list\": \"on, \"tx_sctp_segmentation\": \"off\", }", "nmcli connection modify enp1s0 ethtool.coalesce-rx-frames 128", "nmcli connection modify enp1s0 ethtool.coalesce-rx-frames \"\"", "nmcli connection up enp1s0", "ethtool -c network_device", "--- - 
name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -c enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> rx-frames: 128 tx-frames: 128", "ethtool -S enp1s0 rx_queue_0_drops: 97326 rx_queue_1_drops: 63783", "ethtool -g enp1s0 Ring parameters for enp1s0 : Pre-set maximums: RX: 4096 RX Mini: 0 RX Jumbo: 16320 TX: 4096 Current hardware settings: RX: 255 RX Mini: 0 RX Jumbo: 0 TX: 255", "nmcli connection show NAME UUID TYPE DEVICE Example-Connection a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0", "nmcli connection modify Example-Connection ethtool.ring-rx 4096", "nmcli connection modify Example-Connection ethtool.ring-tx 4096", "nmcli connection up Example-Connection", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -g enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> Current hardware settings: RX: 4096 RX Mini: 0 RX Jumbo: 0 TX: 4096", "ethtool --show-channels enp1s0 Channel parameters for enp1s0 : Pre-set maximums: RX: 4 TX: 3 Other: 10 Combined: 63 Current hardware settings: RX: 1 TX: 1 Other: 1 Combined: 1", "nmcli connection modify enp1s0 ethtool.channels-rx 4 ethtool.channels-tx 3 ethtools.channels-other 9 ethtool.channels-combined 50", "nmcli connection up enp1s0", "ethtool --show-channels enp1s0 Channel parameters for enp1s0 : Pre-set maximums: RX: 4 TX: 3 Other: 10 Combined: 63 Current hardware settings: RX: 4 TX: 3 Other: 9 Combined: 50", "RateLimitBurst=0", "systemctl restart systemd-journald", "[logging] domains=ALL:TRACE", "systemctl restart NetworkManager", "journalctl -u NetworkManager Jun 30 15:24:32 server NetworkManager[164187]: <debug> [1656595472.4939] active-connection[0x5565143c80a0]: update activation type from assume to managed Jun 30 15:24:32 server NetworkManager[164187]: <trace> [1656595472.4939] device[55b33c3bdb72840c] (enp1s0): sys-iface-state: assume -> managed Jun 30 15:24:32 server NetworkManager[164187]: <trace> [1656595472.4939] l3cfg[4281fdf43e356454,ifindex=3]: commit type register (type \"update\", source \"device\", existing a369f23014b9ede3) -> a369f23014b9ede3 Jun 30 15:24:32 server NetworkManager[164187]: <info> [1656595472.4940] manager: NetworkManager state is now CONNECTED_SITE", "nmcli general logging LEVEL DOMAINS INFO PLATFORM,RFKILL,ETHER,WIFI,BT,MB,DHCP4,DHCP6,PPP,WIFI_SCAN,IP4,IP6,A UTOIP4,DNS,VPN,SHARING,SUPPLICANT,AGENTS,SETTINGS,SUSPEND,CORE,DEVICE,OLPC, WIMAX,INFINIBAND,FIREWALL,ADSL,BOND,VLAN,BRIDGE,DBUS_PROPS,TEAM,CONCHECK,DC B,DISPATCH", "nmcli general logging level LEVEL domains ALL", "nmcli general logging level LEVEL domains DOMAINS", "nmcli general logging level KEEP domains 
DOMAIN:LEVEL , DOMAIN:LEVEL", "journalctl -u NetworkManager -b", "interfaces: - name: enp1s0 type: ethernet lldp: enabled: true", "nmstatectl apply ~/enable-LLDP-enp1s0.yml", "nmstatectl show enp1s0 - name: enp1s0 type: ethernet state: up ipv4: enabled: false dhcp: false ipv6: enabled: false autoconf: false dhcp: false lldp: enabled: true neighbors: - - type: 5 system-name: Summit300-48 - type: 6 system-description: Summit300-48 - Version 7.4e.1 (Build 5) 05/27/05 04:53:11 - type: 7 system-capabilities: - MAC Bridge component - Router - type: 1 _description: MAC address chassis-id: 00:01:30:F9:AD:A0 chassis-id-type: 4 - type: 2 _description: Interface name port-id: 1/1 port-id-type: 5 - type: 127 ieee-802-1-vlans: - name: v2-0488-03-0505 vid: 488 oui: 00:80:c2 subtype: 3 - type: 127 ieee-802-3-mac-phy-conf: autoneg: true operational-mau-type: 16 pmd-autoneg-cap: 27648 oui: 00:12:0f subtype: 1 - type: 127 ieee-802-1-ppvids: - 0 oui: 00:80:c2 subtype: 2 - type: 8 management-addresses: - address: 00:01:30:F9:AD:A0 address-subtype: MAC interface-number: 1001 interface-number-subtype: 2 - type: 127 ieee-802-3-max-frame-size: 1522 oui: 00:12:0f subtype: 4 mac-address: 82:75:BE:6F:8C:7A mtu: 1500", "- type: 127 ieee-802-1-vlans: - name: v2-0488-03-0505 vid: 488", "tc qdisc show dev enp0s1", "tc -s qdisc show dev enp0s1 qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn Sent 1008193 bytes 5559 pkt (dropped 233, overlimits 55 requeues 77) backlog 0b 0p requeues 0", "sysctl -a | grep qdisc net.core.default_qdisc = fq_codel", "tc -s qdisc show dev enp0s1 qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0 new_flows_len 0 old_flows_len 0", "sysctl -w net.core.default_qdisc=pfifo_fast", "modprobe -r NETWORKDRIVERNAME modprobe NETWORKDRIVERNAME", "ip link set enp0s1 up", "tc -s qdisc show dev enp0s1 qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 Sent 373186 bytes 5333 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 .", "tc -s qdisc show dev enp0s1", "tc qdisc replace dev enp0s1 root htb", "tc -s qdisc show dev enp0s1 qdisc htb 8001: root refcnt 2 r2q 10 default 0 direct_packets_stat 0 direct_qlen 1000 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0", "tc qdisc show dev enp0s1 qdisc fq_codel 0: root refcnt 2", "nmcli connection modify enp0s1 tc.qdiscs 'root pfifo_fast'", "nmcli connection modify enp0s1 +tc.qdisc 'ingress handle ffff:'", "nmcli connection up enp0s1", "tc qdisc show dev enp0s1 qdisc pfifo_fast 8001: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc ingress ffff: parent ffff:fff1 ----------------", "ip link add name ifb4eth0 numtxqueues 48 numrxqueues 48 type ifb", "ip link set dev ifb4eth0 up", "tc qdisc add dev enp1s0 handle ffff: ingress", "tc filter add dev enp1s0 parent ffff: protocol ip u32 match u32 0 0 action ctinfo cpmark 100 action mirred egress redirect dev ifb4eth0", "tc qdisc add dev ifb4eth0 root handle 1: htb default 1000", "tc class add dev ifb4eth0 parent 1:1 classid 1:100 htb ceil 2mbit rate 1mbit prio 100", "tc qdisc add dev ifb4eth0 parent 1:100 sfq perturb 60", "tc filter add dev ifb4eth0 parent 1:0 protocol ip prio 100 handle 100 fw classid 1:100", "nft add rule ip mangle PREROUTING counter meta 
mark set ct mark", "nft add table ip mangle nft add chain ip mangle PREROUTING {type filter hook prerouting priority mangle \\;}", "nft add rule ip mangle PREROUTING ip daddr 192.0.2.3 counter meta mark set 0x64", "nft add rule ip mangle PREROUTING counter ct mark set mark", "iperf3 -s", "iperf3 -c 192.0.2.3 -t TCP_STREAM | tee rate", "Accepted connection from 192.0.2.4, port 52128 [5] local 192.0.2.3 port 5201 connected to 192.0.2.4 port 52130 [ID] Interval Transfer Bitrate [5] 0.00-1.00 sec 119 KBytes 973 Kbits/sec [5] 1.00-2.00 sec 116 KBytes 950 Kbits/sec [ID] Interval Transfer Bitrate [5] 0.00-14.81 sec 1.51 MBytes 853 Kbits/sec receiver iperf3: interrupt - the server has terminated", "Connecting to host 192.0.2.3, port 5201 [5] local 192.0.2.4 port 52130 connected to 192.0.2.3 port 5201 [ID] Interval Transfer Bitrate Retr Cwnd [5] 0.00-1.00 sec 481 KBytes 3.94 Mbits/sec 0 76.4 KBytes [5] 1.00-2.00 sec 223 KBytes 1.83 Mbits/sec 0 82.0 KBytes [ID] Interval Transfer Bitrate Retr [5] 0.00-14.00 sec 3.92 MBytes 2.35 Mbits/sec 32 sender [5] 0.00-14.00 sec 0.00 Bytes 0.00 bits/sec receiver iperf3: error - the server has terminated", "tc -s qdisc show dev ifb4eth0 qdisc htb 1: root Sent 26611455 bytes 3054 pkt (dropped 76, overlimits 4887 requeues 0) qdisc sfq 8001: parent Sent 26535030 bytes 2296 pkt (dropped 76, overlimits 0 requeues 0)", "tc -s filter show dev enp1s0 ingress filter parent ffff: protocol ip pref 49152 u32 chain 0 filter parent ffff: protocol ip pref 49152 u32 chain 0 fh 800: ht divisor 1 filter parent ffff: protocol ip pref 49152 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 terminal flowid not_in_hw (rule hit 8075 success 8075) match 00000000/00000000 at 0 (success 8075 ) action order 1: ctinfo zone 0 pipe index 1 ref 1 bind 1 cpmark 0x00000064 installed 3105 sec firstused 3105 sec DSCP set 0 error 0 CPMARK set 7712 Action statistics: Sent 25891504 bytes 3137 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 action order 2: mirred (Egress Redirect to device ifb4eth0) stolen index 1 ref 1 bind 1 installed 3105 sec firstused 3105 sec Action statistics: Sent 25891504 bytes 3137 pkt (dropped 0, overlimits 61 requeues 0) backlog 0b 0p requeues 0", "tc -s class show dev ifb4eth0 class htb 1:100 root leaf 8001: prio 7 rate 1Mbit ceil 2Mbit burst 1600b cburst 1600b Sent 26541716 bytes 2373 pkt (dropped 61, overlimits 4887 requeues 0) backlog 0b 0p requeues 0 lended: 7248 borrowed: 0 giants: 0 tokens: 187250 ctokens: 93625", "nmcli connection modify enp1s0 802-1x.eap tls 802-1x.client-cert /etc/pki/tls/certs/client.crt 802-1x.private-key /etc/pki/tls/certs/certs/client.key", "nmcli connection modify enp1s0 802-1x.ca-cert /etc/pki/tls/certs/ca.crt", "nmcli connection modify enp1s0 802-1x.identity [email protected]", "nmcli connection modify enp1s0 802-1x.private-key-password password", "nmcli connection up enp1s0", "--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false 802.1x: ca-cert: /etc/pki/tls/certs/ca.crt client-cert: /etc/pki/tls/certs/client.crt eap-methods: - tls identity: client.example.org private-key: /etc/pki/tls/private/client.key private-key-password: password routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: enp1s0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: enp1s0 dns-resolver: config: search: - 
example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-ethernet-profile.yml", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configure an Ethernet connection with 802.1X authentication hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0600 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Ethernet connection profile with static IP address settings and 802.1X ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/private/client.key\" private_key_password: \"{{ pwd }}\" client_cert: \"/etc/pki/tls/certs/client.crt\" ca_cert: \"/etc/pki/ca-trust/source/anchors/ca.crt\" domain_suffix_match: example.com state: up", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configure a wifi connection with 802.1X authentication hosts: managed-node-01.example.com tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0400 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Wifi connection profile with dynamic IP address settings and 802.1X ansible.builtin.import_role: name: rhel-system-roles.network vars: network_connections: - name: Wifi connection profile with dynamic IP address settings and 802.1X interface_name: wlp1s0 state: up type: wireless autoconnect: yes ip: dhcp4: true auto6: true wireless: ssid: \"Example-wifi\" key_mgmt: \"wpa-eap\" ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/client.key\" private_key_password: \"{{ pwd }}\" private_key_password_flags: none client_cert: \"/etc/pki/tls/client.pem\" ca_cert: \"/etc/pki/tls/cacert.pem\" domain_suffix_match: \"example.com\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "nmcli connection add type bridge con-name br0 ifname br0", "nmcli connection add type ethernet port-type bridge con-name br0-port1 ifname enp1s0 controller br0 nmcli connection add type ethernet port-type bridge con-name br0-port2 ifname enp7s0 controller br0 nmcli connection add type ethernet port-type bridge con-name br0-port3 ifname enp8s0 controller br0 nmcli connection add type ethernet port-type bridge con-name br0-port4 ifname enp9s0 controller br0", "nmcli 
connection modify br0 group-forward-mask 8", "nmcli connection modify br0 connection.autoconnect-ports 1", "nmcli connection up br0", "ip link show master br0 3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff", "cat /sys/class/net/br0/bridge/group_fwd_mask 0x8", "ipa-getcert request -w -k /etc/pki/tls/private/radius.key -f /etc/pki/tls/certs/radius.pem -o \"root:radiusd\" -m 640 -O \"root:radiusd\" -M 640 -T caIPAserviceCert -C 'systemctl restart radiusd.service' -N freeradius.idm.example.com -D freeradius.idm.example.com -K radius/ freeradius.idm.example.com", "ipa-getcert list -f /etc/pki/tls/certs/radius.pem Number of certificates and requests being tracked: 1. Request ID '20240918142211': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/radius.key' certificate: type=FILE,location='/etc/pki/tls/certs/radius.crt'", "openssl dhparam -out /etc/raddb/certs/dh 2048", "eap { tls-config tls-common { private_key_file = /etc/pki/tls/private/radius.key certificate_file = /etc/pki/tls/certs/radius.pem ca_file = /etc/ipa/ca.crt } }", "eap { default_eap_type = ttls }", "eap { # md5 { # } }", "authenticate { # Auth-Type PAP { # pap # } # Auth-Type CHAP { # chap # } # Auth-Type MS-CHAP { # mschap # } # mschap # digest }", "authorize { #-ldap ldap if ((ok || updated) && User-Password) { update { control:Auth-Type := ldap } } }", "authenticate { Auth-Type LDAP { ldap } }", "ln -s /etc/raddb/mods-available/ldap /etc/raddb/mods-enabled/ldap", "ldap { server = 'ldaps:// idm_server.idm.example.com ' base_dn = 'cn=users,cn=accounts, dc=idm,dc=example,dc=com ' }", "tls { require_cert = 'demand' }", "client localhost { ipaddr = 127.0.0.1 secret = localhost_client_password } client localhost_ipv6 { ipv6addr = ::1 secret = localhost_client_password }", "client hostapd.example.org { ipaddr = 192.0.2.2/32 secret = hostapd_client_password }", "client <hostname_or_description> { ipaddr = <IP_address_or_range> secret = <client_password> }", "radiusd -XC Configuration appears to be OK", "firewall-cmd --permanent --add-service=radius firewall-cmd --reload", "systemctl enable --now radiusd", "host -v idm_server.idm.example.com", "systemctl stop radiusd", "radiusd -X Ready to process requests", "General settings of hostapd =========================== Control interface settings ctrl_interface= /var/run/hostapd ctrl_interface_group= wheel Enable logging for all modules logger_syslog= -1 logger_stdout= -1 Log level logger_syslog_level= 2 logger_stdout_level= 2 Wired 802.1X authentication =========================== Driver interface type driver=wired Enable IEEE 802.1X authorization ieee8021x=1 Use port access entry (PAE) group address (01:80:c2:00:00:03) when sending EAPOL frames use_pae_group_addr=1 Network interface for authentication requests interface= br0 RADIUS client configuration =========================== Local IP address used as NAS-IP-Address own_ip_addr= 192.0.2.2 Unique NAS-Identifier within scope of RADIUS server nas_identifier= hostapd.example.org RADIUS authentication server auth_server_addr= 192.0.2.1 auth_server_port= 1812 auth_server_shared_secret= hostapd_client_password RADIUS accounting server acct_server_addr= 192.0.2.1 acct_server_port= 1813 acct_server_shared_secret= hostapd_client_password", "systemctl enable --now hostapd", "ip link show br0", "systemctl stop hostapd", "hostapd -d /etc/hostapd/hostapd.conf", "ipa user-add --first \" 
Test \" --last \" User \" idm_user --password", "ap_scan=0 network={ eap=TTLS eapol_flags=0 key_mgmt=IEEE8021X # Anonymous identity (sent in unencrypted phase 1) # Can be any string anonymous_identity=\" anonymous \" # Inner authentication (sent in TLS-encrypted phase 2) phase2=\"auth= PAP \" identity=\" idm_user \" password=\" idm_user_password \" # CA certificate to validate the RADIUS server's identity ca_cert=\" /etc/ipa/ca.crt \" }", "eapol_test -c /etc/wpa_supplicant/wpa_supplicant-TTLS.conf -a 192.0.2.1 -s <client_password> EAP: Status notification: remote certificate verification (param=success) CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully SUCCESS", "wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant-TTLS.conf -D wired -i enp0s31f6 enp0s31f6: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully", "#!/bin/sh TABLE=\"tr-mgmt-USD{1}\" read -r -d '' TABLE_DEF << EOF table bridge USD{TABLE} { set allowed_macs { type ether_addr } chain accesscontrol { ether saddr @allowed_macs accept ether daddr @allowed_macs accept drop } chain forward { type filter hook forward priority 0; policy accept; meta ibrname \"br0\" jump accesscontrol } } EOF case USD{2:-NOTANEVENT} in block_all) nft destroy table bridge \"USDTABLE\" printf \"USDTABLE_DEF\" | nft -f - echo \"USD1: All the bridge traffic blocked. Traffic for a client with a given MAC will be allowed after 802.1x authentication\" ;; AP-STA-CONNECTED | CTRL-EVENT-EAP-SUCCESS | CTRL-EVENT-EAP-SUCCESS2) nft add element bridge tr-mgmt-br0 allowed_macs { USD3 } echo \"USD1: Allowed traffic from USD3\" ;; AP-STA-DISCONNECTED | CTRL-EVENT-EAP-FAILURE) nft delete element bridge tr-mgmt-br0 allowed_macs { USD3 } echo \"USD1: Denied traffic from USD3\" ;; allow_all) nft destroy table bridge \"USDTABLE\" echo \"USD1: Allowed all bridge traffice again\" ;; NOTANEVENT) echo \"USD0 was called incorrectly, usage: USD0 interface event [mac_address]\" ;; esac", "[Unit] Description=Example 802.1x traffic management for hostapd After=hostapd.service After=sys-devices-virtual-net-%i.device [Service] Type=simple ExecStartPre=bash -c '/usr/sbin/hostapd_cli ping | grep PONG' ExecStartPre=/usr/local/bin/802-1x-tr-mgmt %i block_all ExecStart=/usr/sbin/hostapd_cli -i %i -a /usr/local/bin/802-1x-tr-mgmt ExecStopPost=/usr/local/bin/802-1x-tr-mgmt %i allow_all [Install] WantedBy=multi-user.target", "systemctl daemon-reload", "systemctl enable --now [email protected]", "echo \"net.mptcp.enabled=1\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf", "mptcpize run iperf3 -s Server listening on 5201", "mptcpize iperf3 -c 127.0.0.1 -t 3", "ss -nti '( dport :5201 )' State Recv-Q Send-Q Local Address:Port Peer Address:Port Process ESTAB 0 0 127.0.0.1:41842 127.0.0.1:5201 cubic wscale:7,7 rto:205 rtt:4.455/8.878 ato:40 mss:21888 pmtu:65535 rcvmss:536 advmss:65483 cwnd:10 bytes_sent:141 bytes_acked:142 bytes_received:4 segs_out:8 segs_in:7 data_segs_out:3 data_segs_in:3 send 393050505bps lastsnd:2813 lastrcv:2772 lastack:2772 pacing_rate 785946640bps delivery_rate 10944000000bps delivered:4 busy:41ms rcv_space:43690 rcv_ssthresh:43690 minrtt:0.008 tcp-ulp-mptcp flags:Mmec token:0000(id:0)/2ff053ec(id:0) seq:3e2cbea12d7673d4 sfseq:3 ssnoff:ad3d00f4 maplen:2", "nstat MPTcp * #kernel MPTcpExtMPCapableSYNRX 2 0.0 MPTcpExtMPCapableSYNTX 2 0.0 MPTcpExtMPCapableSYNACKRX 2 0.0 MPTcpExtMPCapableACKRX 2 0.0", "ip mptcp limits set add_addr_accepted 1", "ip mptcp endpoint add 198.51.100.1 dev enp1s0 signal", "mptcpize run iperf3 
-s Server listening on 5201", "mptcpize iperf3 -c 192.0.2.1 -t 3", "ss -nti '( sport :5201 )'", "ip mptcp limit show", "ip mptcp endpoint show", "nstat MPTcp * #kernel MPTcpExtMPCapableSYNRX 2 0.0 MPTcpExtMPCapableACKRX 2 0.0 MPTcpExtMPJoinSynRx 2 0.0 MPTcpExtMPJoinAckRx 2 0.0 MPTcpExtEchoAdd 2 0.0", "echo \"net.mptcp.enabled=1\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf", "[Unit] Description=Set MPTCP subflow limit to 3 After=network.target [Service] ExecStart=ip mptcp limits set subflows 3 Type=oneshot [Install] WantedBy=multi-user.target", "systemctl enable --now set_mptcp_limit", "nmcli connection modify <profile_name> connection.mptcp-flags signal,subflow,also-without-default-route", "sysctl net.mptcp.enabled net.mptcp.enabled = 1", "ip mptcp limit show add_addr_accepted 2 subflows 3", "ip mptcp endpoint show 192.0.2.1 id 1 subflow dev enp4s0 198.51.100.1 id 2 subflow dev enp1s0 192.0.2.3 id 3 subflow dev enp7s0 192.0.2.4 id 4 subflow dev enp3s0", "ip mptcp limits set add_addr_accepted 0 subflows 1", "mptcpize run nc -l -k -p 12345", "ip -4 route 192.0.2.0/24 dev enp1s0 proto kernel scope link src 192.0.2.2 metric 100 192.0.2.0/24 dev wlp1s0 proto kernel scope link src 192.0.2.3 metric 600", "ip mptcp monitor", "mptcpize run nc 192.0.2.1 12345", "[ CREATED] token=63c070d2 remid=0 locid=0 saddr4=192.0.2.2 daddr4=192.0.2.1 sport=36444 dport=12345", "[ ESTABLISHED] token=63c070d2 remid=0 locid=0 saddr4=192.0.2.2 daddr4=192.0.2.1 sport=36444 dport=12345", "ss -taunp | grep \":12345\" tcp ESTAB 0 0 192.0.2.2:36444 192.0.2.1:12345", "ip mptcp endpoint add dev wlp1s0 192.0.2.3 subflow", "[SF_ESTABLISHED] token=63c070d2 remid=0 locid=2 saddr4=192.0.2.3 daddr4=192.0.2.1 sport=53345 dport=12345 backup=0 ifindex=3", "ss -taunp | grep \":12345\" tcp ESTAB 0 0 192.0.2.2:36444 192.0.2.1:12345 tcp ESTAB 0 0 192.0.2.3%wlp1s0:53345 192.0.2.1:12345", "ip mptcp endpoint delete id 2", "[ SF_CLOSED] token=63c070d2 remid=0 locid=2 saddr4=192.0.2.3 daddr4=192.0.2.1 sport=53345 dport=12345 backup=0 ifindex=3", "[ CLOSED] token=63c070d2", "echo \"net.mptcp.enabled=0\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf", "sysctl -a | grep mptcp.enabled net.mptcp.enabled = 0", "echo \"net.mptcp.enabled=1\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf", "systemctl start mptcp.service", "ip mptcp endpoint", "systemctl stop mptcp.service", "mptcpize run iperf3 -s &", "mptcpize enable nginx", "mptcpize disable nginx", "systemctl restart nginx", "[connection] id= example_connection uuid= 82c6272d-1ff7-4d56-9c7c-0eb27c300029 type= ethernet autoconnect= true [ipv4] method= auto [ipv6] method= auto [ethernet] mac-address= 00:53:00:8f:fa:66", "nmcli --offline connection add type ethernet con-name Example-Connection ipv4.addresses 192.0.2.1/24 ipv4.dns 192.0.2.200 ipv4.method manual > /etc/NetworkManager/system-connections/example.nmconnection", "chmod 600 /etc/NetworkManager/system-connections/example.nmconnection chown root:root /etc/NetworkManager/system-connections/example.nmconnection", "systemctl start NetworkManager.service", "nmcli connection up Example-Connection", "systemctl status NetworkManager.service ● NetworkManager.service - Network Manager Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2022-08-03 13:08:32 CEST; 1min 40s ago", "nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME ethernet 
/etc/NetworkManager/system-connections/examaple.nmconnection Example-Connection ethernet /etc/sysconfig/network-scripts/ifcfg-enp1s0 enp1s0", "nmcli connection show Example-Connection connection.id: Example-Connection connection.uuid: 232290ce-5225-422a-9228-cb83b22056b4 connection.stable-id: -- connection.type: 802-3-ethernet connection.interface-name: -- connection.autoconnect: yes", "[connection] id=Example-Connection type=ethernet autoconnect=true interface-name=enp1s0 [ipv4] method=auto [ipv6] method=auto", "chown root:root /etc/NetworkManager/system-connections/example.nmconnection chmod 600 /etc/NetworkManager/system-connections/example.nmconnection", "nmcli connection reload", "nmcli -f NAME,UUID,FILENAME connection NAME UUID FILENAME Example-Connection 86da2486-068d-4d05-9ac7-957ec118afba /etc/NetworkManager/system-connections/example.nmconnection", "nmcli connection up example_connection", "nmcli connection show example_connection", "nmcli connection migrate Connection 'enp1s0' (43ed18ab-f0c4-4934-af3d-2b3333948e45) successfully migrated. Connection 'enp2s0' (883333e8-1b87-4947-8ceb-1f8812a80a9b) successfully migrated.", "nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME ethernet /etc/NetworkManager/system-connections/enp1s0.nmconnection enp1s0 ethernet /etc/NetworkManager/system-connections/enp2s0.nmconnection enp2s0", "systemctl edit <service_name>", "[Unit] After=network-online.target", "systemctl daemon-reload", "import libnmstate", "import libnmstate from libnmstate.schema import Interface net_state = libnmstate.show() for iface_state in net_state[Interface.KEY]: print(iface_state[Interface.NAME] + \": \" + iface_state[Interface.STATE])", "nmstatectl show enp1s0 > ~/network-config.yml", "nmstatectl apply ~/network-config.yml", "[service] keep_state_file_after_apply = false", "vars: network_state: interfaces: - name: enp7s0 type: ethernet state: up ipv4: enabled: true auto-dns: true auto-gateway: true auto-routes: true dhcp: true ipv6: enabled: true auto-dns: true auto-gateway: true auto-routes: true autoconf: true dhcp: true", "vars: network_connections: - name: enp7s0 interface_name: enp7s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "vars: network_state: interfaces: - name: enp7s0 type: ethernet state: down", "vars: network_connections: - name: enp7s0 interface_name: enp7s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: down", "xdpdump -i enp1s0 -w /root/capture.pcap", "dnf install bcc-tools", "ls -l /usr/share/bcc/tools/ -rwxr-xr-x. 1 root root 4198 Dec 14 17:53 dcsnoop -rwxr-xr-x. 1 root root 3931 Dec 14 17:53 dcstat -rwxr-xr-x. 1 root root 20040 Dec 14 17:53 deadlock_detector -rw-r--r--. 1 root root 7105 Dec 14 17:53 deadlock_detector.c drwxr-xr-x. 3 root root 8192 Mar 11 10:28 doc -rwxr-xr-x. 1 root root 7588 Dec 14 17:53 execsnoop -rwxr-xr-x. 1 root root 6373 Dec 14 17:53 ext4dist -rwxr-xr-x. 
1 root root 10401 Dec 14 17:53 ext4slower", "/usr/share/bcc/tools/tcpaccept PID COMM IP RADDR RPORT LADDR LPORT 843 sshd 4 192.0.2.17 50598 192.0.2.1 22 1107 ns-slapd 4 198.51.100.6 38772 192.0.2.1 389 1107 ns-slapd 4 203.0.113.85 38774 192.0.2.1 389", "/usr/share/bcc/tools/tcpconnect PID COMM IP SADDR DADDR DPORT 31346 curl 4 192.0.2.1 198.51.100.16 80 31348 telnet 4 192.0.2.1 203.0.113.231 23 31361 isc-worker00 4 192.0.2.1 192.0.2.254 53", "/usr/share/bcc/tools/tcpconnlat PID COMM IP SADDR DADDR DPORT LAT(ms) 32151 isc-worker00 4 192.0.2.1 192.0.2.254 53 0.60 32155 ssh 4 192.0.2.1 203.0.113.190 22 26.34 32319 curl 4 192.0.2.1 198.51.100.59 443 188.96", "/usr/share/bcc/tools/tcpdrop TIME PID IP SADDR:SPORT > DADDR:DPORT STATE (FLAGS) 13:28:39 32253 4 192.0.2.85:51616 > 192.0.2.1:22 CLOSE_WAIT (FIN|ACK) b'tcp_drop+0x1' b'tcp_data_queue+0x2b9' 13:28:39 1 4 192.0.2.85:51616 > 192.0.2.1:22 CLOSE (ACK) b'tcp_drop+0x1' b'tcp_rcv_state_process+0xe2'", "/usr/share/bcc/tools/tcplife -L 22 PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS 19392 sshd 192.0.2.1 22 192.0.2.17 43892 53 52 6681.95 19431 sshd 192.0.2.1 22 192.0.2.245 43902 81 249381 7585.09 19487 sshd 192.0.2.1 22 192.0.2.121 43970 6998 7 16740.35", "/usr/share/bcc/tools/tcpretrans TIME PID IP LADDR:LPORT T> RADDR:RPORT STATE 00:23:02 0 4 192.0.2.1:22 R> 198.51.100.0:26788 ESTABLISHED 00:23:02 0 4 192.0.2.1:22 R> 198.51.100.0:26788 ESTABLISHED 00:45:43 0 4 192.0.2.1:22 R> 198.51.100.0:17634 ESTABLISHED", "/usr/share/bcc/tools/tcpstates SKADDR C-PID C-COMM LADDR LPORT RADDR RPORT OLDSTATE -> NEWSTATE MS ffff9cd377b3af80 0 swapper/1 0.0.0.0 22 0.0.0.0 0 LISTEN -> SYN_RECV 0.000 ffff9cd377b3af80 0 swapper/1 192.0.2.1 22 192.0.2.45 53152 SYN_RECV -> ESTABLISHED 0.067 ffff9cd377b3af80 818 sssd_nss 192.0.2.1 22 192.0.2.45 53152 ESTABLISHED -> CLOSE_WAIT 65636.773 ffff9cd377b3af80 1432 sshd 192.0.2.1 22 192.0.2.45 53152 CLOSE_WAIT -> LAST_ACK 24.409 ffff9cd377b3af80 1267 pulseaudio 192.0.2.1 22 192.0.2.45 53152 LAST_ACK -> CLOSE 0.376", "/usr/share/bcc/tools/tcpsubnet 192.0.2.0/24,198.51.100.0/24,0.0.0.0/0 Tracing... Output every 1 secs. Hit Ctrl-C to end [02/21/20 10:04:50] 192.0.2.0/24 856 198.51.100.0/24 7467 [02/21/20 10:04:51] 192.0.2.0/24 1200 198.51.100.0/24 8763 0.0.0.0/0 673", "/usr/share/bcc/tools/tcptop 13:46:29 loadavg: 0.10 0.03 0.01 1/215 3875 PID COMM LADDR RADDR RX_KB TX_KB 3853 3853 192.0.2.1:22 192.0.2.165:41838 32 102626 1285 sshd 192.0.2.1:22 192.0.2.45:39240 0 0", "/usr/share/bcc/tools/tcptracer Tracing TCP established connections. Ctrl-C to end. T PID COMM IP SADDR DADDR SPORT DPORT A 1088 ns-slapd 4 192.0.2.153 192.0.2.1 0 65535 A 845 sshd 4 192.0.2.1 192.0.2.67 22 42302 X 4502 sshd 4 192.0.2.1 192.0.2.67 22 42302", "/usr/share/bcc/tools/solisten PID COMM PROTO BACKLOG PORT ADDR 3643 nc TCPv4 1 4242 0.0.0.0 3659 nc TCPv6 1 4242 2001:db8:1::1 4221 redis-server TCPv6 128 6379 :: 4221 redis-server TCPv4 128 6379 0.0.0.0 .", "/usr/share/bcc/tools/softirqs Tracing soft irq event time... Hit Ctrl-C to end. 
^C SOFTIRQ TOTAL_usecs tasklet 166 block 9152 net_rx 12829 rcu 53140 sched 182360 timer 306256", "/usr/share/bcc/tools/netqtop -n enp1s0 -i 2 Fri Jan 31 18:08:55 2023 TX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 0 0 0 0 0 0 Total 0 0 0 0 0 0 RX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 38.0 1 0 0 0 0 Total 38.0 1 0 0 0 0 ----------------------------------------------------------------------------- Fri Jan 31 18:08:57 2023 TX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 0 0 0 0 0 0 Total 0 0 0 0 0 0 RX QueueID avg_size [0, 64) [64, 512) [512, 2K) [2K, 16K) [16K, 64K) 0 38.0 1 0 0 0 0 Total 38.0 1 0 0 0 0 -----------------------------------------------------------------------------", "ip address show 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff", "ip link set enp1s0 promisc on", "ip link set enp1s0 promisc off", "ip link show enp1s0 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST, PROMISC ,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff", "ip address show 1: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000 link/ether 98:fa:9b:a4:34:09 brd ff:ff:ff:ff:ff:ff", "nmcli connection modify enp1s0 ethernet.accept-all-mac-addresses yes", "nmcli connection modify enp1s0 ethernet.accept-all-mac-addresses no", "nmcli connection up enp1s0", "nmcli connection show enp1s0 802-3-ethernet.accept-all-mac-addresses:1 (true)", "--- interfaces: - name: enp1s0 type: ethernet state: up accept -all-mac-address: true", "nmstatectl apply ~/enp1s0.yml", "nmstatectl show enp1s0 interfaces: - name: enp1s0 type: ethernet state: up accept-all-mac-addresses: true", "nmcli connection add type ethernet ifname enp1s0 con-name enp1s0 autoconnect no", "nmcli connection modify enp1s0 +tc.qdisc \"root prio handle 10:\"", "nmcli connection modify enp1s0 +tc.qdisc \"ingress handle ffff:\"", "nmcli connection modify enp1s0 +tc.tfilter \"parent ffff: matchall action mirred egress mirror dev enp7s0 \" nmcli connection modify enp1s0 +tc.tfilter \"parent 10: matchall action mirred egress mirror dev enp7s0 \"", "nmcli connection up enp1s0", "dnf install tcpdump", "tcpdump -i enp7s0", "interfaces: - name: enp1s0 type: ethernet lldp: enabled: true - name: enp2s0 type: ethernet lldp: enabled: true - name: enp3s0 type: ethernet lldp: enabled: true", "nmstatectl apply ~/enable-lldp.yml", "nmstate-autoconf -d enp1s0 , enp2s0 , enp3s0 --- interfaces: - name: prod-net type: vlan state: up vlan: base-iface: bond100 id: 100 - name: mgmt-net type: vlan state: up vlan: base-iface: enp3s0 id: 200 - name: bond100 type: bond state: up link-aggregation: mode: balance-rr port: - enp1s0 - enp2s0", "nmstate-autoconf enp1s0 , enp2s0 , enp3s0", "nmstatectl show <interface_name>", "nmcli connection show Example-connection 802-3-ethernet.speed: 0 802-3-ethernet.duplex: -- 802-3-ethernet.auto-negotiate: no", "nmcli connection modify Example-connection 802-3-ethernet.auto-negotiate yes 802-3-ethernet.speed 10000 802-3-ethernet.duplex full", "nmcli connection up Example-connection", "ethtool enp1s0 Settings for enp1s0: Speed: 10000 Mb/s Duplex: Full Auto-negotiation: on Link detected: yes", "dnf install dpdk", "tipc", "systemctl start systemd-modules-load", "lsmod | grep tipc tipc 311296 0", "tipc node set identity host_name", "tipc bearer enable media 
eth device enp1s0", "tipc link list broadcast-link: up 5254006b74be:enp1s0-525400df55d1:enp1s0: up", "tipc nametable show Type Lower Upper Scope Port Node 0 1795222054 1795222054 cluster 0 5254006b74be 0 3741353223 3741353223 cluster 0 525400df55d1 1 1 1 node 2399405586 5254006b74be 2 3741353223 3741353223 node 0 5254006b74be", "dnf install NetworkManager-cloud-setup", "systemctl edit nm-cloud-setup.service", "[Service] Environment=NM_CLOUD_SETUP_EC2=yes", "systemctl daemon-reload", "systemctl enable --now nm-cloud-setup.service", "systemctl enable --now nm-cloud-setup.timer" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_and_managing_networking/index
Chapter 110. KafkaTopicStatus schema reference
Chapter 110. KafkaTopicStatus schema reference Used in: KafkaTopic
Property | Property type | Description
conditions | Condition array | List of status conditions.
observedGeneration | integer | The generation of the CRD that was last reconciled by the operator.
topicName | string | Topic name.
topicId | string | The topic's id. For a KafkaTopic with the ready condition, this will change only if the topic gets deleted and recreated with the same name.
replicasChange | ReplicasChangeStatus | Replication factor change status.
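For orientation, a reconciled KafkaTopic exposes these status properties together on the custom resource. The following YAML is an illustrative sketch only; the resource name, namespace, cluster label, and topic id are assumed values, not taken from this reference.

```yaml
# Illustrative sketch of a KafkaTopic whose status carries the properties
# described above. All names and values are hypothetical examples.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                     # assumed topic resource name
  namespace: kafka                   # assumed namespace
  labels:
    strimzi.io/cluster: my-cluster   # assumed Kafka cluster name
spec:
  partitions: 3
  replicas: 3
status:
  observedGeneration: 2              # generation last reconciled by the operator
  topicName: my-topic
  topicId: "b1XNL4i5QjCrFYeV7dDDig"  # example id; changes only if the topic is deleted and recreated
  conditions:
    - type: Ready
      status: "True"
```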
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaTopicStatus-reference
Chapter 8. Frequently asked questions
Chapter 8. Frequently asked questions Is it possible to deploy applications from OpenShift Dev Spaces to an OpenShift cluster? The user must log in to the OpenShift cluster from their running workspace using oc login. For best performance, what is the recommended storage to use for Persistent Volumes used with OpenShift Dev Spaces? Use block storage. Is it possible to deploy more than one OpenShift Dev Spaces instance on the same cluster? Only one OpenShift Dev Spaces instance can be deployed per cluster. Is it possible to install OpenShift Dev Spaces offline (that is, disconnected from the internet)? See Installing Red Hat OpenShift Dev Spaces in restricted environments on OpenShift. Is it possible to use non-default certificates with OpenShift Dev Spaces? You can use self-signed or public certificates. See Importing untrusted TLS certificates. Is it possible to run multiple workspaces simultaneously? See Enabling users to run multiple workspaces simultaneously.
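As a rough illustration of the block-storage recommendation above, the following PersistentVolumeClaim sketch requests a volume from a block-backed storage class. The claim name and the storage class name are assumptions for the example; substitute the block-backed class available on your cluster.

```yaml
# Minimal sketch only: request a PVC from a block-backed storage class.
# The claim name and storageClassName below are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-workspace-claim
spec:
  storageClassName: gp3-csi      # assumed block-backed class (for example, a CSI EBS class)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```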
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/release_notes_and_known_issues/frequently-asked-questions_devspaces
Chapter 1. Operators overview
Chapter 1. Operators overview Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions. Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications. With Operators, you can create applications to monitor the running services in the cluster. Operators are designed specifically for your applications. Operators implement and automate the common Day 1 operations such as installation and configuration as well as Day 2 operations such as autoscaling up and down and creating backups. All these activities are in a piece of software running inside your cluster. 1.1. For developers As a developer, you can perform the following Operator tasks: Install Operator SDK CLI. Create Go-based Operators, Ansible-based Operators, and Helm-based Operators. Use Operator SDK to build, test, and deploy an Operator. Install and subscribe an Operator to your namespace. Create an application from an installed Operator through the web console. 1.2. For administrators As a cluster administrator, you can perform the following Operator tasks: Manage custom catalogs. Allow non-cluster administrators to install Operators. Install an Operator from OperatorHub. View Operator status. Manage Operator conditions. Upgrade installed Operators. Delete installed Operators. Configure proxy support. Use Operator Lifecycle Manager on restricted networks. To know all about the cluster Operators that Red Hat provides, see Cluster Operators reference. 1.3. Next steps To understand more about Operators, see What are Operators?
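The developer tasks above include installing and subscribing an Operator to a namespace. As a minimal sketch of what that looks like with Operator Lifecycle Manager, the following Subscription manifest is illustrative only; the Operator name, namespace, and channel are assumed values rather than a specific product example.

```yaml
# Illustrative Subscription for an OLM-managed add-on Operator.
# The operator name, namespace, and channel below are hypothetical.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable                          # assumed update channel
  name: example-operator                   # package name in the catalog
  source: redhat-operators                 # default Red Hat catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```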
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/operators/operators-overview
11.6.3. Related Books
11.6.3. Related Books Sendmail by Bryan Costales with Eric Allman et al; O'Reilly & Associates - A good Sendmail reference written with the assistance of the original creator of Delivermail and Sendmail. Removing the Spam: Email Processing and Filtering by Geoff Mulligan; Addison-Wesley Publishing Company - A volume that looks at various methods used by email administrators using established tools, such as Sendmail and Procmail, to manage spam problems. Internet Email Protocols: A Developer's Guide by Kevin Johnson; Addison-Wesley Publishing Company - Provides a very thorough review of major email protocols and the security they provide. Managing IMAP by Dianna Mullet and Kevin Mullet; O'Reilly & Associates - Details the steps required to configure an IMAP server. Security Guide; Red Hat, Inc. - The Server Security chapter explains ways to secure Sendmail and other services.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-email-related-books
Chapter 6. Red Hat Ansible Automation Platform best practices
Chapter 6. Red Hat Ansible Automation Platform best practices This section covers Ansible Automation Platform product usage that is specific to running Ansible Automation Platform as a service. 6.1. Configure automation to use instance groups Red Hat Ansible Automation Platform Service on AWS requires that customers implement their own automation execution plane. Job templates must use a customer-configured instance or container group to run. If this assignment is omitted, job runs can appear non-functional and eventually time out because automation execution fails. Each job template must be assigned to a customer-configured instance group to function. 6.2. Syncing content with private automation hub Private automation hub allows you to attempt to sync all content from automation hub or Ansible Galaxy. However, this synchronization fails due to the storage and resource demands of such a large task. When syncing content from external sources, limit the synchronization to the collections your organization plans to use, focusing on recent versions or versions known to be in use, to reduce the synchronization scope.
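One common way to scope a collection sync in private automation hub is to attach a requirements file to the remote instead of pulling everything. The following sketch uses hypothetical collection names and version ranges; replace them with the collections your organization actually uses.

```yaml
# Hypothetical requirements file for a private automation hub remote.
# Limit the sync to specific collections and version ranges.
collections:
  - name: ansible.posix
    version: ">=1.5.0,<2.0.0"   # assumed version range
  - name: community.general
    version: "8.6.0"            # assumed pinned version
```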
null
https://docs.redhat.com/en/documentation/ansible_on_clouds/2.x/html/red_hat_ansible_automation_platform_service_on_aws/saas-aap-best-practices
7.2. Guest Clusters
7.2. Guest Clusters This refers to Red Hat Enterprise Linux Cluster/HA running inside of virtualized guests on a variety of virtualization platforms. In this use-case Red Hat Enterprise Linux Clustering/HA is primarily used to make the applications running inside of the guests highly available. This use-case is similar to how Red Hat Enterprise Linux Clustering/HA has always been used in traditional bare-metal hosts. The difference is that Clustering runs inside of guests instead. The following is a list of virtualization platforms and the level of support currently available for running guest clusters using Red Hat Enterprise Linux Cluster/HA. In the below list, Red Hat Enterprise Linux 6 Guests encompass both the High Availability (core clustering) and Resilient Storage Add-Ons (GFS2, clvmd and cmirror). Red Hat Enterprise Linux 5.3+ Xen hosts fully support running guest clusters where the guest operating systems are also Red Hat Enterprise Linux 5.3 or above: Xen guest clusters can use either fence_xvm or fence_scsi for guest fencing. Usage of fence_xvm/fence_xvmd requires a host cluster to be running to support fence_xvmd and fence_xvm must be used as the guest fencing agent on all clustered guests. Shared storage can be provided by either iSCSI or Xen shared block devices backed either by host block storage or by file backed storage (raw images). Red Hat Enterprise Linux 5.5+ KVM hosts do not support running guest clusters. Red Hat Enterprise Linux 6.1+ KVM hosts support running guest clusters where the guest operating systems are either Red Hat Enterprise Linux 6.1+ or Red Hat Enterprise Linux 5.6+. Red Hat Enterprise Linux 4 guests are not supported. Mixing bare metal cluster nodes with cluster nodes that are virtualized is permitted. Red Hat Enterprise Linux 5.6+ guest clusters can use either fence_xvm or fence_scsi for guest fencing. Red Hat Enterprise Linux 6.1+ guest clusters can use either fence_xvm (in the fence-virt package) or fence_scsi for guest fencing. The Red Hat Enterprise Linux 6.1+ KVM Hosts must use fence_virtd if the guest cluster is using fence_virt or fence_xvm as the fence agent. If the guest cluster is using fence_scsi then fence_virtd on the hosts is not required. fence_virtd can operate in three modes: Standalone mode where the host to guest mapping is hard coded and live migration of guests is not allowed Using the Openais Checkpoint service to track live-migrations of clustered guests. This requires a host cluster to be running. Using the Qpid Management Framework (QMF) provided by the libvirt-qpid package. This utilizes QMF to track guest migrations without requiring a full host cluster to be present. Shared storage can be provided by either iSCSI or KVM shared block devices backed by either host block storage or by file backed storage (raw images). Red Hat Enterprise Virtualization Management (RHEV-M) versions 2.2+ and 3.0 currently support Red Hat Enterprise Linux 5.6+ and Red Hat Enterprise Linux 6.1+ clustered guests. Guest clusters must be homogeneous (either all Red Hat Enterprise Linux 5.6+ guests or all Red Hat Enterprise Linux 6.1+ guests). Mixing bare metal cluster nodes with cluster nodes that are virtualized is permitted. Fencing is provided by fence_scsi in RHEV-M 2.2+ and by both fence_scsi and fence_rhevm in RHEV-M 3.0. Fencing is supported using fence_scsi as described below: Use of fence_scsi with iSCSI storage is limited to iSCSI servers that support SCSI 3 Persistent Reservations with the PREEMPT AND ABORT command. 
Not all iSCSI servers support this functionality. Check with your storage vendor to ensure that your server is compliant with SCSI 3 Persistent Reservation support. Note that the iSCSI server shipped with Red Hat Enterprise Linux does not presently support SCSI 3 Persistent Reservations, so it is not suitable for use with fence_scsi. VMware vSphere 4.1, VMware vCenter 4.1, VMware ESX and ESXi 4.1 support running guest clusters where the guest operating systems are Red Hat Enterprise Linux 5.7+ or Red Hat Enterprise Linux 6.2+. Version 5.0 of VMware vSphere, vCenter, ESX and ESXi are also supported; however, due to an incomplete WDSL schema provided in the initial release of VMware vSphere 5.0, the fence_vmware_soap utility does not work on the default install. Refer to the Red Hat Knowledgebase https://access.redhat.com/knowledge/ for updated procedures to fix this issue. Guest clusters must be homogeneous (either all Red Hat Enterprise Linux 5.7+ guests or all Red Hat Enterprise Linux 6.1+ guests). Mixing bare metal cluster nodes with cluster nodes that are virtualized is permitted. The fence_vmware_soap agent requires the 3rd party VMware perl APIs. This software package must be downloaded from VMware's web site and installed onto the Red Hat Enterprise Linux clustered guests. Alternatively, fence_scsi can be used to provide fencing as described below. Shared storage can be provided by either iSCSI or VMware raw shared block devices. Use of VMware ESX guest clusters is supported using either fence_vmware_soap or fence_scsi. Use of Hyper-V guest clusters is unsupported at this time. 7.2.1. Using fence_scsi and iSCSI Shared Storage In all of the above virtualization environments, fence_scsi and iSCSI storage can be used in place of native shared storage and the native fence devices. fence_scsi can be used to provide I/O fencing for shared storage provided over iSCSI if the iSCSI target properly supports SCSI 3 persistent reservations and the PREEMPT AND ABORT command. Check with your storage vendor to determine if your iSCSI solution supports the above functionality. The iSCSI server software shipped with Red Hat Enterprise Linux does not support SCSI 3 persistent reservations; therefore, it cannot be used with fence_scsi. It is suitable for use as a shared storage solution in conjunction with other fence devices like fence_vmware or fence_rhevm, however. If using fence_scsi on all guests, a host cluster is not required (in the Red Hat Enterprise Linux 5 Xen/KVM and Red Hat Enterprise Linux 6 KVM Host use cases) If fence_scsi is used as the fence agent, all shared storage must be over iSCSI. Mixing of iSCSI and native shared storage is not permitted. 7.2.2. General Recommendations As stated above it is recommended to upgrade both the hosts and guests to the latest Red Hat Enterprise Linux packages before using virtualization capabilities, as there have been many enhancements and bug fixes. Mixing virtualization platforms (hypervisors) underneath guest clusters is not supported. All underlying hosts must use the same virtualization technology. It is not supported to run all guests in a guest cluster on a single physical host as this provides no high availability in the event of a single host failure. This configuration can be used for prototype or development purposes, however. 
Best practices include the following: It is not necessary to have a single host per guest, but this configuration does provide the highest level of availability since a host failure only affects a single node in the cluster. If you have a 2-to-1 mapping (two guests in a single cluster per physical host) this means a single host failure results in two guest failures. Therefore it is advisable to get as close to a 1-to-1 mapping as possible. Mixing multiple independent guest clusters on the same set of physical hosts is not supported at this time when using the fence_xvm/fence_xvmd or fence_virt/fence_virtd fence agents. Mixing multiple independent guest clusters on the same set of physical hosts will work if using fence_scsi + iSCSI storage or if using fence_vmware + VMware (ESX/ESXi and vCenter). Running non-clustered guests on the same set of physical hosts as a guest cluster is supported, but since hosts will physically fence each other if a host cluster is configured, these other guests will also be terminated during a host fencing operation. Host hardware should be provisioned such that memory or virtual CPU overcommit is avoided. Overcommitting memory or virtual CPU will result in performance degradation. If the performance degradation becomes critical the cluster heartbeat could be affected, which may result in cluster failure.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/s1-virt-guestcluster