9.12. Setting SASL Mechanisms | 9.12. Setting SASL Mechanisms By default, Directory Server enables all mechanisms that the simple authentication and security layer (SASL) library supports. These are listed in the supportedSASLMechanisms attribute of the root DSE. To enable only specific SASL mechanisms, set the nsslapd-allowed-sasl-mechanisms attribute in the cn=config entry. For example, to enable only the GSSAPI and DIGEST-MD5 mechanisms, run: Note Even if EXTERNAL is not listed in the nsslapd-allowed-sasl-mechanisms parameter, this mechanism is always enabled. For further details, see the corresponding section in the Red Hat Directory Server Configuration, Command, and File Reference . | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-allowed-sasl-mechanisms=\"GSSAPI, DIGEST-MD5\" Successfully replaced \"nsslapd-allowed-sasl-mechanisms\""
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/setting-sasl-mech |
Part I. Installing Local Storage Operator | Part I. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and select the same. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator by using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 2. 
Creating standalone Multicloud Object Gateway on IBM Z You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. To identify storage devices on each node, see Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option for Backing storage type . Select the Storage Class that you used while installing LocalVolume. Click . Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. 
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_z/installing-local-storage-operator-ibm-z_ibmz |
Chapter 3. Developing Jakarta XML Web Services | Chapter 3. Developing Jakarta XML Web Services Jakarta XML Web Services defines the mapping between WSDL and Java, as well as the classes to be used for accessing web services and publishing them. JBossWS implements Jakarta XML Web Services 2.3 , which users can reference for any vendor-agnostic web service usage need. 3.1. Using Jakarta XML Web Services Tools The following Jakarta XML Web Services command-line tools are included with the JBoss EAP distribution. These tools can be used in a variety of ways for server and client-side development. Table 3.1. Jakarta XML Web Services Command-Line Tools Command Description wsprovide Generates Jakarta XML Web Services portable artifacts, and provides the abstract contract. Used for bottom-up development. wsconsume Consumes the abstract contract (WSDL and Schema files), and produces artifacts for both a server and client. Used for top-down and client development. See Jakarta XML Web Services Tools for more details on the usage of these tools. 3.1.1. Server-side Development Strategies When developing a web service endpoint on the server side, you have the option of starting from Java code, known as bottom-up development , or from the WSDL that defines your service, known as top-down development . If this is a new service, meaning that there is no existing contract, then the bottom-up approach is the fastest route; you only need to add a few annotations to your classes to get a service up and running. However, if you are developing a service with a contract already defined, it is far simpler to use the top-down approach, since the tool can generate the annotated code for you. Bottom-up use cases: Exposing an already existing Jakarta Enterprise Beans 3 bean as a web service. Providing a new service, and you want the contract to be generated for you. Top-down use cases: Replacing the implementation of an existing web service, and you can not break compatibility with older clients. Exposing a service that conforms to a contract specified by a third party, for example, a vendor that calls you back using an already defined protocol. Creating a service that adheres to the XML Schema and WSDL you developed by hand up front. Bottom-Up Strategy Using wsprovide The bottom-up strategy involves developing the Java code for your service, and then annotating it using Jakarta XML Web Services annotations. These annotations can be used to customize the contract that is generated for your service. For example, you can change the operation name to map to anything you like. However, all of the annotations have sensible defaults, so only the @WebService annotation is required. This can be as simple as creating a single class: package echo; @javax.jws.WebService public class Echo { public String echo(String input) { return input; } } A deployment can be built using this class, and it is the only Java code needed to deploy on JBossWS. The WSDL, and all other Java artifacts called wrapper classes will be generated for you at deploy time. The primary purpose of the wsprovide tool is to generate portable Jakarta XML Web Services artifacts. Additionally, it can be used to provide the WSDL file for your service. 
This can be obtained by invoking wsprovide using the -w option: Inspecting the WSDL reveals a service named EchoService : <wsdl:service name="EchoService"> <wsdl:port name="EchoPort" binding="tns:EchoServiceSoapBinding"> <soap:address location="http://localhost:9090/EchoPort"/> </wsdl:port> </wsdl:service> As expected, this service defines an operation, echo : <wsdl:portType name="Echo"> <wsdl:operation name="echo"> <wsdl:input name="echo" message="tns:echo"> </wsdl:input> <wsdl:output name="echoResponse" message="tns:echoResponse"> </wsdl:output> </wsdl:operation> </wsdl:portType> When deploying you do not need to run this tool. You only need it for generating portable artifacts or the abstract contract for your service. A POJO endpoint for the deployment can be created in a simple web.xml file: <web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd" version="2.4"> <servlet> <servlet-name>Echo</servlet-name> <servlet-class>echo.Echo</servlet-class> </servlet> <servlet-mapping> <servlet-name>Echo</servlet-name> <url-pattern>/Echo</url-pattern> </servlet-mapping> </web-app> The web.xml and the single Java class can now be used to create a WAR: The WAR can then be deployed to JBoss EAP. This will internally invoke wsprovide , which will generate the WSDL. If the deployment was successful, and you are using the default settings, it should be available in the management console. Note For a portable Jakarta XML Web Services deployment, the wrapper classes generated earlier could be added to the deployment. Top-Down Strategy Using wsconsume The top-down development strategy begins with the abstract contract for the service, which includes the WSDL file and zero or more schema files. The wsconsume tool is then used to consume this contract, and produce annotated Java classes, and optionally sources, that define it. Note wsconsume might have problems with symlinks on Unix systems. Using the WSDL file from the bottom-up example, a new Java implementation that adheres to this service can be generated. The -k option is passed to wsconsume to preserve the Java source files that are generated, instead of providing just Java classes: The following table shows the purpose of each generated file: Table 3.2. Generated Files File Purpose Echo.java Service Endpoint Interface EchoResponse.java Wrapper bean for response message EchoService.java Used only by Jakarta XML Web Services clients Echo_Type.java Wrapper bean for request message ObjectFactory.java Jakarta XML Binding XML Registry package-info.java Holder for Jakarta XML Binding package annotations Examining the service endpoint interface reveals annotations that are more explicit than in the class written by hand in the bottom-up example, however, these evaluate to the same contract. @WebService(targetNamespace = "http://echo/", name = "Echo") @XmlSeeAlso({ObjectFactory.class}) public interface Echo { @WebMethod @RequestWrapper(localName = "echo", targetNamespace = "http://echo/", className = "echo.Echo_Type") @ResponseWrapper(localName = "echoResponse", targetNamespace = "http://echo/", className = "echo.EchoResponse") @WebResult(name = "return", targetNamespace = "") public java.lang.String echo( @WebParam(name = "arg0", targetNamespace = "") java.lang.String arg0 ); } The only missing piece, other than for packaging, is the implementation class, which can now be written using the above interface. 
package echo; @javax.jws.WebService(endpointInterface="echo.Echo") public class EchoImpl implements Echo { public String echo(String arg0) { return arg0; } } 3.1.2. Client-side Development Strategies Before going in to detail on the client side, it is important to understand the decoupling concept that is central to web services. Web services are not the best fit for internal RPC, even though they can be used in this way. There are much better technologies for this, such as CORBA and RMI. Web services were designed specifically for interoperable coarse-grained correspondence. There is no expectation or guarantee that any party participating in a web service interaction will be at any particular location, running on any particular operating system, or written in any particular programming language. So because of this, it is important to clearly separate client and server implementations. The only thing they should have in common is the abstract contract definition. If, for whatever reason, your software does not adhere to this principal, then you should not be using web services. For the above reasons, the recommended methodology for developing a client is to follow the top-down approach, even if the client is running on the same server. Top-Down Strategy Using wsconsume This section repeats the process of the server-side top-down section, however, it uses a deployed WSDL. This is to retrieve the correct value for soap:address , shown below, which is computed at deploy time. This value can be edited manually in the WSDL if necessary, but you must take care to provide the correct path. Example: soap:address in a Deployed WSDL <wsdl:service name="EchoService"> <wsdl:port name="EchoPort" binding="tns:EchoServiceSoapBinding"> <soap:address location="http://localhost.localdomain:8080/echo/Echo"/> </wsdl:port> </wsdl:service> Use wsconsume to generate Java classes for the deployed WSDL. Notice how the EchoService.java class stores the location from which the WSDL was obtained. @WebServiceClient(name = "EchoService", wsdlLocation = "http://localhost:8080/echo/Echo?wsdl", targetNamespace = "http://echo/") public class EchoService extends Service { public final static URL WSDL_LOCATION; public final static QName SERVICE = new QName("http://echo/", "EchoService"); public final static QName EchoPort = new QName("http://echo/", "EchoPort"); ... @WebEndpoint(name = "EchoPort") public Echo getEchoPort() { return super.getPort(EchoPort, Echo.class); } @WebEndpoint(name = "EchoPort") public Echo getEchoPort(WebServiceFeature... features) { return super.getPort(EchoPort, Echo.class, features); } } As you can see, this generated class extends the main client entry point in Jakarta XML Web Services, javax.xml.ws.Service . While you can use Service directly, this is far simpler since it provides the configuration information for you. Note the getEchoPort() method, which returns an instance of our service endpoint interface. Any web service operation can then be called by just invoking a method on the returned interface. Important Do not refer to a remote WSDL URL in a production application. This causes network I/O every time you instantiate the Service object. Instead, use the tool on a saved local copy, or use the URL version of the constructor to provide a new WSDL location. 
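For example, a client can point the generated service class at a saved local copy of the WSDL instead of the remote URL. The following is a minimal sketch that assumes the EchoService class generated by wsconsume exposes the usual (URL, QName) constructor; the local file path is hypothetical.

import java.io.File;
import java.net.URL;
import javax.xml.namespace.QName;

import echo.Echo;
import echo.EchoService;

public class LocalWsdlClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical local copy saved from http://localhost:8080/echo/Echo?wsdl
        URL localWsdl = new File("src/main/resources/Echo.wsdl").toURI().toURL();

        // Service name as published in the WSDL (matches the generated EchoService.SERVICE constant)
        QName serviceName = new QName("http://echo/", "EchoService");

        // Assumes the generated two-argument constructor; no network I/O is needed to fetch the WSDL
        EchoService service = new EchoService(localWsdl, serviceName);
        Echo echo = service.getEchoPort();

        System.out.println("Server said: " + echo.echo("hello"));
    }
}
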
Write and compile the client: import echo.*; public class EchoClient { public static void main(String args[]) { if (args.length != 1) { System.err.println("usage: EchoClient <message>"); System.exit(1); } EchoService service = new EchoService(); Echo echo = service.getEchoPort(); System.out.println("Server said: " + echo.echo(args[0])); } } You can change the endpoint address of your operation at runtime by setting the ENDPOINT_ADDRESS_PROPERTY as shown below: EchoService service = new EchoService(); Echo echo = service.getEchoPort(); /* Set NEW Endpoint Location */ String endpointURL = "http://NEW_ENDPOINT_URL"; BindingProvider bp = (BindingProvider)echo; bp.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointURL); System.out.println("Server said: " + echo.echo(args[0])); 3.2. Jakarta XML Web Services Web Service Endpoints 3.2.1. About Jakarta XML Web Services Web Service Endpoints A Jakarta XML Web Services web service endpoint is the server component of a web service. Clients and other web services communicate with it over the HTTP protocol using an XML language called Simple Object Access Protocol (SOAP). The endpoint itself is deployed into the JBoss EAP container. WSDL descriptors can be created in one of the following two ways: Writing WSDL descriptors manually. Using Jakarta XML Web Services annotations that create the WSDL descriptors automatically. This is the most common method for creating WSDL descriptors. An endpoint implementation bean is annotated with Jakarta XML Web Services annotations and deployed to the server. The server automatically generates and publishes the abstract contract in WSDL format for client consumption. All marshalling and unmarshalling is delegated to the Jakarta XML Binding service. The endpoint itself might be a Plain Old Java Object (POJO) or a Jakarta EE web application. You can also expose endpoints using a Jakarta Enterprise Beans 3 stateless session bean. It is packaged into a web archive (WAR) file. The specification for packaging the endpoint is defined in the Jakarta Web Services Metadata Specification 2.1 . Example: POJO Endpoint @WebService @SOAPBinding(style = SOAPBinding.Style.RPC) public class JSEBean { @WebMethod public String echo(String input) { ... } } Example: Web Services Endpoint <web-app ...> <servlet> <servlet-name>TestService</servlet-name> <servlet-class>org.jboss.quickstarts.ws.jaxws.samples.jsr181pojo.JSEBean01</servlet-class> </servlet> <servlet-mapping> <servlet-name>TestService</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app> The following Jakarta Enterprise Beans 3 stateless session bean exposes the same method on the remote interface as well as an endpoint operation. @Stateless @Remote(EJB3RemoteInterface.class) @WebService @SOAPBinding(style = SOAPBinding.Style.RPC) public class EJB3Bean implements EJB3RemoteInterface { @WebMethod public String echo(String input) { ... } } Service Endpoint Interface Jakarta XML Web Services services typically implement a Java service endpoint interface (SEI), which might be mapped from a WSDL port type, either directly or using annotations. This SEI provides a high-level abstraction that hides the details of the mapping between Java objects and their XML representations. Endpoint Provider Interface In some cases, Jakarta XML Web Services services need the ability to operate at the XML message level. The endpoint Provider interface provides this functionality to the web services that implement it. 
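As an illustration of this message-level alternative, the following is a minimal sketch of a Provider-based endpoint using the standard javax.xml.ws.Provider API; the class name, namespace, and echoed payload are made up for the example, and deployment descriptors are omitted.

import java.io.StringReader;

import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Provider;
import javax.xml.ws.Service;
import javax.xml.ws.ServiceMode;
import javax.xml.ws.WebServiceProvider;

// Works with the raw XML payload instead of a typed service endpoint interface
@WebServiceProvider
@ServiceMode(Service.Mode.PAYLOAD)
public class RawXmlEndpoint implements Provider<Source> {

    @Override
    public Source invoke(Source request) {
        // A real implementation would inspect the request payload here;
        // this sketch simply returns a fixed XML fragment.
        String response = "<reply xmlns='http://example.org/raw'>ok</reply>";
        return new StreamSource(new StringReader(response));
    }
}
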
Consuming and Accessing the Endpoint After you deploy your web service, you can consume the WSDL to create the component stubs which will be the basis for your application. Your application can then access the endpoint to do its work. 3.2.2. Developing and Deploying Jakarta XML Web Services Web Service Endpoint A Jakarta XML Web Services service endpoint is a server-side component that responds to requests from Jakarta XML Web Services clients and publishes the WSDL definition for itself. See the following quickstarts that ship with JBoss EAP for working examples of how to develop Jakarta XML Web Services endpoint applications. jaxws-addressing jaxws-ejb jaxws-pojo jaxws-retail wsat-simple wsba-coordinator-completion-simple wsba-participant-completion-simple Development Requirements A web service must fulfill the requirements of the Jakarta XML Web Services API and Jakarta Web Services Metadata Specification 2.1 specification . It contains a javax.jws.WebService annotation. All method parameters and return types are compatible with Jakarta XML Binding 2.3 specification . The following is an example of a web service implementation that meets these requirements. Example: Web Service Implementation package org.jboss.quickstarts.ws.jaxws.samples.retail.profile; import javax.ejb.Stateless; import javax.jws.WebService; import javax.jws.WebMethod; import javax.jws.soap.SOAPBinding; @Stateless @WebService( name = "ProfileMgmt", targetNamespace = "http://org.jboss.ws/samples/retail/profile", serviceName = "ProfileMgmtService") @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) public class ProfileMgmtBean { @WebMethod public DiscountResponse getCustomerDiscount(DiscountRequest request) { DiscountResponse dResponse = new DiscountResponse(); dResponse.setCustomer(request.getCustomer()); dResponse.setDiscount(10.00); return dResponse; } } The following is an example of the DiscountRequest class that is used by the ProfileMgmtBean bean in the example. The annotations are included for verbosity. Typically, the Jakarta XML Binding defaults are reasonable and do not need to be specified. Example: DiscountRequest Class package org.jboss.test.ws.jaxws.samples.retail.profile; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlType; import org.jboss.test.ws.jaxws.samples.retail.Customer; @XmlAccessorType(XmlAccessType.FIELD) @XmlType( name = "discountRequest", namespace="http://org.jboss.ws/samples/retail/profile", propOrder = { "customer" } ) public class DiscountRequest { protected Customer customer; public DiscountRequest() { } public DiscountRequest(Customer customer) { this.customer = customer; } public Customer getCustomer() { return customer; } public void setCustomer(Customer value) { this.customer = value; } } Packaging Your Deployment The implementation class is wrapped in a JAR deployment. Any metadata required for deployment is taken from the annotations on the implementation class and the service endpoint interface. You can deploy the JAR using the management CLI or the management console, and the HTTP endpoint is created automatically. The following listing shows an example of the structure for a JAR deployment of a Jakarta Enterprise Beans web service. 3.3. Jakarta XML Web Services Web Service Clients 3.3.1. Consume and Access a Jakarta XML Web Services Web Service After creating a web service endpoint, either manually or using Jakarta XML Web Services annotations, you can access its WSDL. 
This WSDL can be used to create the basic client application that will communicate with the web service. The process of generating Java code from the published WSDL is called consuming the web service. This happens in the following phases: Create the client artifacts . Construct a service stub . Create the Client Artifacts Before you can create client artifacts, you need to create your WSDL contract. The following WSDL contract is used for the examples presented in the rest of this section. The examples below rely on having this WSDL contract in the ProfileMgmtService.wsdl file. <definitions name='ProfileMgmtService' targetNamespace='http://org.jboss.ws/samples/retail/profile' xmlns='http://schemas.xmlsoap.org/wsdl/' xmlns:ns1='http://org.jboss.ws/samples/retail' xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/' xmlns:tns='http://org.jboss.ws/samples/retail/profile' xmlns:xsd='http://www.w3.org/2001/XMLSchema'> <types> <xs:schema targetNamespace='http://org.jboss.ws/samples/retail' version='1.0' xmlns:xs='http://www.w3.org/2001/XMLSchema'> <xs:complexType name='customer'> <xs:sequence> <xs:element minOccurs='0' name='creditCardDetails' type='xs:string'/> <xs:element minOccurs='0' name='firstName' type='xs:string'/> <xs:element minOccurs='0' name='lastName' type='xs:string'/> </xs:sequence> </xs:complexType> </xs:schema> <xs:schema targetNamespace='http://org.jboss.ws/samples/retail/profile' version='1.0' xmlns:ns1='http://org.jboss.ws/samples/retail' xmlns:tns='http://org.jboss.ws/samples/retail/profile' xmlns:xs='http://www.w3.org/2001/XMLSchema'> <xs:import namespace='http://org.jboss.ws/samples/retail'/> <xs:element name='getCustomerDiscount' nillable='true' type='tns:discountRequest'/> <xs:element name='getCustomerDiscountResponse' nillable='true' type='tns:discountResponse'/> <xs:complexType name='discountRequest'> <xs:sequence> <xs:element minOccurs='0' name='customer' type='ns1:customer'/> </xs:sequence> </xs:complexType> <xs:complexType name='discountResponse'> <xs:sequence> <xs:element minOccurs='0' name='customer' type='ns1:customer'/> <xs:element name='discount' type='xs:double'/> </xs:sequence> </xs:complexType> </xs:schema> </types> <message name='ProfileMgmt_getCustomerDiscount'> <part element='tns:getCustomerDiscount' name='getCustomerDiscount'/> </message> <message name='ProfileMgmt_getCustomerDiscountResponse'> <part element='tns:getCustomerDiscountResponse' name='getCustomerDiscountResponse'/> </message> <portType name='ProfileMgmt'> <operation name='getCustomerDiscount' parameterOrder='getCustomerDiscount'> <input message='tns:ProfileMgmt_getCustomerDiscount'/> <output message='tns:ProfileMgmt_getCustomerDiscountResponse'/> </operation> </portType> <binding name='ProfileMgmtBinding' type='tns:ProfileMgmt'> <soap:binding style='document' transport='http://schemas.xmlsoap.org/soap/http'/> <operation name='getCustomerDiscount'> <soap:operation soapAction=''/> <input> <soap:body use='literal'/> </input> <output> <soap:body use='literal'/> </output> </operation> </binding> <service name='ProfileMgmtService'> <port binding='tns:ProfileMgmtBinding' name='ProfileMgmtPort'> <!-- service address will be rewritten to actual one when WSDL is requested from running server --> <soap:address location='http://SERVER:PORT/jaxws-retail/ProfileMgmtBean'/> </port> </service> </definitions> Note If you use Jakarta XML Web Services annotations to create your web service endpoint, the WSDL contract is generated automatically, and you only need its URL. 
You can find this URL by navigating to Runtime , selecting the applicable server, selecting Webservices , then choosing the endpoint. The wsconsume.sh or wsconsume.bat tool is used to consume the abstract contract (WSDL) and produce annotated Java classes and optional sources that define it. The tool is located in the EAP_HOME /bin/ directory. The following command generates the source .java files listed in the output, from the ProfileMgmtService.wsdl file. The sources use the directory structure of the package, which is specified with the -p switch. Both .java source files and compiled .class files are generated into the output/ directory within the directory where you run the command. Table 3.3. Descriptions of Artifacts Created by wsconsume.sh File Description ProfileMgmt.java Service endpoint interface. Customer.java Custom data type. Discount.java Custom data types. ObjectFactory.java Jakarta XML Binding XML registry. package-info.java Jakarta XML Binding package annotations. ProfileMgmtService.java Service factory. The wsconsume command generates all custom data types (Jakarta XML Binding annotated classes), the service endpoint interface, and a service factory class. These artifacts are used to build web service client implementations. Construct a Service Stub Web service clients use service stubs to abstract the details of a remote web service invocation. To a client application, a web service invocation looks like an invocation of any other business component. In this case the service endpoint interface acts as the business interface, and a service factory class is not used to construct it as a service stub. The following example first creates a service factory using the WSDL location and the service name. Next, it uses the service endpoint interface created by wsconsume to build the service stub. Finally, the stub can be used just as any other business interface would be. You can find the WSDL URL for your endpoint in the JBoss EAP management console. You can find this URL by navigating to Runtime , selecting the applicable server, selecting Webservices , then choosing the endpoint. import javax.xml.ws.Service; [...] Service service = Service.create( new URL("http://example.org/service?wsdl"), new QName("MyService") ); ProfileMgmt profileMgmt = service.getPort(ProfileMgmt.class); // Use the service stub in your application 3.3.2. Develop a Jakarta XML Web Services Client Application The client communicates with, and requests work from, the Jakarta XML Web Services endpoint, which is deployed in the Java Enterprise Edition 7 container. For detailed information about the classes, methods, and other implementation details mentioned below, see the relevant sections of the Javadocs bundle included with JBoss EAP. Overview A Service is an abstraction which represents a WSDL service. A WSDL service is a collection of related ports, each of which includes a port type bound to a particular protocol and a particular endpoint address. Usually, the Service is generated when the rest of the component stubs are generated from an existing WSDL contract. The WSDL contract is available via the WSDL URL of the deployed endpoint, or can be created from the endpoint source using the wsprovide tool in the EAP_HOME /bin/ directory. This type of usage is referred to as the static use case. In this case, you create instances of the Service class which is created as one of the component stubs. You can also create the service manually, using the Service.create method. 
This is referred to as the dynamic use case. Usage Static Use Case The static use case for a Jakarta XML Web Services client assumes that you already have a WSDL contract. This might be generated by an external tool or generated by using the correct Jakarta XML Web Services annotations when you create your Jakarta XML Web Services endpoint. To generate your component stubs, you use the wsconsume tool included in EAP_HOME /bin . The tool takes the WSDL URL or file as a parameter, and generates multiple files, structured in a directory tree. The source and class files representing your Service are named _Service.java and _Service.class , respectively. The generated implementation class has two public constructors, one with no arguments and one with two arguments. The two arguments represent the WSDL location (a java.net.URL ) and the service name (a javax.xml.namespace.QName ) respectively. The no-argument constructor is the one used most often. In this case the WSDL location and service name are those found in the WSDL. These are set implicitly from the @WebServiceClient annotation that decorates the generated class. @WebServiceClient(name="StockQuoteService", targetNamespace="http://example.com/stocks", wsdlLocation="http://example.com/stocks.wsdl") public class StockQuoteService extends javax.xml.ws.Service { public StockQuoteService() { super(new URL("http://example.com/stocks.wsdl"), new QName("http://example.com/stocks", "StockQuoteService")); } public StockQuoteService(String wsdlLocation, QName serviceName) { super(wsdlLocation, serviceName); } ... } For details about how to obtain a port from the service and how to invoke an operation on the port, see Dynamic Proxy . For details about how to work with the XML payload directly or with the XML representation of the entire SOAP message, see Dispatch . Dynamic Use Case In the dynamic case, no stubs are generated automatically. Instead, a web service client uses the Service.create method to create Service instances. The following code fragment illustrates this process. URL wsdlLocation = new URL("http://example.org/my.wsdl"); QName serviceName = new QName("http://example.org/sample", "MyService"); Service service = Service.create(wsdlLocation, serviceName); Handler Resolver Jakarta XML Web Services provides a flexible plug-in framework for message processing modules, known as handlers. These handlers extend the capabilities of a Jakarta XML Web Services runtime system. A Service instance provides access to a HandlerResolver via a pair of getHandlerResolver and setHandlerResolver methods that can configure a set of handlers on a per-service, per-port or per-protocol binding basis. When a Service instance creates a proxy or a Dispatch instance, the handler resolver currently registered with the service creates the required handler chain. Subsequent changes to the handler resolver configured for a Service instance do not affect the handlers on previously created proxies or Dispatch instances. Executor Service instances can be configured with a java.util.concurrent.Executor . The Executor invokes any asynchronous callbacks requested by the application. The setExecutor and getExecutor methods of Service can modify and retrieve the Executor configured for a service. Dynamic Proxy A dynamic proxy is an instance of a client proxy using one of the getPort methods provided in the Service . The portName specifies the name of the WSDL port the service uses. 
The serviceEndpointInterface specifies the service endpoint interface supported by the created dynamic proxy instance. public <T> T getPort(QName portName, Class<T> serviceEndpointInterface) public <T> T getPort(Class<T> serviceEndpointInterface) The Service Endpoint Interface is usually generated using the wsconsume tool, which parses the WSDL and creates Java classes from it. A typed method, which returns a port, is also provided. These methods also return dynamic proxies that implement the SEI. See the following example. @WebServiceClient(name = "TestEndpointService", targetNamespace = "http://org.jboss.ws/wsref", wsdlLocation = "http://localhost.localdomain:8080/jaxws-samples-webserviceref?wsdl") public class TestEndpointService extends Service { ... public TestEndpointService(URL wsdlLocation, QName serviceName) { super(wsdlLocation, serviceName); } @WebEndpoint(name = "TestEndpointPort") public TestEndpoint getTestEndpointPort() { return (TestEndpoint)super.getPort(TESTENDPOINTPORT, TestEndpoint.class); } } @WebServiceRef The @WebServiceRef annotation declares a reference to a web service. It follows the resource pattern shown by the javax.annotation.Resource annotation defined in JSR 250 . The Jakarta EE equivalent for these annotations is in the Jakarta Annotations 1.3 specification . You can use it to define a reference whose type is a generated Service class. In this case, the type and value element each refer to the generated Service class type. Moreover, if the reference type can be inferred by the field or method declaration the annotation is applied to, the type and value elements might, but are not required to, have the default value of Object.class . If the type cannot be inferred, then at least the type element must be present with a non-default value. You can use it to define a reference whose type is an SEI. In this case, the type element might (but is not required to) be present with its default value if the type of the reference can be inferred from the annotated field or method declaration. However, the value element must always be present and refer to a generated service class type, which is a subtype of javax.xml.ws.Service . The wsdlLocation element, if present, overrides the WSDL location information specified in the @WebService annotation of the referenced generated service class. public class EJB3Client implements EJB3Remote { @WebServiceRef public TestEndpointService service4; @WebServiceRef public TestEndpoint port3; } Dispatch XML web services use XML messages for communication between the endpoint, which is deployed in the Jakarta EE container, and any clients. The XML messages use an XML language called Simple Object Access Protocol (SOAP). The Jakarta XML Web Services API provides the mechanisms for the endpoint and clients to each be able to send and receive SOAP messages. Marshalling is the process of converting a Java Object into a SOAP XML message. Unmarshalling is the process of converting the SOAP XML message back into a Java Object. In some cases, you need access to the raw SOAP messages themselves, rather than the result of the conversion. The Dispatch class provides this functionality. Dispatch operates in one of two usage modes, which are identified by one of the following constants. javax.xml.ws.Service.Mode.MESSAGE - This mode directs client applications to work directly with protocol-specific message structures. When used with a SOAP protocol binding, a client application works directly with a SOAP message. 
javax.xml.ws.Service.Mode.PAYLOAD - This mode causes the client to work with the payload itself. For instance, if it is used with a SOAP protocol binding, a client application would work with the contents of the SOAP body rather than the entire SOAP message. Dispatch is a low-level API which requires clients to structure messages or payloads as XML, with strict adherence to the standards of the individual protocol and a detailed knowledge of message or payload structure. Dispatch is a generic class which supports input and output of messages or message payloads of any type. Service service = Service.create(wsdlURL, serviceName); Dispatch dispatch = service.createDispatch(portName, StreamSource.class, Mode.PAYLOAD); String payload = "<ns1:ping xmlns:ns1='http://oneway.samples.jaxws.ws.test.jboss.org/'/>"; dispatch.invokeOneWay(new StreamSource(new StringReader(payload))); payload = "<ns1:feedback xmlns:ns1='http://oneway.samples.jaxws.ws.test.jboss.org/'/>"; Source retObj = (Source)dispatch.invoke(new StreamSource(new StringReader(payload))); Asynchronous Invocations The BindingProvider interface represents a component that provides a protocol binding which clients can use. It is implemented by proxies and is extended by the Dispatch interface. BindingProvider instances might provide asynchronous operation capabilities. Asynchronous operation invocations are decoupled from the BindingProvider instance at invocation time. The response context is not updated when the operation completes. Instead, a separate response context is made available using the Response interface. public void testInvokeAsync() throws Exception { URL wsdlURL = new URL("http://" + getServerHost() + ":8080/jaxws-samples-asynchronous?wsdl"); QName serviceName = new QName(targetNS, "TestEndpointService"); Service service = Service.create(wsdlURL, serviceName); TestEndpoint port = service.getPort(TestEndpoint.class); Response response = port.echoAsync("Async"); // access future String retStr = (String) response.get(); assertEquals("Async", retStr); } @Oneway Invocations The @Oneway annotation indicates that the given web method takes an input message but returns no output message. Usually, a @Oneway method returns the thread of control to the calling application before the business method is executed. @WebService (name="PingEndpoint") @SOAPBinding(style = SOAPBinding.Style.RPC) public class PingEndpointImpl { private static String feedback; @WebMethod @Oneway public void ping() { log.info("ping"); feedback = "ok"; } @WebMethod public String feedback() { log.info("feedback"); return feedback; } } Timeout Configuration Two different properties control the timeout behavior of the HTTP connection and the timeout of a client which is waiting to receive a message. The first is javax.xml.ws.client.connectionTimeout and the second is javax.xml.ws.client.receiveTimeout . Each is expressed in milliseconds, and the correct syntax is shown below. public void testConfigureTimeout() throws Exception { //Set timeout until a connection is established ((BindingProvider)port).getRequestContext().put("javax.xml.ws.client.connectionTimeout", "6000"); //Set timeout until the response is received ((BindingProvider) port).getRequestContext().put("javax.xml.ws.client.receiveTimeout", "1000"); port.echo("testTimeout"); } 3.4. Configuring the Web Services Subsystem JBossWS components handle the processing of web service endpoints and are provided to JBoss EAP through the webservices subsystem. 
The subsystem supports the configuration of published endpoint addresses and endpoint handler chains. A default webservices subsystem is provided in the server's domain and standalone configuration files. It contains several predefined endpoint and client configurations. <subsystem xmlns="urn:jboss:domain:webservices:2.0"> <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> <client-config name="Standard-Client-Config"/> </subsystem> 3.4.1. Endpoint Configurations JBossWS enables extra setup configuration data to be predefined and associated with an endpoint implementation. Predefined endpoint configurations can be used for Jakarta XML Web Services client and Jakarta XML Web Services endpoint setup. Endpoint configurations can include Jakarta XML Web Services handlers and key/value properties declarations. This feature provides a convenient way to add handlers to web service endpoints and to set key/value properties that control JBossWS and Apache CXF internals. The webservices subsystem allows you to define named sets of endpoint configuration data. Each endpoint configuration must have a unique name within the subsystem. The org.jboss.ws.api.annotation.EndpointConfig annotation can then be used to assign an endpoint configuration to a Jakarta XML Web Services implementation in a deployed application. See Assigning a Configuration for more information on assigning endpoint configurations. There are two predefined endpoint configurations in the default JBoss EAP configuration: Standard-Endpoint-Config is the endpoint configuration used for any endpoint that does not have an explicitly-assigned endpoint configuration. Recording-Endpoint-Config is an example of a custom endpoint configuration that includes a recording handler. Add an Endpoint Configuration You can add a new endpoint configuration using the management CLI. Configure an Endpoint Configuration You can add key/value property declarations for the endpoint configuration using the management CLI. You can also configure handler chains and handlers for these endpoint configurations. Remove an Endpoint Configuration You can remove an endpoint configuration using the management CLI. 3.4.2. Handler Chains Each endpoint configuration can be associated with PRE or POST handler chains. Each handler chain may include Jakarta XML Web Services-compliant handlers to perform additional processing on messages. For outbound messages, PRE handler chain handlers are executed before any handler attached to the endpoints using standard Jakarta XML Web Services means, such as the @HandlerChain annotation. POST handler chain handlers are executed after usual endpoint handlers. For inbound messages, the opposite applies. Server Outbound Messages Server Inbound Messages Add a Handler Chain You can add a POST handler chain to an endpoint configuration using the following management CLI command. You can add a PRE handler chain to an endpoint configuration using the following management CLI command. Configure a Handler Chain Use the protocol-bindings attribute to set which protocols trigger the handler chain to start. 
See the handlers section for information on configuring handlers for a handler chain. Remove a Handler Chain You can remove a handler chain using the management CLI. 3.4.3. Handlers A Jakarta XML Web Services handler is added to a handler chain and specifies the fully-qualified name of the handler class. When the endpoint is deployed, an instance of that class is created for each referencing deployment. Either the deployment class loader or the class loader for the org.jboss.as.webservices.server.integration module must be able to load the handler class. See the Handler Javadocs for a listing of the available handlers. Add a Handler You can add a handler to a handler chain using the following management CLI command. You must provide the class name of the handler. Configure a Handler You can update the class for a handler using the management CLI. Remove a Handler You can remove a handler using the management CLI. 3.4.4. Published Endpoint Addresses The rewriting of the <soap:address> element of endpoints published in WSDL contracts is supported. This feature is useful for controlling the server address that is advertised to clients for each endpoint. The following table lists the attributes that can be configured for this feature. Name Description modify-wsdl-address This boolean enables and disables the address rewrite functionality. When modify-wsdl-address is set to true and the content of <soap:address> is a valid URL, JBossWS rewrites the URL using the values of wsdl-host and wsdl-port or wsdl-secure-port . When modify-wsdl-address is set to false and the content of <soap:address> is a valid URL, JBossWS does not rewrite the URL. The <soap:address> URL is used. When the content of <soap:address> is not a valid URL, JBossWS rewrites it no matter what the setting of modify-wsdl-address . If modify-wsdl-address is set to true and wsdl-host is not defined or explicitly set to jbossws.undefined.host , the content of <soap:address> URL is used. JBossWS uses the requester's host when rewriting the <soap:address> . When modify-wsdl-address is not defined JBossWS uses a default value of true . wsdl-host The host name or IP address to be used for rewriting <soap:address> . If wsdl-host is set to jbossws.undefined.host , JBossWS uses the requester's host when rewriting the <soap:address> . When wsdl-host is not defined JBossWS uses a default value of jbossws.undefined.host . wsdl-path-rewrite-rule This string defines a SED substitution command, for example s/regexp/replacement/g , that JBossWS executes against the path component of each <soap:address> URL published from the server. When wsdl-path-rewrite-rule is not defined, JBossWS retains the original path component of each <soap:address> URL. When modify-wsdl-address is set to false this element is ignored. wsdl-port Set this property to explicitly define the HTTP port that will be used for rewriting the SOAP address. Otherwise the HTTP port will be identified by querying the list of installed HTTP connectors. wsdl-secure-port Set this property to explicitly define the HTTPS port that will be used for rewriting the SOAP address. Otherwise the HTTPS port will be identified by querying the list of installed HTTPS connectors. wsdl-uri-scheme This property explicitly sets the URI scheme to use for rewriting <soap:address> . Valid values are http and https . This configuration overrides the scheme computed by processing the endpoint even if a transport guarantee is specified. 
The provided values for wsdl-port and wsdl-secure-port , or their default values, are used depending on the specified scheme. You can use the management CLI to update these attributes. For example: 3.4.5. Viewing Runtime Information Each web service endpoint is exposed through the deployment that provides the endpoint implementation. Each endpoint can be queried as a deployment resource. Each web service endpoint specifies a web context and a WSDL URL. You can access this runtime information using the management CLI or the management console. The following management CLI command shows the details of the TestService endpoint from the jaxws-samples-handlerchain.war deployment. Note Using the include-runtime=true flag on the read-resource operation returns runtime statistics in the result. However, the collection of statistics for web service endpoints is disabled by default. You can enable statistics for web service endpoints using the following management CLI command. You can also view runtime information for web services endpoints from the Runtime tab of the management console by selecting the applicable server, selecting Webservices , then choosing the endpoint. 3.5. Assigning Client and Endpoint Configurations Client and endpoint configurations can be assigned in the following ways: Explicit assignment through annotations, for endpoints, or API programmatic usage, for clients. Automatic assignment of configurations from default descriptors. Automatic assignment of configurations from the container. 3.5.1. Explicit Configuration Assignment The explicit configuration assignment is meant for developers that know in advance their endpoint or client has to be set up according to a specified configuration. The configuration is coming from either a descriptor that is included in the application deployment, or is included in the webservices subsystem. 3.5.1.1. Configuration Deployment Descriptor Jakarta EE archives that can contain Jakarta XML Web Services client and endpoint implementations may also contain predefined client and endpoint configuration declarations. All endpoint or client configuration definitions for a given archive must be provided in a single deployment descriptor file, which must be an implementation of the schema that can be found at EAP_HOME/docs/schema/jbossws-jaxws-config_4_0.xsd . Many endpoint or client configurations can be defined in the deployment descriptor file. Each configuration must have a name that is unique within the server on which the application is deployed. The configuration name cannot be referred to by endpoint or client implementations outside the application. 
Example: Descriptor with Two Endpoint Configurations <?xml version="1.0" encoding="UTF-8"?> <jaxws-config xmlns="urn:jboss:jbossws-jaxws-config:4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:javaee="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd"> <endpoint-config> <config-name>org.jboss.test.ws.jaxws.jbws3282.Endpoint4Impl</config-name> <pre-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Log Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.jbws3282.LogHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </pre-handler-chains> <post-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Routing Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.jbws3282.RoutingHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </post-handler-chains> </endpoint-config> <endpoint-config> <config-name>EP6-config</config-name> <post-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Authorization Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.jbws3282.AuthorizationHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </post-handler-chains> </endpoint-config> </jaxws-config> Similarly, a client configuration can be specified in descriptors, which is still implementing the schema mentioned above: <?xml version="1.0" encoding="UTF-8"?> <jaxws-config xmlns="urn:jboss:jbossws-jaxws-config:4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:javaee="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd"> <client-config> <config-name>Custom Client Config</config-name> <pre-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Routing Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.clientConfig.RoutingHandler</javaee:handler-class> </javaee:handler> <javaee:handler> <javaee:handler-name>Custom Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.clientConfig.CustomHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </pre-handler-chains> </client-config> <client-config> <config-name>Another Client Config</config-name> <post-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Routing Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.clientConfig.RoutingHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </post-handler-chains> </client-config> </jaxws-config> 3.5.1.2. Application Server Configuration JBoss EAP allows declaring JBossWS client and server predefined configurations in the webservices subsystem. As a result it is possible to declare server-wide handlers to be added to the chain of each endpoint or client assigned to a given configuration. Standard Configuration Clients running in the same JBoss EAP instance, as well as endpoints, are assigned standard configurations by default. The defaults are used unless different a configuration is set. This enables administrators to tune the default handler chains for client and endpoint configurations. The names of the default client and endpoint configurations used in the webservices subsystem are Standard-Client-Config and Standard-Endpoint-Config . 
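A handler class referenced from these configurations, whether in a deployment descriptor or in the webservices subsystem, is an ordinary Jakarta XML Web Services handler. The following is a minimal sketch of such a class using the standard SOAPHandler API; the class name and log output are illustrative only.

import java.util.Collections;
import java.util.Set;

import javax.xml.namespace.QName;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

// Minimal logging handler; referenced from a handler chain by its fully-qualified class name
public class SimpleLogHandler implements SOAPHandler<SOAPMessageContext> {

    @Override
    public boolean handleMessage(SOAPMessageContext context) {
        Boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        System.out.println("SOAP message intercepted, outbound=" + outbound);
        return true; // continue processing the rest of the handler chain
    }

    @Override
    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
        // no resources to release in this sketch
    }

    @Override
    public Set<QName> getHeaders() {
        return Collections.emptySet(); // this handler does not process any SOAP headers
    }
}
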
Handlers Classloading When setting a server-wide handler, the handler class needs to be available through each ws deployment classloader. As a result proper module dependencies may need to be specified in the deployments that are going to use a given predefined configuration. One way to ensure the proper module dependencies are specified in the deployment is to add a dependency to the module containing the handler class in one of the modules which are already automatically set as dependencies to any deployment, for instance org.jboss.ws.spi . Example Configuration Example: Default Subsystem Configuration <subsystem xmlns="urn:jboss:domain:webservices:2.0"> <!-- ... --> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> <client-config name="Standard-Client-Config"/> </subsystem> A configuration file for a deployment specific ws-security endpoint setup: <jaxws-config xmlns="urn:jboss:jbossws-jaxws-config:4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:javaee="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd"> <endpoint-config> <config-name>Custom WS-Security Endpoint</config-name> <property> <property-name>ws-security.signature.properties</property-name> <property-value>bob.properties</property-value> </property> <property> <property-name>ws-security.encryption.properties</property-name> <property-value>bob.properties</property-value> </property> <property> <property-name>ws-security.signature.username</property-name> <property-value>bob</property-value> </property> <property> <property-name>ws-security.encryption.username</property-name> <property-value>alice</property-value> </property> <property> <property-name>ws-security.callback-handler</property-name> <property-value>org.jboss.test.ws.jaxws.samples.wsse.policy.basic.KeystorePasswordCallback</property-value> </property> </endpoint-config> </jaxws-config> JBoss EAP default configuration modified to default to SOAP messages schema-validation on: <subsystem xmlns="urn:jboss:domain:webservices:2.0"> <!-- ... --> <endpoint-config name="Standard-Endpoint-Config"> <property name="schema-validation-enabled" value="true"/> </endpoint-config> <!-- ... --> <client-config name="Standard-Client-Config"> <property name="schema-validation-enabled" value="true"/> </client-config> </subsystem> 3.5.1.3. EndpointConfig Annotation Once a configuration is available to a given application, the org.jboss.ws.api.annotation.EndpointConfig annotation is used to assign an endpoint configuration to a Jakarta XML Web Services endpoint implementation. When you assign a configuration that is defined in the webservices subsystem, you only need to specify the configuration name. When you assign a configuration that is defined in the application, you need to specify the relative path to the deployment descriptor and the configuration name. Example: EndpointConfig Annotation @EndpointConfig(configFile = "WEB-INF/my-endpoint-config.xml", configName = "Custom WS-Security Endpoint") public class ServiceImpl implements ServiceIface { public String sayHello() { return "Secure Hello World!"; } } 3.5.1.4. 
Jakarta XML Web Services Feature You can also use org.jboss.ws.api.configuration.ClientConfigFeature to set a configuration that is a Jakarta XML Web Services Feature extension provided by JBossWS. import org.jboss.ws.api.configuration.ClientConfigFeature; Service service = Service.create(wsdlURL, serviceName); Endpoint port = service.getPort(Endpoint.class, new ClientConfigFeature("META-INF/my-client-config.xml", "Custom Client Config")); port.echo("Kermit"); You can also set properties from the specified configuration by passing in true to the ClientConfigFeature constructor. Endpoint port = service.getPort(Endpoint.class, new ClientConfigFeature("META-INF/my-client-config.xml", "Custom Client Config"), true); JBossWS parses the specified configuration file, after having resolved it as a resource using the current thread context class loader. The EAP_HOME /docs/schema/jbossws-jaxws-config_4_0.xsd schema defines the descriptor contents and is included in the jbossws-spi artifact. If you pass in null for the configuration file, the configuration will be read from the current container configurations, if available. Endpoint port = service.getPort(Endpoint.class, new ClientConfigFeature(null, "Container Custom Client Config")); 3.5.1.5. Explicit Setup Through API Alternatively, the JBossWS API comes with facility classes that can be used for assigning configurations when building a client. Handlers Jakarta XML Web Services handlers are read from client configurations as follows. import org.jboss.ws.api.configuration.ClientConfigUtil; import org.jboss.ws.api.configuration.ClientConfigurer; Service service = Service.create(wsdlURL, serviceName); Endpoint port = service.getPort(Endpoint.class); BindingProvider bp = (BindingProvider)port; ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigHandlers(bp, "META-INF/my-client-config.xml", "Custom Client Config"); port.echo("Kermit"); You can also use the ClientConfigUtil utility class to set up the handlers. ClientConfigUtil.setConfigHandlers(bp, "META-INF/my-client-config.xml", "Custom Client Config"); The default ClientConfigurer implementation parses the specified configuration file, after having resolved it as a resource using the current thread context class loader. The EAP_HOME/docs/schema/jbossws-jaxws-config_4_0.xsd schema defines the descriptor contents and is included in the jbossws-spi artifact. If you pass in null for the configuration file, the configuration will be read from the current container configurations, if available. ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigHandlers(bp, null, "Container Custom Client Config"); Properties Similarly, properties are read from client configurations as follows. import org.jboss.ws.api.configuration.ClientConfigUtil; import org.jboss.ws.api.configuration.ClientConfigurer; Service service = Service.create(wsdlURL, serviceName); Endpoint port = service.getPort(Endpoint.class); ClientConfigUtil.setConfigProperties(port, "META-INF/my-client-config.xml", "Custom Client Config"); port.echo("Kermit"); You can also use the ClientConfigUtil utility class to set up the properties. ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigProperties(port, "META-INF/my-client-config.xml", "Custom Client Config"); The default ClientConfigurer implementation parses the specified configuration file, after having resolved it as a resource using the current thread context class loader. 
The EAP_HOME/docs/schema/jbossws-jaxws-config_4_0.xsd schema defines the descriptor contents and is included in the jbossws-spi artifact. If you pass in null for the configuration file, the configuration will be read from the current container configurations, if available. ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigProperties(port, null, "Container Custom Client Config"); 3.5.2. Automatic Configuration from Default Descriptors In some cases, the application developer might not be aware of the configuration that will need to be used for its client and endpoint implementation. In other cases, explicit usage of the JBossWS API might not be accepted because it is a compile-time dependency. To cope with such scenarios, JBossWS allows including default client, jaxws-client-config.xml , and endpoint, jaxws-endpoint-config.xml , descriptors within the application in its root directory. These are parsed for getting configurations whenever a configuration file name is not specified. <config-file>WEB-INF/jaxws-endpoint-config.xml</config-file> If the configuration name is not specified, JBossWS automatically looks for a configuration named as: The fully qualified name (FQN) of the endpoint implementation class, in case of Jakarta XML Web Services endpoints. The FQN of the service endpoint interface, in case of Jakarta XML Web Services clients. No automatic configuration name is selected for Dispatch clients. For example, an endpoint implementation class org.foo.bar.EndpointImpl , for which no predefined configuration is explicitly set, will cause JBossWS to look for a org.foo.bar.EndpointImpl named configuration within a jaxws-endpoint-config.xml descriptor in the root of the application deployment. Similarly, on the client side, a client proxy implementing org.foo.bar.Endpoint interface will have the setup read from a org.foo.bar.Endpoint named configuration in the jaxws-client-config.xml descriptor. 3.5.3. Automatic Configuration Assignment from Container JBossWS falls back to getting predefined configurations from the container whenever no explicit configuration has been provided and the default descriptors are either not available or do not contain relevant configurations. This behavior gives additional control on the Jakarta XML Web Services client and endpoint setup to administrators since the container can be managed independently from the deployed applications. JBossWS accesses the webservices subsystem for an explicitly named configuration. The default configuration names used are: The fully qualified name of the endpoint implementation class, in case of Jakarta XML Web Services endpoints. The fully qualified name of the service endpoint interface, in case of Jakarta XML Web Services clients. Dispatch clients are not automatically configured. If no configuration is found using names computed as above, the Standard-Client-Config and Standard-Endpoint-Config configurations are used for clients and endpoints respectively. 3.6. Setting Module Dependencies for Web Service Applications JBoss EAP web services are delivered as a set of modules and libraries, including the org.jboss.as.webservices.* and org.jboss.ws.* modules. You should not need to change these modules. With JBoss EAP you cannot directly use JBossWS implementation classes unless you explicitly set a dependency to the corresponding module. You declare the module dependencies that you want to be added to the deployment. 
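As a quick illustration, before the individual mechanisms are covered in the subsections that follow, a deployment can declare such a dependency through a Dependencies entry in its MANIFEST.MF. The module name and the services option shown here are only a sketch and are discussed in more detail below:

Manifest-Version: 1.0
Dependencies: org.jboss.ws.cxf.jbossws-cxf-client services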
The JBossWS APIs are available by default whenever the webservices subsystem is available. You can use them without creating an explicit dependencies declaration for those modules. 3.6.1. Using MANIFEST.MF To configure deployment dependencies, add them to the MANIFEST.MF file. For example: This MANIFEST.MF file declares dependencies on the org.jboss.ws.cxf.jbossws-cxf-client and foo.bar modules. For more information on declaring dependencies in a MANIFEST.MF file, including the export and services options, see Add a Dependency Configuration to MANIFEST.MF in the JBoss EAP Development Guide . When using annotations on the endpoints and handlers, for example, Apache CXF endpoints and handlers, add the proper module dependency in your manifest file. If you skip this step, your annotations are not picked up and are completely, silently ignored. 3.6.1.1. Using Jakarta XML Binding To successfully and directly use Jakarta XML Binding contexts in your client or endpoint running in-container, set up a Jakarta XML Binding implementation. For example, set the following dependency: 3.6.1.2. Using Apache CXF To use Apache CXF APIs and implementation classes, add a dependency to the org.apache.cxf (API) module or org.apache.cxf.impl (implementation) module. For example: The dependency is purely Apache CXF without any JBossWS customizations or additional extensions. For this reason, a client-side aggregation module is available with all the web service dependencies that you might need. 3.6.1.3. Client-side Web Services Aggregation Module When you want to use all of the web services features and functionality, you can set a dependency to the convenient client module. For example: The services option is required to enable all JBossWS features by loading JBossWS specific classes. The services option is almost always needed when declaring dependencies on the org.jboss.ws.cxf.jbossws-cxf-client and org.apache.cxf modules. The option affects the loading of classes through the Service API, which is what is used to wire most of the JBossWS components and the Apache CXF Bus extensions. 3.6.1.4. Annotation Scanning The application server uses an annotation index for detecting Jakarta XML Web Services endpoints in user deployments. When declaring web service endpoints for a class that belongs to a different module, for instance referring to it in the web.xml descriptor, use an annotations type dependency. Without that dependency your endpoints are ignored as they do not appear as annotated classes to the webservices subsystem. 3.6.2. Using jboss-deployment-structure.xml In some circumstances, the convenient approach of setting module dependencies in the MANIFEST.MF file might not work. For example, setting dependencies in the MANIFEST.MF file does not work when importing and exporting specific resources from a given module dependency. In these scenarios, add a jboss-deployment-structure.xml descriptor file to your deployment and set module dependencies in it. For more information on using jboss-deployment-structure.xml , see Add a Dependency Configuration to the jboss-deployment-structure.xml in the JBoss EAP Development Guide . 3.7. Configuring HTTP Timeout The HTTP session timeout defines the period after which an HTTP session is considered to have become invalid because there was no activity within the specified period. 
The HTTP session timeout can be configured, in order of precedence, in the following places: Application You can define the HTTP session timeout in the application's web.xml configuration file by adding the following configuration to the file. This value is in minutes. <session-config> <session-timeout>30</session-timeout> </session-config> If you modified the WAR file, redeploy the application. If you exploded the WAR file, no further action is required because JBoss EAP automatically undeploys and redeploys the application. Server You can use the management CLI to set the default HTTP session timeout in the undertow subsystem. This value is in minutes. Default The default HTTP session timeout is 30 minutes. 3.8. Securing Jakarta XML Web Services WS-Security provides the means to secure your services beyond transport-level protocols such as HTTPS. Through a number of standards, such as headers defined in the WS-Security standard, you can: Pass authentication tokens between services. Encrypt messages or parts of messages. Sign messages. Timestamp messages. WS-Security makes use of public and private key cryptography. With public key cryptography, a user has a pair of public and private keys. These are generated using a large prime number and a key function. The keys are related mathematically, but cannot be derived from one another. With these keys we can encrypt messages. For example, if Scott wants to send a message to Adam, he can encrypt the message using Adam's public key. Adam can then decrypt the message using his private key. Only Adam can decrypt the message, as he is the only one with the private key. Messages can also be signed. This allows you to ensure the authenticity of the message. If Adam wants to send a message to Scott, and Scott wants to be sure that it is from Adam, Adam can sign the message using his own private key. Scott can then verify that the message is from Adam by using Adam's public key. 3.8.1. Applying Web Services Security (WS-Security) Web services support many real-world scenarios requiring WS-Security functionality. These scenarios include signature and encryption support through X509 certificates, authentication and authorization through username tokens, and all WS-Security configurations covered by the WS-SecurityPolicy specification. As with other WS-* features, the core of WS-Security functionality is provided through the Apache CXF engine. In addition, the JBossWS integration adds a few configuration enhancements to simplify the setup of WS-Security enabled endpoints. 3.8.1.1. Apache CXF WS-Security Implementation Apache CXF features a WS-Security module that supports multiple configurations and is easily extendible. The system is based on interceptors that delegate to Apache WSS4J for the low-level security operations. Interceptors can be configured in different ways, either through Spring configuration files or directly using the Apache CXF client API. Recent versions of Apache CXF introduced support for WS-SecurityPolicy, which aims at moving most of the security configuration into the service contract, through policies, so that clients can be configured almost completely automatically from the contract. This way users do not need to manually deal with configuring and installing the required interceptors; the Apache CXF WS-Policy engine internally takes care of that instead. 3.8.1.2. WS-Security Policy Support WS-SecurityPolicy describes the actions that are required to securely communicate with a service advertised in a given WSDL contract.
The WSDL bindings and operations reference WS-Policy fragments with the security requirements to interact with the service. The WS-SecurityPolicy specification allows for specifying things such as asymmetric and symmetric keys, using transports (HTTPS) for encryption, which parts or headers to encrypt or sign, whether to sign then encrypt or encrypt then sign, whether to include timestamps, and whether to use derived keys. However, some mandatory configuration elements are not covered by WS-SecurityPolicy because they are not meant to be public or part of the published endpoint contract. These include things such as keystore locations, and usernames and passwords. Apache CXF allows configuring these elements either through Spring XML descriptors or using the client API or annotations. Table 3.4. Supported Configuration Properties
Configuration property - Description
ws-security.username - The username used for UsernameToken policy assertions.
ws-security.password - The password used for UsernameToken policy assertions. If not specified, the callback handler will be called.
ws-security.callback-handler - The WSS4J security CallbackHandler that will be used to retrieve passwords for keystores and UsernameToken.
ws-security.signature.properties - The properties file/object that contains the WSS4J properties for configuring the signature keystore and crypto objects.
ws-security.encryption.properties - The properties file/object that contains the WSS4J properties for configuring the encryption keystore and crypto objects.
ws-security.signature.username - The username or alias for the key in the signature keystore that will be used. If not specified, it uses the default alias set in the properties file. If that is also not set, and the keystore only contains a single key, that key will be used.
ws-security.encryption.username - The username or alias for the key in the encryption keystore that will be used. If not specified, it uses the default alias set in the properties file. If that is also not set, and the keystore only contains a single key, that key will be used. For the web service provider, the useReqSigCert keyword can be used to accept (encrypt) any client whose public key is in the service's truststore (defined in ws-security.encryption.properties).
ws-security.signature.crypto - Instead of specifying the signature properties, this can point to the full WSS4J Crypto object. This can allow easier programmatic configuration of the crypto information.
ws-security.encryption.crypto - Instead of specifying the encryption properties, this can point to the full WSS4J Crypto object. This can allow easier programmatic configuration of the crypto information.
ws-security.enable.streaming - Enable streaming (StAX-based) processing of WS-Security messages.
3.8.2. WS-Trust WS-Trust is a web service specification that defines extensions to WS-Security. It is a general framework for implementing security in a distributed system. The standard is based on a centralized Security Token Service (STS), which is capable of authenticating clients and issuing tokens containing various types of authentication and authorization data. The specification describes a protocol used for issuance, exchange, and validation of security tokens.
The following specifications play an important role in the WS-Trust architecture: WS-SecurityPolicy 1.2 SAML 2.0 Username Token Profile X.509 Token Profile SAML Token Profile Kerberos Token Profile The WS-Trust extensions address the needs of applications that span multiple domains and require the sharing of security keys. This occurs by providing a standards-based trusted third party web service (STS) to broker trust relationships between a web service requester and a web service provider. This architecture also alleviates the pain of service updates that require credential changes by providing a common location for this information. The STS is the common access point from which both the requester and the provider retrieve and verify security tokens. There are three main components of the WS-Trust specification: The Security Token Service (STS) for issuing, renewing, and validating security tokens. The message formats for security token requests and responses. The mechanisms for key exchange. The following section explains a basic WS-Trust scenario. For advanced scenarios, see Advanced WS-Trust Scenarios . 3.8.2.1. Scenario: Basic WS-Trust This section provides an example of a basic WS-Trust scenario. It comprises a web service requester ( ws-requester ), a web service provider ( ws-provider ), and a Security Token Service (STS). The ws-provider requires a SAML 2.0 token issued by a designated STS to be presented by the ws-requester using asymmetric binding. These communication requirements are declared in the WSDL of the ws-provider . The STS requires ws-requester credentials to be provided in a WSS UsernameToken format request using symmetric binding. The response from the STS contains a SAML 2.0 token. These communication requirements are declared in the WSDL of the STS. The ws-requester contacts the ws-provider and consumes its WSDL. On finding the security token issuer requirement, the ws-requester creates and configures the STSClient with the information required to generate a valid request. The STSClient contacts the STS and consumes its WSDL. The security policies are discovered. The STSClient creates and sends an authentication request with appropriate credentials. The STS verifies the credentials. In response, the STS issues a security token that provides proof that the ws-requester has authenticated with the STS. The STSClient presents a message with the security token to the ws-provider . The ws-provider verifies that the token was issued by the STS, and hence proves that the ws-requester has successfully authenticated with the STS. The ws-provider executes the requested service and returns the results to the ws-requester . 3.8.2.2. Apache CXF Support Apache CXF is an open-source, fully-featured web services framework. The JBossWS open source project integrates the JBoss Web Services (JBossWS) stack with the Apache CXF project modules to provide WS-Trust and other Jakarta XML Web Services functionality. This integration simplifies the deployment of Apache CXF STS implementations. The Apache CXF API also provides an STSClient utility to facilitate web service requester communication with its STS. 3.8.3. Security Token Service (STS) The Security Token Service (STS) is the core of the WS-Trust specification. It is a standards-based mechanism for authentication and authorization. The STS is an implementation of the WS-Trust specification's protocol for issuing, exchanging, and validating security tokens, based on token format, namespace, or trust boundaries.
The STS is a web service that acts as a trusted third party to broker trust relationships between a web service requester and a web service provider. It is a common access point trusted by both requester and provider to provide interoperable security tokens. It removes the need for a direct relationship between the requester and provider. The STS helps ensure interoperability across realms and between different platforms because it is a standards-based mechanism for authentication. The STS's WSDL contract defines how other applications and processes interact with it. In particular, the WSDL defines the WS-Trust and WS-Security policies that a requester must fulfill to successfully communicate with the STS's endpoints. A web service requester consumes the STS's WSDL and, with the aid of an STSClient utility, generates a message request compliant with the stated security policies and submits it to the STS endpoint. The STS validates the request and returns an appropriate response. 3.8.3.1. Configuring a PicketLink WS-Trust Security Token Service (STS) PicketLink STS provides options for building an alternative to the Apache CXF Security Token Service implementation. You can also use PicketLink to configure SAML SSO for web applications. For more details on configuring SAML SSO using PicketLink, see How To Set Up SSO with SAML v2 . To set up an application to serve as a PicketLink WS-Trust STS, the following steps must be performed: Create a security domain for the WS-Trust STS application. Configure the web.xml file for the WS-Trust STS application. Configure the authenticator for the WS-Trust STS application. Declare the necessary dependencies for the WS-Trust STS application. Configure the web-service portion of the WS-Trust STS application. Create and configure a picketlink.xml file for the WS-Trust STS application. Note The security domain should be created and configured before creating and deploying the application. 3.8.3.1.1. Create a Security Domain for the STS The STS handles authentication of a principal based on the credentials provided and issues the proper security token based on that result. This requires that an identity store be configured via a security domain. The only requirement when creating this security domain and identity store is that it has authentication and authorization mechanisms properly defined. This means that many different identity stores, for example properties files, databases, and LDAP, and their associated login modules could be used to support an STS application. For more information on security domains, see the Security Domains section of the JBoss EAP Security Architecture documentation. In the example below, a simple UsersRoles login module using properties files for an identity store is used. CLI Commands for Creating a Security Domain Resulting XML <security-domain name="sts" cache-type="default"> <authentication> <login-module code="UsersRoles" flag="required"> <module-option name="usersProperties" value="${jboss.server.config.dir}/sts-users.properties"/> <module-option name="rolesProperties" value="${jboss.server.config.dir}/sts-roles.properties"/> </login-module> </authentication> </security-domain> Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . Property Files The UsersRoles login module utilizes properties files to store the user/password and user/role information.
For more specifics of the UsersRoles module, please see the JBoss EAP Login Module Reference . In this example, the properties files contain the following: Example: sts-users.properties File Example: sts-roles.properties File Important You also need to create a keystore for signing and encrypting the security tokens. This keystore will be used when configuring the picketlink.xml file. 3.8.3.1.2. Configure the web.xml File for the STS The web.xml file for an STS should contain the following: A <servlet> to enable the STS functionality and a <servlet-mapping> to map its URL. A <security-constraint> with a <web-resource-collection> containing a <url-pattern> that maps to the URL pattern of the secured area. Optionally, <security-constraint> may also contain an <auth-constraint> stipulating the allowed roles. A <login-config> configured for BASIC authentication. If any roles were specified in the <auth-constraint> , those roles should be defined in a <security-role> . Example web.xml file: <web-app> <!-- Define STS servlet --> <servlet> <servlet-name>STS-servlet</servlet-name> <servlet-class>com.example.sts.PicketLinkSTService</servlet-class> </servlet> <servlet-mapping> <servlet-name>STS-servlet</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> <!-- Define a security constraint that requires the All role to access resources --> <security-constraint> <web-resource-collection> <web-resource-name>STS</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>All</role-name> </auth-constraint> </security-constraint> <!-- Define the Login Configuration for this Application --> <login-config> <auth-method>BASIC</auth-method> <realm-name>STS Realm</realm-name> </login-config> <!-- Security roles referenced by this web application --> <security-role> <description>The role that is required to log in to the IDP Application</description> <role-name>All</role-name> </security-role> </web-app> 3.8.3.1.3. Configure the Authenticator for the STS The authenticator is responsible for the authentication of users for issuing and validating security tokens. The authenticator is configured by defining the security domain to be used in authenticating and authorizing principals. The jboss-web.xml file should have the following: A <security-domain> to specify which security domain to use for authentication and authorization. Example: jboss-web.xml File <jboss-web> <security-domain>sts</security-domain> <context-root>SecureTokenService</context-root> </jboss-web> 3.8.3.1.4. Declare the Necessary Dependencies for the STS The web application serving as the STS requires a dependency to be defined in the jboss-deployment-structure.xml file so that the org.picketlink classes can be located. As JBoss EAP provides all necessary org.picketlink and related classes, the application just needs to declare them as dependencies to use them. Example: Using jboss-deployment-structure.xml to Declare Dependencies <jboss-deployment-structure> <deployment> <dependencies> <module name="org.picketlink"/> </dependencies> </deployment> </jboss-deployment-structure> 3.8.3.1.5. Configure the Web-Service Portion of the STS The web application serving as the STS requires that you define a web-service for clients to call to obtain their security tokens. This requires that you define in your WSDL a service name called PicketLinkSTS , and a port called PicketLinkSTSPort . You can, however, change the SOAP address to better reflect your target deployment environment. 
Example: PicketLinkSTS.wsdl File <?xml version="1.0"?> <wsdl:definitions name="PicketLinkSTS" targetNamespace="urn:picketlink:identity-federation:sts" xmlns:tns="urn:picketlink:identity-federation:sts" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:wsap10="http://www.w3.org/2006/05/addressing/wsdl" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"> <wsdl:types> <xs:schema targetNamespace="urn:picketlink:identity-federation:sts" xmlns:tns="urn:picketlink:identity-federation:sts" xmlns:xs="http://www.w3.org/2001/XMLSchema" version="1.0" elementFormDefault="qualified"> <xs:element name="MessageBody"> <xs:complexType> <xs:sequence> <xs:any minOccurs="0" maxOccurs="unbounded" namespace="##any"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> </wsdl:types> <wsdl:message name="RequestSecurityToken"> <wsdl:part name="rstMessage" element="tns:MessageBody"/> </wsdl:message> <wsdl:message name="RequestSecurityTokenResponse"> <wsdl:part name="rstrMessage" element="tns:MessageBody"/> </wsdl:message> <wsdl:portType name="SecureTokenService"> <wsdl:operation name="IssueToken"> <wsdl:input wsap10:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue" message="tns:RequestSecurityToken"/> <wsdl:output wsap10:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/Issue" message="tns:RequestSecurityTokenResponse"/> </wsdl:operation> </wsdl:portType> <wsdl:binding name="STSBinding" type="tns:SecureTokenService"> <soap12:binding transport="http://schemas.xmlsoap.org/soap/http"/> <wsdl:operation name="IssueToken"> <soap12:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue" style="document"/> <wsdl:input> <soap12:body use="literal"/> </wsdl:input> <wsdl:output> <soap12:body use="literal"/> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="PicketLinkSTS"> <wsdl:port name="PicketLinkSTSPort" binding="tns:STSBinding"> <soap12:address location="http://localhost:8080/SecureTokenService/PicketLinkSTS"/> </wsdl:port> </wsdl:service> </wsdl:definitions> In addition, you need a class for your web-service to use your WSDL: Example: PicketLinkSTS Class @WebServiceProvider(serviceName = "PicketLinkSTS", portName = "PicketLinkSTSPort", targetNamespace = "urn:picketlink:identity-federation:sts", wsdlLocation = "WEB-INF/wsdl/PicketLinkSTS.wsdl") @ServiceMode(value = Service.Mode.MESSAGE) public class PicketLinkSTService extends PicketLinkSTS { private static Logger log = Logger.getLogger(PicketLinkSTService.class.getName()); @Resource public void setWSC(WebServiceContext wctx) { log.debug("Setting WebServiceContext = " + wctx); this.context = wctx; } } 3.8.3.1.6. Create and Configure a picketlink.xml File for the STS The picketlink.xml file is responsible for the behavior of the authenticator and is loaded at the application's startup. The JBoss EAP Security Token Service defines several interfaces that provide extension points. Implementations can be plugged in and the default values can be specified for some properties using configuration. Similar to the IDP and SP configuration in How To Set Up SSO with SAML v2 , all STS configurations are specified in the picketlink.xml file of the deployed application. The following are the elements that can be configured in the picketlink.xml file. Note In the following text, a service provider refers to the web service that requires a security token to be presented by its clients. <PicketLinkSTS> : This is the root element. 
It defines attributes that allow the STS administrator to set the following properties: STSName : A string representing the name of the security token service. If not specified, the default PicketLinkSTS value is used. TokenTimeout : The token lifetime value in seconds. If not specified, the default value of 3600 (one hour) is used. EncryptToken : A boolean specifying whether issued tokens are to be encrypted or not. The default value is false . <KeyProvider> : This element and all its subelements are used to configure the keystore that is used by PicketLink STS to sign and encrypt tokens. Properties like the keystore location, its password, and the signing (private key) alias and password are all configured in this section. <TokenProviders> : This section specifies the TokenProvider implementations that must be used to handle each type of security token. In the example, there are two providers: one that handles tokens of type SAMLV1.1 and one that handles tokens of type SAMLV2.0 . The WSTrustRequestHandler calls the getProviderForTokenType(String type) method of STSConfiguration to obtain a reference to the appropriate TokenProvider . <ServiceProviders> : This section specifies the token types that must be used for each service provider, the web service that requires a security token. When a WS-Trust request does not contain the token type, the WSTrustRequestHandler must use the service provider endpoint to find out the type of the token that must be issued. Note When configuring PicketLink, POST binding is recommended as it provides enhanced security and does not pass the response within URL parameters. Example: picketlink.xml Configuration File <!DOCTYPE PicketLinkSTS> <PicketLinkSTS xmlns="urn:picketlink:federation:config:2.1" STSName="PicketLinkSTS" TokenTimeout="7200" EncryptToken="false"> <KeyProvider ClassName="org.picketlink.identity.federation.core.impl.KeyStoreKeyManager"> <Auth Key="KeyStoreURL" Value="sts_keystore.jks"/> <Auth Key="KeyStorePass" Value="testpass"/> <Auth Key="SigningKeyAlias" Value="sts"/> <Auth Key="SigningKeyPass" Value="keypass"/> <ValidatingAlias Key="http://services.testcorp.org/provider1" Value="service1"/> </KeyProvider> <TokenProviders> <TokenProvider ProviderClass="org.picketlink.identity.federation.core.wstrust.plugins.saml.SAML11TokenProvider" TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1" TokenElement="Assertion" TokenElementNS="urn:oasis:names:tc:SAML:1.0:assertion"/> <TokenProvider ProviderClass="org.picketlink.identity.federation.core.wstrust.plugins.saml.SAML20TokenProvider" TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" TokenElement="Assertion" TokenElementNS="urn:oasis:names:tc:SAML:2.0:assertion"/> </TokenProviders> <ServiceProviders> <ServiceProvider Endpoint="http://services.testcorp.org/provider1" TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" TruststoreAlias="service1"/> </ServiceProviders> </PicketLinkSTS> By default, the picketlink.xml file is located in the WEB-INF/classes directory of the STS web application. The PicketLink configuration file can also be loaded from the file system. To load the PicketLink configuration file from the file system, it must be named picketlink-sts.xml and be located in the ${user.home}/picketlink-store/sts/ directory. 3.8.3.2.
Using a WS-Trust Security Token Service (STS) with a Client To configure a client to obtain a security token from the STS, you need to make use of the org.picketlink.identity.federation.api.wstrust.WSTrustClient class to connect to the STS and ask for a token to be issued. First you need to instantiate the client: Example: Creating a WSTrustClient WSTrustClient client = new WSTrustClient("PicketLinkSTS", "PicketLinkSTSPort", "http://localhost:8080/SecureTokenService/PicketLinkSTS", new SecurityInfo(username, password)); Next, you need to use the WSTrustClient to ask for a token, for example a SAML assertion, to be issued: Example: Obtaining an Assertion org.w3c.dom.Element assertion = null; try { assertion = client.issueToken(SAMLUtil.SAML2_TOKEN_TYPE); } catch (WSTrustException wse) { System.out.println("Unable to issue assertion: " + wse.getMessage()); wse.printStackTrace(); } Once you have the assertion, there are two ways by which it can be included in and sent via the SOAP message: The client can push the SAML2 Assertion into the SOAP MessageContext under the key org.picketlink.trust.saml.assertion . For example: bindingProvider.getRequestContext().put(SAML2Constants.SAML2_ASSERTION_PROPERTY, assertion); The SAML2 Assertion is available as part of the JAAS subject on the security context. This can happen if there has been a JAAS interaction with the usage of PicketLink STS login modules. 3.8.3.3. STS Client Pooling Warning The STS client pooling feature is NOT supported in JBoss EAP. STS client pooling is a feature that allows you to configure a pool of STS clients on the server, thereby eliminating a possible bottleneck of STS client creation. Client pooling can be used for login modules that need an STS client to obtain SAML tickets. These include: org.picketlink.identity.federation.core.wstrust.auth.STSIssuingLoginModule org.picketlink.identity.federation.core.wstrust.auth.STSValidatingLoginModule org.picketlink.trust.jbossws.jaas.JBWSTokenIssuingLoginModule The default number of clients in the pool for each login module is configured using the initialNumberOfClients login module option. The org.picketlink.identity.federation.bindings.stspool.STSClientPoolFactory class provides client pool functionality to applications. Using STSClientPoolFactory STS clients are inserted into subpools using their STSClientConfig configuration as a key. To insert an STS client into a subpool, you need to obtain the STSClientPool instance and then initialize a subpool based on the configuration. Optionally, you can specify the initial number of STS clients when initializing the pool, or you can rely on the default number. Example: Inserting an STS Client into a Subpool final STSClientPool pool = STSClientPoolFactory.getPoolInstance(); pool.createPool(20, stsClientConfig); final STSClient client = pool.getClient(stsClientConfig); When you are done with a client, you can return it to the pool by calling the returnClient() method. Example: Returning an STS Client to the Subpool pool.returnClient(); Example: Checking If a Subpool Exists with a Given Configuration if (! pool.configExists(stsClientConfig)) { pool.createPool(stsClientConfig); } If the picketlink-federation subsystem is enabled, all client pools created for a deployment are destroyed automatically during the undeploy process. To manually destroy a pool: Example: Manually Destroying a Subpool pool.destroyPool(stsClientConfig); 3.8.4.
Propagating Authenticated Identity to the Jakarta Enterprise Beans Subsystem The webservices subsystem contains an adapter that allows you to configure an Elytron security domain to secure web service endpoints using either annotations or deployment descriptors. When Elytron security is enabled, the JAAS subject or principal can be pushed to the Apache CXF endpoint's SecurityContext to propagate the authenticated identity to the Jakarta Enterprise Beans container. The following is an example of how to use an Apache CXF interceptor to propagate authenticated information to the Jakarta Enterprise Beans container. public class PropagateSecurityInterceptor extends WSS4JInInterceptor { public PropagateSecurityInterceptor() { super(); getAfter().add(PolicyBasedWSS4JInInterceptor.class.getName()); } @Override public void handleMessage(SoapMessage message) throws Fault { ... final Endpoint endpoint = message.getExchange().get(Endpoint.class); final SecurityDomainContext securityDomainContext = endpoint.getSecurityDomainContext(); //push the subject and principal retrieved from CXF to the ElytronSecurityDomainContext securityDomainContext.pushSubjectContext(subject, principal, null); } } 3.9. Jakarta XML Web Services Logging You can handle logging for inbound and outbound messages using Jakarta XML Web Services handlers or Apache CXF logging interceptors . 3.9.1. Using Jakarta XML Web Services Handlers You can configure a Jakarta XML Web Services handler to log messages that are passed to it. This approach is portable as the handler can be added to the desired clients and endpoints programmatically by using the @HandlerChain Jakarta XML Web Services annotation. The predefined client and endpoint configuration mechanism allows you to add the logging handler to any client and endpoint combination, or to only some of the clients and endpoints. To add the logging handler to only some of the clients or endpoints, use the @EndpointConfig annotation and the JBossWS API. The org.jboss.ws.api.annotation.EndpointConfig annotation is used to assign an endpoint configuration to a Jakarta XML Web Services endpoint implementation. When assigning a configuration that is defined in the webservices subsystem, only the configuration name is specified. When assigning a configuration that is defined in the application, the relative path to the deployment descriptor and the configuration name must be specified. 3.9.2. Using Apache CXF Logging Interceptors Apache CXF also comes with logging interceptors that can be used to log messages to the console, client log files, or server log files. Those interceptors can be added to clients, endpoints, and buses in multiple ways, including: System property Setting the org.apache.cxf.logging.enabled system property to true causes the logging interceptors to be added to any bus instance being created on the JVM. You can also set the system property to pretty to produce nicely formatted XML output. You can use the management CLI to set this system property. Manual interceptor addition Logging interceptors can be selectively added to endpoints using the Apache CXF annotations @org.apache.cxf.interceptor.InInterceptors and @org.apache.cxf.interceptor.OutInterceptors . The same outcome is achieved on the client side by programmatically adding new instances of the logging interceptors to the client or the bus. 3.10.
Enabling Web Services Addressing (WS-Addressing) Web Services Addressing, or WS-Addressing, provides a transport-neutral mechanism to address web services and their associated messages. To enable WS-Addressing, you must add the @Addressing annotation to the web service endpoint and then configure the client to access it. The following examples assume your application has an existing Jakarta XML Web Services service and client configuration. See the jaxws-addressing quickstart that ships with JBoss EAP for a complete working example. Add the @Addressing annotation to the application's Jakarta XML Web Services endpoint code. Example: Jakarta XML Web Services Endpoint with @Addressing Annotation package org.jboss.quickstarts.ws.jaxws.samples.wsa; import org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface; import javax.jws.WebService; import javax.xml.ws.soap.Addressing; @WebService( portName = "AddressingServicePort", serviceName = "AddressingService", wsdlLocation = "WEB-INF/wsdl/AddressingService.wsdl", targetNamespace = "http://www.jboss.org/jbossws/ws-extensions/wsaddressing", endpointInterface = "org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface") @Addressing(enabled = true, required = true) public class ServiceImpl implements ServiceIface { public String sayHello() { return "Hello World!"; } } Update the Jakarta XML Web Services client code to configure WS-Addressing. Example: Jakarta XML Web Services Client Configured for WS-Addressing package org.jboss.quickstarts.ws.jaxws.samples.wsa; import java.net.URL; import javax.xml.namespace.QName; import javax.xml.ws.Service; import javax.xml.ws.soap.AddressingFeature; public final class AddressingClient { private static final String serviceURL = "http://localhost:8080/jaxws-addressing/AddressingService"; public static void main(String[] args) throws Exception { // construct proxy QName serviceName = new QName("http://www.jboss.org/jbossws/ws-extensions/wsaddressing", "AddressingService"); URL wsdlURL = new URL(serviceURL + "?wsdl"); Service service = Service.create(wsdlURL, serviceName); org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface proxy = (org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface) service.getPort(org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface.class, new AddressingFeature()); // invoke method System.out.println(proxy.sayHello()); } } The client and endpoint now communicate using WS-Addressing. 3.11. Enabling Web Services Reliable Messaging Web Services Reliable Messaging (WS-Reliable Messaging) is implemented internally in Apache CXF. A set of interceptors interacts with the low-level requirements of the reliable messaging protocol. To enable WS-Reliable Messaging, complete one of the following steps: Consume a WSDL contract that specifies proper WS-Reliable Messaging policies, assertions, or both. Manually add and configure the reliable messaging interceptors. Specify the reliable messaging policies in an optional CXF Spring XML descriptor. Specify the Apache CXF reliable messaging feature in an optional CXF Spring XML descriptor. The first approach, which is the only portable approach, relies on the Apache CXF WS-Policy engine. The other approaches, which are proprietary, allow for fine-grained configuration of the protocol aspects that are not covered by the WS-Reliable Messaging Policy. 3.12. Specifying Web Services Policies Web Services Policies (WS-Policy) rely on the Apache CXF WS-Policy framework. 
This framework is compliant with the following specifications: Web Services Policy 1.5 - Framework Web Services Policy 1.5 - Attachment You can work with the policies in different ways, including: Add policy assertions to WSDL contracts and let the runtime consume the assertions and behave accordingly. Specify endpoint policy attachments using either CXF annotations or features. Use the Apache CXF policy framework to define custom assertions and complete other tasks. 3.13. Apache CXF Integration All Jakarta XML Web Services functionality provided by JBossWS on top of JBoss EAP is currently served through a proper integration of the JBossWS stack with most of the Apache CXF project modules. Apache CXF is an open source services framework. It allows building and developing services using front-end programming APIs, including Jakarta XML Web Services, with services speaking a variety of protocols such as SOAP and XML/HTTP over a variety of transports such as HTTP and Jakarta Messaging. The integration layer between JBossWS and Apache CXF is mainly meant for: Allowing use of standard web services APIs, including Jakarta XML Web Services, on JBoss EAP; this is performed internally leveraging Apache CXF without requiring the user to deal with it; Allowing use of Apache CXF advanced features, including WS-*, on top of JBoss EAP without requiring the user to deal with, set up, or care about the required integration steps for running in such a container. In support of those goals, the JBossWS integration with Apache CXF supports the JBossWS endpoint deployment mechanism and comes with many internal customizations on top of Apache CXF. For more in-depth details on the Apache CXF architecture, refer to the Apache CXF official documentation . 3.13.1. Server-side Integration Customization The JBossWS server-side integration with Apache CXF takes care of internally creating proper Apache CXF structures for the provided web service deployment. If the deployment includes multiple endpoints, they will all exist within the same Apache CXF Bus, which is separate from other deployments' bus instances. While JBossWS sets sensible defaults for most of the Apache CXF configuration options on the server side, users might want to fine-tune the Bus instance that is created for their deployment; a jboss-webservices.xml descriptor can be used for deployment-level customizations. 3.13.1.1. Deployment Descriptor Properties The jboss-webservices.xml descriptor can be used to provide property values. <webservices xmlns="http://www.jboss.com/xml/ns/javaee" version="1.2"> ... <property> <name>...</name> <value>...</value> </property> ... </webservices> JBossWS integration with Apache CXF comes with a set of allowed property names to control Apache CXF internals. 3.13.1.2. WorkQueue Configuration Apache CXF uses WorkQueue instances for dealing with some operations, for example @Oneway request processing. A WorkQueueManager is installed in the Bus as an extension and allows for adding or removing queues as well as controlling the existing ones. On the server side, queues can be provided by using the cxf.queue.<queue-name>.* properties in jboss-webservices.xml . For example, you can use the cxf.queue.default.maxQueueSize property to configure the maximum queue size of the default WorkQueue . At the deployment time, the JBossWS integration can add new instances of AutomaticWorkQueueImpl to the currently configured WorkQueueManager . 
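A minimal jboss-webservices.xml fragment along these lines illustrates the idea; the property name comes from the text above, while the value of 500 is only an example:

<webservices xmlns="http://www.jboss.com/xml/ns/javaee" version="1.2">
  <!-- cap the default WorkQueue used by this deployment at 500 queued items -->
  <property>
    <name>cxf.queue.default.maxQueueSize</name>
    <value>500</value>
  </property>
</webservices>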
The properties below are used to fill in the AutomaticWorkQueueImpl constructor parameters: Table 3.5. AutomaticWorkQueueImpl Constructor Properties
Property - Default Value
cxf.queue.<queue-name>.maxQueueSize - 256
cxf.queue.<queue-name>.initialThreads - 0
cxf.queue.<queue-name>.highWaterMark - 25
cxf.queue.<queue-name>.lowWaterMark - 5
cxf.queue.<queue-name>.dequeueTimeout - 120000
3.13.1.3. Policy Alternative Selector The Apache CXF policy engine supports different strategies to deal with policy alternatives. JBossWS integration currently defaults to the MaximalAlternativeSelector , but still allows for setting a different selector implementation using the cxf.policy.alternativeSelector property in the jboss-webservices.xml file. 3.13.1.4. MBean Management Apache CXF allows you to manage its MBean objects that are installed into the JBoss EAP MBean server. You can enable this feature on a per-deployment basis through the cxf.management.enabled property in the jboss-webservices.xml file. You can also use the cxf.management.installResponseTimeInterceptors property to control installation of the CXF response time interceptors. These interceptors are added by default when MBean management is enabled, but they might not be required in some cases. Example: MBean Management in the jboss-webservices.xml File <webservices xmlns="http://www.jboss.com/xml/ns/javaee" version="1.2"> <property> <name>cxf.management.enabled</name> <value>true</value> </property> <property> <name>cxf.management.installResponseTimeInterceptors</name> <value>false</value> </property> </webservices> 3.13.1.5. Schema Validation Apache CXF includes a feature for validating incoming and outgoing SOAP messages on both the client and the server side. The validation is performed against the relevant schema in the endpoint WSDL contract (server side) or the WSDL contract used for building up the service proxy (client side). You can enable schema validation in any of the following ways: In the JBoss EAP server configuration. For example, you can use the management CLI to enable schema validation for the default Standard-Endpoint-Config endpoint configuration. In a predefined client or endpoint configuration file. You can associate any endpoint or client running in-container to a JBossWS predefined configuration by setting the schema-validation-enabled property to true in the referenced configuration file. Programmatically on the client side. On the client side, you can enable schema validation programmatically. For example: ((BindingProvider)proxy).getRequestContext().put("schema-validation-enabled", true); Using the @org.apache.cxf.annotations.SchemaValidation annotation on the server side. On the server side, you can use the @org.apache.cxf.annotations.SchemaValidation annotation. For example: import javax.jws.WebService; import org.apache.cxf.annotations.SchemaValidation; @WebService(...) @SchemaValidation public class ValidatingHelloImpl implements Hello { ... } 3.13.1.6. Apache CXF Interceptors The jboss-webservices.xml descriptor enables specifying the cxf.interceptors.in and cxf.interceptors.out properties. These properties allow you to attach the declared interceptors to the Bus instance that is created for serving the deployment.
Example: jboss-webservices.xml File <?xml version="1.1" encoding="UTF-8"?> <webservices xmlns="http://www.jboss.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.2" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee"> <property> <name>cxf.interceptors.in</name> <value>org.jboss.test.ws.jaxws.cxf.interceptors.BusInterceptor</value> </property> <property> <name>cxf.interceptors.out</name> <value>org.jboss.test.ws.jaxws.cxf.interceptors.BusCounterInterceptor</value> </property> </webservices> You can declare interceptors using one of the following approaches: Annotation usage on endpoint classes, for example @org.apache.cxf.interceptor.InInterceptor or @org.apache.cxf.interceptor.OutInterceptor . Direct API usage on the client side through the org.apache.cxf.interceptor.InterceptorProvider interface. JBossWS descriptor usage. Because Spring integration is no longer supported in JBoss EAP, the JBossWS integration uses the jaxws-endpoint-config.xml descriptor file to avoid requiring modifications to the actual client or endpoint code. You can declare interceptors within predefined client and endpoint configurations by specifying a list of interceptor class names for the cxf.interceptors.in and cxf.interceptors.out properties. Example: jaxws-endpoint-config.xml File <?xml version="1.0" encoding="UTF-8"?> <jaxws-config xmlns="urn:jboss:jbossws-jaxws-config:4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:javaee="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd"> <endpoint-config> <config-name>org.jboss.test.ws.jaxws.cxf.interceptors.EndpointImpl</config-name> <property> <property-name>cxf.interceptors.in</property-name> <property-value>org.jboss.test.ws.jaxws.cxf.interceptors.EndpointInterceptor,org.jboss.test.ws.jaxws.cxf.interceptors.FooInterceptor</property-value> </property> <property> <property-name>cxf.interceptors.out</property-name> <property-value>org.jboss.test.ws.jaxws.cxf.interceptors.EndpointCounterInterceptor</property-value> </property> </endpoint-config> </jaxws-config> Note A new instance of each specified interceptor class will be added to the client or endpoint to which the configuration is assigned. The interceptor classes must have a no-argument constructor. 3.13.1.7. Apache CXF Features The jboss-webservices.xml descriptor enables specifying the cxf.features property. This property allows you to declare features to be attached to any endpoint belonging to the Bus instance that is created for serving the deployment. Example: jboss-webservices.xml File <?xml version="1.1" encoding="UTF-8"?> <webservices xmlns="http://www.jboss.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.2" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee"> <property> <name>cxf.features</name> <value>org.apache.cxf.feature.FastInfosetFeature</value> </property> </webservices> You can declare features using one of the following approaches: Annotation usage on endpoint classes, for example @org.apache.cxf.feature.Features . Direct API usage on client side through extensions of the org.apache.cxf.feature.AbstractFeature class. JBossWS descriptor usage. Since Spring integration is no longer supported in JBoss EAP, the JBossWS integration adds an additional descriptor, a jaxws-endpoint-config.xml file-based approach to avoid requiring modifications to the actual client or endpoint code. 
You can declare features within predefined client and endpoint configurations by specifying a list of feature class names for the cxf.features property. Example: jaxws-endpoint-config.xml File <?xml version="1.0" encoding="UTF-8"?> <jaxws-config xmlns="urn:jboss:jbossws-jaxws-config:4.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:javaee="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd"> <endpoint-config> <config-name>Custom FI Config</config-name> <property> <property-name>cxf.features</property-name> <property-value>org.apache.cxf.feature.FastInfosetFeature</property-value> </property> </endpoint-config> </jaxws-config> Note A new instance of each specified feature class will be added to the client or endpoint the configuration is assigned to. The feature classes must have a no-argument constructor. 3.13.1.8. Properties-Driven Bean Creation The Apache CXF Interceptors and Apache CXF Features sections explain how to declare CXF interceptors and features through properties either in a client or endpoint predefined configuration or in a jboss-webservices.xml descriptor. When only the feature or interceptor class name is specified, the container tries to create a bean instance using the class's default constructor. This limits how the feature or interceptor can be configured, unless you provide custom extensions of the vanilla CXF classes whose default constructors set properties before delegating to the superclass constructor. To address this issue, JBossWS integration comes with a mechanism for configuring simple bean hierarchies when building them up from properties. Properties can have bean reference values, which are strings starting with ## . Property reference keys are used to specify the bean class name and the value for each attribute. For instance, the following properties result in the stack installing two feature instances: Key Value cxf.features ##foo, ##bar ##foo org.jboss.Foo ##foo.par 34 ##bar org.jboss.Bar ##bar.color blue The same result can be produced by the following code: import org.Bar; import org.Foo; ... Foo foo = new Foo(); foo.setPar(34); Bar bar = new Bar(); bar.setColor("blue"); This mechanism assumes that the classes are valid beans with proper getter() and setter() methods. Value objects are converted to the correct primitive type by inspecting the class definition. Nested beans can also be configured. | [
"package echo; @javax.jws.WebService public class Echo { public String echo(String input) { return input; } }",
"javac -d . Echo.java EAP_HOME/bin/wsprovide.sh --classpath=. -w echo.Echo",
"<wsdl:service name=\"EchoService\"> <wsdl:port name=\"EchoPort\" binding=\"tns:EchoServiceSoapBinding\"> <soap:address location=\"http://localhost:9090/EchoPort\"/> </wsdl:port> </wsdl:service>",
"<wsdl:portType name=\"Echo\"> <wsdl:operation name=\"echo\"> <wsdl:input name=\"echo\" message=\"tns:echo\"> </wsdl:input> <wsdl:output name=\"echoResponse\" message=\"tns:echoResponse\"> </wsdl:output> </wsdl:operation> </wsdl:portType>",
"<web-app xmlns=\"http://java.sun.com/xml/ns/j2ee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd\" version=\"2.4\"> <servlet> <servlet-name>Echo</servlet-name> <servlet-class>echo.Echo</servlet-class> </servlet> <servlet-mapping> <servlet-name>Echo</servlet-name> <url-pattern>/Echo</url-pattern> </servlet-mapping> </web-app>",
"mkdir -p WEB-INF/classes cp -rp echo WEB-INF/classes/ cp web.xml WEB-INF jar cvf echo.war WEB-INF added manifest adding: WEB-INF/(in = 0) (out= 0)(stored 0%) adding: WEB-INF/classes/(in = 0) (out= 0)(stored 0%) adding: WEB-INF/classes/echo/(in = 0) (out= 0)(stored 0%) adding: WEB-INF/classes/echo/Echo.class(in = 340) (out= 247)(deflated 27%) adding: WEB-INF/web.xml(in = 576) (out= 271)(deflated 52%)",
"EAP_HOME/bin/wsconsume.sh -k EchoService.wsdl",
"@WebService(targetNamespace = \"http://echo/\", name = \"Echo\") @XmlSeeAlso({ObjectFactory.class}) public interface Echo { @WebMethod @RequestWrapper(localName = \"echo\", targetNamespace = \"http://echo/\", className = \"echo.Echo_Type\") @ResponseWrapper(localName = \"echoResponse\", targetNamespace = \"http://echo/\", className = \"echo.EchoResponse\") @WebResult(name = \"return\", targetNamespace = \"\") public java.lang.String echo( @WebParam(name = \"arg0\", targetNamespace = \"\") java.lang.String arg0 ); }",
"package echo; @javax.jws.WebService(endpointInterface=\"echo.Echo\") public class EchoImpl implements Echo { public String echo(String arg0) { return arg0; } }",
"<wsdl:service name=\"EchoService\"> <wsdl:port name=\"EchoPort\" binding=\"tns:EchoServiceSoapBinding\"> <soap:address location=\"http://localhost.localdomain:8080/echo/Echo\"/> </wsdl:port> </wsdl:service>",
"EAP_HOME/bin/wsconsume.sh -k http://localhost:8080/echo/Echo?wsdl",
"@WebServiceClient(name = \"EchoService\", wsdlLocation = \"http://localhost:8080/echo/Echo?wsdl\", targetNamespace = \"http://echo/\") public class EchoService extends Service { public final static URL WSDL_LOCATION; public final static QName SERVICE = new QName(\"http://echo/\", \"EchoService\"); public final static QName EchoPort = new QName(\"http://echo/\", \"EchoPort\"); @WebEndpoint(name = \"EchoPort\") public Echo getEchoPort() { return super.getPort(EchoPort, Echo.class); } @WebEndpoint(name = \"EchoPort\") public Echo getEchoPort(WebServiceFeature... features) { return super.getPort(EchoPort, Echo.class, features); } }",
"import echo.*; public class EchoClient { public static void main(String args[]) { if (args.length != 1) { System.err.println(\"usage: EchoClient <message>\"); System.exit(1); } EchoService service = new EchoService(); Echo echo = service.getEchoPort(); System.out.println(\"Server said: \" + echo.echo(args0)); } }",
"EchoService service = new EchoService(); Echo echo = service.getEchoPort(); /* Set NEW Endpoint Location */ String endpointURL = \"http://NEW_ENDPOINT_URL\"; BindingProvider bp = (BindingProvider)echo; bp.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointURL); System.out.println(\"Server said: \" + echo.echo(args0));",
"@WebService @SOAPBinding(style = SOAPBinding.Style.RPC) public class JSEBean { @WebMethod public String echo(String input) { } }",
"<web-app ...> <servlet> <servlet-name>TestService</servlet-name> <servlet-class>org.jboss.quickstarts.ws.jaxws.samples.jsr181pojo.JSEBean01</servlet-class> </servlet> <servlet-mapping> <servlet-name>TestService</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app>",
"@Stateless @Remote(EJB3RemoteInterface.class) @WebService @SOAPBinding(style = SOAPBinding.Style.RPC) public class EJB3Bean implements EJB3RemoteInterface { @WebMethod public String echo(String input) { } }",
"package org.jboss.quickstarts.ws.jaxws.samples.retail.profile; import javax.ejb.Stateless; import javax.jws.WebService; import javax.jws.WebMethod; import javax.jws.soap.SOAPBinding; @Stateless @WebService( name = \"ProfileMgmt\", targetNamespace = \"http://org.jboss.ws/samples/retail/profile\", serviceName = \"ProfileMgmtService\") @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) public class ProfileMgmtBean { @WebMethod public DiscountResponse getCustomerDiscount(DiscountRequest request) { DiscountResponse dResponse = new DiscountResponse(); dResponse.setCustomer(request.getCustomer()); dResponse.setDiscount(10.00); return dResponse; } }",
"package org.jboss.test.ws.jaxws.samples.retail.profile; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlType; import org.jboss.test.ws.jaxws.samples.retail.Customer; @XmlAccessorType(XmlAccessType.FIELD) @XmlType( name = \"discountRequest\", namespace=\"http://org.jboss.ws/samples/retail/profile\", propOrder = { \"customer\" } ) public class DiscountRequest { protected Customer customer; public DiscountRequest() { } public DiscountRequest(Customer customer) { this.customer = customer; } public Customer getCustomer() { return customer; } public void setCustomer(Customer value) { this.customer = value; } }",
"jar -tf jaxws-samples-retail.jar org/jboss/test/ws/jaxws/samples/retail/profile/DiscountRequest.class org/jboss/test/ws/jaxws/samples/retail/profile/DiscountResponse.class org/jboss/test/ws/jaxws/samples/retail/profile/ObjectFactory.class org/jboss/test/ws/jaxws/samples/retail/profile/ProfileMgmt.class org/jboss/test/ws/jaxws/samples/retail/profile/ProfileMgmtBean.class org/jboss/test/ws/jaxws/samples/retail/profile/ProfileMgmtService.class org/jboss/test/ws/jaxws/samples/retail/profile/package-info.class",
"<definitions name='ProfileMgmtService' targetNamespace='http://org.jboss.ws/samples/retail/profile' xmlns='http://schemas.xmlsoap.org/wsdl/' xmlns:ns1='http://org.jboss.ws/samples/retail' xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/' xmlns:tns='http://org.jboss.ws/samples/retail/profile' xmlns:xsd='http://www.w3.org/2001/XMLSchema'> <types> <xs:schema targetNamespace='http://org.jboss.ws/samples/retail' version='1.0' xmlns:xs='http://www.w3.org/2001/XMLSchema'> <xs:complexType name='customer'> <xs:sequence> <xs:element minOccurs='0' name='creditCardDetails' type='xs:string'/> <xs:element minOccurs='0' name='firstName' type='xs:string'/> <xs:element minOccurs='0' name='lastName' type='xs:string'/> </xs:sequence> </xs:complexType> </xs:schema> <xs:schema targetNamespace='http://org.jboss.ws/samples/retail/profile' version='1.0' xmlns:ns1='http://org.jboss.ws/samples/retail' xmlns:tns='http://org.jboss.ws/samples/retail/profile' xmlns:xs='http://www.w3.org/2001/XMLSchema'> <xs:import namespace='http://org.jboss.ws/samples/retail'/> <xs:element name='getCustomerDiscount' nillable='true' type='tns:discountRequest'/> <xs:element name='getCustomerDiscountResponse' nillable='true' type='tns:discountResponse'/> <xs:complexType name='discountRequest'> <xs:sequence> <xs:element minOccurs='0' name='customer' type='ns1:customer'/> </xs:sequence> </xs:complexType> <xs:complexType name='discountResponse'> <xs:sequence> <xs:element minOccurs='0' name='customer' type='ns1:customer'/> <xs:element name='discount' type='xs:double'/> </xs:sequence> </xs:complexType> </xs:schema> </types> <message name='ProfileMgmt_getCustomerDiscount'> <part element='tns:getCustomerDiscount' name='getCustomerDiscount'/> </message> <message name='ProfileMgmt_getCustomerDiscountResponse'> <part element='tns:getCustomerDiscountResponse' name='getCustomerDiscountResponse'/> </message> <portType name='ProfileMgmt'> <operation name='getCustomerDiscount' parameterOrder='getCustomerDiscount'> <input message='tns:ProfileMgmt_getCustomerDiscount'/> <output message='tns:ProfileMgmt_getCustomerDiscountResponse'/> </operation> </portType> <binding name='ProfileMgmtBinding' type='tns:ProfileMgmt'> <soap:binding style='document' transport='http://schemas.xmlsoap.org/soap/http'/> <operation name='getCustomerDiscount'> <soap:operation soapAction=''/> <input> <soap:body use='literal'/> </input> <output> <soap:body use='literal'/> </output> </operation> </binding> <service name='ProfileMgmtService'> <port binding='tns:ProfileMgmtBinding' name='ProfileMgmtPort'> <!-- service address will be rewritten to actual one when WSDL is requested from running server --> <soap:address location='http://SERVER:PORT/jaxws-retail/ProfileMgmtBean'/> </port> </service> </definitions>",
"./wsconsume.sh --help WSConsumeTask is a cmd line tool that generates portable JAX-WS artifacts from a WSDL file. usage: org.jboss.ws.tools.cmd.WSConsume [options] <wsdl-url> options: -h, --help Show this help message -b, --binding=<file> One or more JAX-WS or Java XML Binding files -k, --keep Keep/Generate Java source -c --catalog=<file> Oasis XML Catalog file for entity resolution -p --package=<name> The target package for generated source -w --wsdlLocation=<loc> Value to use for @WebService.wsdlLocation -o, --output=<directory> The directory to put generated artifacts -s, --source=<directory> The directory to put Java source -t, --target=<2.0|2.1|2.2> The JAX-WS target -q, --quiet Be somewhat more quiet -v, --verbose Show full exception stack traces -l, --load-consumer Load the consumer and exit (debug utility) -e, --extension Enable SOAP 1.2 binding extension -a, --additionalHeaders Enable processing of implicit SOAP headers -n, --nocompile Do not compile generated sources",
"[user@host bin]USD wsconsume.sh -k -p org.jboss.test.ws.jaxws.samples.retail.profile ProfileMgmtService.wsdl output/org/jboss/test/ws/jaxws/samples/retail/profile/Customer.java output/org/jboss/test/ws/jaxws/samples/retail/profile/DiscountRequest.java output/org/jboss/test/ws/jaxws/samples/retail/profile/DiscountResponse.java output/org/jboss/test/ws/jaxws/samples/retail/profile/ObjectFactory.java output/org/jboss/test/ws/jaxws/samples/retail/profile/ProfileMgmt.java output/org/jboss/test/ws/jaxws/samples/retail/profile/ProfileMgmtService.java output/org/jboss/test/ws/jaxws/samples/retail/profile/package-info.java output/org/jboss/test/ws/jaxws/samples/retail/profile/Customer.class output/org/jboss/test/ws/jaxws/samples/retail/profile/DiscountRequest.class output/org/jboss/test/ws/jaxws/samples/retail/profile/DiscountResponse.class output/org/jboss/test/ws/jaxws/samples/retail/profile/ObjectFactory.class output/org/jboss/test/ws/jaxws/samples/retail/profile/ProfileMgmt.class output/org/jboss/test/ws/jaxws/samples/retail/profile/ProfileMgmtService.class output/org/jboss/test/ws/jaxws/samples/retail/profile/package-info.class",
"import javax.xml.ws.Service; [...] Service service = Service.create( new URL(\"http://example.org/service?wsdl\"), new QName(\"MyService\") ); ProfileMgmt profileMgmt = service.getPort(ProfileMgmt.class); // Use the service stub in your application",
"@WebServiceClient(name=\"StockQuoteService\", targetNamespace=\"http://example.com/stocks\", wsdlLocation=\"http://example.com/stocks.wsdl\") public class StockQuoteService extends javax.xml.ws.Service { public StockQuoteService() { super(new URL(\"http://example.com/stocks.wsdl\"), new QName(\"http://example.com/stocks\", \"StockQuoteService\")); } public StockQuoteService(String wsdlLocation, QName serviceName) { super(wsdlLocation, serviceName); } }",
"URL wsdlLocation = new URL(\"http://example.org/my.wsdl\"); QName serviceName = new QName(\"http://example.org/sample\", \"MyService\"); Service service = Service.create(wsdlLocation, serviceName);",
"public <T> T getPort(QName portName, Class<T> serviceEndpointInterface) public <T> T getPort(Class<T> serviceEndpointInterface)",
"@WebServiceClient(name = \"TestEndpointService\", targetNamespace = \"http://org.jboss.ws/wsref\", wsdlLocation = \"http://localhost.localdomain:8080/jaxws-samples-webserviceref?wsdl\") public class TestEndpointService extends Service { public TestEndpointService(URL wsdlLocation, QName serviceName) { super(wsdlLocation, serviceName); } @WebEndpoint(name = \"TestEndpointPort\") public TestEndpoint getTestEndpointPort() { return (TestEndpoint)super.getPort(TESTENDPOINTPORT, TestEndpoint.class); } }",
"public class EJB3Client implements EJB3Remote { @WebServiceRef public TestEndpointService service4; @WebServiceRef public TestEndpoint port3; }",
"Service service = Service.create(wsdlURL, serviceName); Dispatch dispatch = service.createDispatch(portName, StreamSource.class, Mode.PAYLOAD); String payload = \"<ns1:ping xmlns:ns1='http://oneway.samples.jaxws.ws.test.jboss.org/'/>\"; dispatch.invokeOneWay(new StreamSource(new StringReader(payload))); payload = \"<ns1:feedback xmlns:ns1='http://oneway.samples.jaxws.ws.test.jboss.org/'/>\"; Source retObj = (Source)dispatch.invoke(new StreamSource(new StringReader(payload)));",
"public void testInvokeAsync() throws Exception { URL wsdlURL = new URL(\"http://\" + getServerHost() + \":8080/jaxws-samples-asynchronous?wsdl\"); QName serviceName = new QName(targetNS, \"TestEndpointService\"); Service service = Service.create(wsdlURL, serviceName); TestEndpoint port = service.getPort(TestEndpoint.class); Response response = port.echoAsync(\"Async\"); // access future String retStr = (String) response.get(); assertEquals(\"Async\", retStr); }",
"@WebService (name=\"PingEndpoint\") @SOAPBinding(style = SOAPBinding.Style.RPC) public class PingEndpointImpl { private static String feedback; @WebMethod @Oneway public void ping() { log.info(\"ping\"); feedback = \"ok\"; } @WebMethod public String feedback() { log.info(\"feedback\"); return feedback; } }",
"public void testConfigureTimeout() throws Exception { //Set timeout until a connection is established ((BindingProvider)port).getRequestContext().put(\"javax.xml.ws.client.connectionTimeout\", \"6000\"); //Set timeout until the response is received ((BindingProvider) port).getRequestContext().put(\"javax.xml.ws.client.receiveTimeout\", \"1000\"); port.echo(\"testTimeout\"); }",
"<subsystem xmlns=\"urn:jboss:domain:webservices:2.0\"> <wsdl-host>USD{jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name=\"Standard-Endpoint-Config\"/> <endpoint-config name=\"Recording-Endpoint-Config\"> <pre-handler-chain name=\"recording-handlers\" protocol-bindings=\"##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM\"> <handler name=\"RecordingHandler\" class=\"org.jboss.ws.common.invocation.RecordingServerHandler\"/> </pre-handler-chain> </endpoint-config> <client-config name=\"Standard-Client-Config\"/> </subsystem>",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config:add",
"/subsystem=webservices/endpoint-config=Standard-Endpoint-Config/property= PROPERTY_NAME :add(value= PROPERTY_VALUE )",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config:remove",
"Endpoint --> PRE Handlers --> Endpoint Handlers --> POST Handlers --> ... --> Client",
"Client --> ... --> POST Handlers --> Endpoint Handlers --> PRE Handlers --> Endpoint",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-post-handler-chain:add",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config/pre-handler-chain=my-pre-handler-chain:add",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-post-handler-chain:write-attribute(name=protocol-bindings,value=##SOAP11_HTTP)",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-post-handler-chain:remove",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-post-handler-chain/handler=my-handler:add(class=\"com.arjuna.webservices11.wsarj.handler.InstanceIdentifierInHandler\")",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-post-handler-chain/handler=my-handler:add(class=\"org.jboss.ws.common.invocation.RecordingServerHandler\")",
"/subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-post-handler-chain/handler=my-handler:",
"/subsystem=webservices:write-attribute(name=wsdl-uri-scheme, value=https)",
"/deployment=\"jaxws-samples-handlerchain.war\"/subsystem=webservices/endpoint=\"jaxws-samples-handlerchain:TestService\":read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"average-processing-time\" => 23L, \"class\" => \"org.jboss.test.ws.jaxws.samples.handlerchain.EndpointImpl\", \"context\" => \"jaxws-samples-handlerchain\", \"fault-count\" => 0L, \"max-processing-time\" => 23L, \"min-processing-time\" => 23L, \"name\" => \"TestService\", \"request-count\" => 1L, \"response-count\" => 1L, \"total-processing-time\" => 23L, \"type\" => \"JAXWS_JSE\", \"wsdl-url\" => \"http://localhost:8080/jaxws-samples-handlerchain?wsdl\" } }",
"/subsystem=webservices:write-attribute(name=statistics-enabled,value=true)",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jaxws-config xmlns=\"urn:jboss:jbossws-jaxws-config:4.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:javaee=\"http://java.sun.com/xml/ns/javaee\" xsi:schemaLocation=\"urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd\"> <endpoint-config> <config-name>org.jboss.test.ws.jaxws.jbws3282.Endpoint4Impl</config-name> <pre-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Log Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.jbws3282.LogHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </pre-handler-chains> <post-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Routing Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.jbws3282.RoutingHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </post-handler-chains> </endpoint-config> <endpoint-config> <config-name>EP6-config</config-name> <post-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Authorization Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.jbws3282.AuthorizationHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </post-handler-chains> </endpoint-config> </jaxws-config>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jaxws-config xmlns=\"urn:jboss:jbossws-jaxws-config:4.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:javaee=\"http://java.sun.com/xml/ns/javaee\" xsi:schemaLocation=\"urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd\"> <client-config> <config-name>Custom Client Config</config-name> <pre-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Routing Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.clientConfig.RoutingHandler</javaee:handler-class> </javaee:handler> <javaee:handler> <javaee:handler-name>Custom Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.clientConfig.CustomHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </pre-handler-chains> </client-config> <client-config> <config-name>Another Client Config</config-name> <post-handler-chains> <javaee:handler-chain> <javaee:handler> <javaee:handler-name>Routing Handler</javaee:handler-name> <javaee:handler-class>org.jboss.test.ws.jaxws.clientConfig.RoutingHandler</javaee:handler-class> </javaee:handler> </javaee:handler-chain> </post-handler-chains> </client-config> </jaxws-config>",
"<subsystem xmlns=\"urn:jboss:domain:webservices:2.0\"> <!-- ... --> <endpoint-config name=\"Standard-Endpoint-Config\"/> <endpoint-config name=\"Recording-Endpoint-Config\"> <pre-handler-chain name=\"recording-handlers\" protocol-bindings=\"##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM\"> <handler name=\"RecordingHandler\" class=\"org.jboss.ws.common.invocation.RecordingServerHandler\"/> </pre-handler-chain> </endpoint-config> <client-config name=\"Standard-Client-Config\"/> </subsystem>",
"<jaxws-config xmlns=\"urn:jboss:jbossws-jaxws-config:4.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:javaee=\"http://java.sun.com/xml/ns/javaee\" xsi:schemaLocation=\"urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd\"> <endpoint-config> <config-name>Custom WS-Security Endpoint</config-name> <property> <property-name>ws-security.signature.properties</property-name> <property-value>bob.properties</property-value> </property> <property> <property-name>ws-security.encryption.properties</property-name> <property-value>bob.properties</property-value> </property> <property> <property-name>ws-security.signature.username</property-name> <property-value>bob</property-value> </property> <property> <property-name>ws-security.encryption.username</property-name> <property-value>alice</property-value> </property> <property> <property-name>ws-security.callback-handler</property-name> <property-value>org.jboss.test.ws.jaxws.samples.wsse.policy.basic.KeystorePasswordCallback</property-value> </property> </endpoint-config> </jaxws-config>",
"<subsystem xmlns=\"urn:jboss:domain:webservices:2.0\"> <!-- ... --> <endpoint-config name=\"Standard-Endpoint-Config\"> <property name=\"schema-validation-enabled\" value=\"true\"/> </endpoint-config> <!-- ... --> <client-config name=\"Standard-Client-Config\"> <property name=\"schema-validation-enabled\" value=\"true\"/> </client-config> </subsystem>",
"@EndpointConfig(configFile = \"WEB-INF/my-endpoint-config.xml\", configName = \"Custom WS-Security Endpoint\") public class ServiceImpl implements ServiceIface { public String sayHello() { return \"Secure Hello World!\"; } }",
"import org.jboss.ws.api.configuration.ClientConfigFeature; Service service = Service.create(wsdlURL, serviceName); Endpoint port = service.getPort(Endpoint.class, new ClientConfigFeature(\"META-INF/my-client-config.xml\", \"Custom Client Config\")); port.echo(\"Kermit\");",
"Endpoint port = service.getPort(Endpoint.class, new ClientConfigFeature(\"META-INF/my-client-config.xml\", \"Custom Client Config\"), true);",
"Endpoint port = service.getPort(Endpoint.class, new ClientConfigFeature(null, \"Container Custom Client Config\"));",
"import org.jboss.ws.api.configuration.ClientConfigUtil; import org.jboss.ws.api.configuration.ClientConfigurer; Service service = Service.create(wsdlURL, serviceName); Endpoint port = service.getPort(Endpoint.class); BindingProvider bp = (BindingProvider)port; ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigHandlers(bp, \"META-INF/my-client-config.xml\", \"Custom Client Config\"); port.echo(\"Kermit\");",
"ClientConfigUtil.setConfigHandlers(bp, \"META-INF/my-client-config.xml\", \"Custom Client Config\");",
"ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigHandlers(bp, null, \"Container Custom Client Config\");",
"import org.jboss.ws.api.configuration.ClientConfigUtil; import org.jboss.ws.api.configuration.ClientConfigurer; Service service = Service.create(wsdlURL, serviceName); Endpoint port = service.getPort(Endpoint.class); ClientConfigUtil.setConfigProperties(port, \"META-INF/my-client-config.xml\", \"Custom Client Config\"); port.echo(\"Kermit\");",
"ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigProperties(port, \"META-INF/my-client-config.xml\", \"Custom Client Config\");",
"ClientConfigurer configurer = ClientConfigUtil.resolveClientConfigurer(); configurer.setConfigProperties(port, null, \"Container Custom Client Config\");",
"<config-file>WEB-INF/jaxws-endpoint-config.xml</config-file>",
"Manifest-Version: 1.0 Dependencies: org.jboss.ws.cxf.jbossws-cxf-client services export,foo.bar",
"Dependencies: com.sun.xml.bind services export",
"Dependencies: org.apache.cxf services",
"Dependencies: org.jboss.ws.cxf.jbossws-cxf-client services",
"Dependencies: my.org annotations",
"<session-config> <session-timeout>30</session-timeout> </session-config>",
"/subsystem=undertow/servlet-container=default:write-attribute(name=default-session-timeout,value=30)",
"/subsystem=security/security-domain=sts:add(cache-type=default)",
"/subsystem=security/security-domain=sts/authentication=classic:add",
"/subsystem=security/security-domain=sts/authentication=classic/login-module=UsersRoles:add(code=UsersRoles,flag=required,module-options=[usersProperties=USD{jboss.server.config.dir}/sts-users.properties,rolesProperties=USD{jboss.server.config.dir}/sts-roles.properties])",
"reload",
"<security-domain name=\"sts\" cache-type=\"default\"> <authentication> <login-module code=\"UsersRoles\" flag=\"required\"> <module-option name=\"usersProperties\" value=\"USD{jboss.server.config.dir}/sts-users.properties\"/> <module-option name=\"rolesProperties\" value=\"USD{jboss.server.config.dir}/sts-roles.properties\"/> </login-module> </authentication> </security-domain>",
"Eric=samplePass Alan=samplePass",
"Eric=All Alan=",
"<web-app> <!-- Define STS servlet --> <servlet> <servlet-name>STS-servlet</servlet-name> <servlet-class>com.example.sts.PicketLinkSTService</servlet-class> </servlet> <servlet-mapping> <servlet-name>STS-servlet</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> <!-- Define a security constraint that requires the All role to access resources --> <security-constraint> <web-resource-collection> <web-resource-name>STS</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>All</role-name> </auth-constraint> </security-constraint> <!-- Define the Login Configuration for this Application --> <login-config> <auth-method>BASIC</auth-method> <realm-name>STS Realm</realm-name> </login-config> <!-- Security roles referenced by this web application --> <security-role> <description>The role that is required to log in to the IDP Application</description> <role-name>All</role-name> </security-role> </web-app>",
"<jboss-web> <security-domain>sts</security-domain> <context-root>SecureTokenService</context-root> </jboss-web>",
"<jboss-deployment-structure> <deployment> <dependencies> <module name=\"org.picketlink\"/> </dependencies> </deployment> </jboss-deployment-structure>",
"<?xml version=\"1.0\"?> <wsdl:definitions name=\"PicketLinkSTS\" targetNamespace=\"urn:picketlink:identity-federation:sts\" xmlns:tns=\"urn:picketlink:identity-federation:sts\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:wsap10=\"http://www.w3.org/2006/05/addressing/wsdl\" xmlns:soap12=\"http://schemas.xmlsoap.org/wsdl/soap12/\"> <wsdl:types> <xs:schema targetNamespace=\"urn:picketlink:identity-federation:sts\" xmlns:tns=\"urn:picketlink:identity-federation:sts\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" version=\"1.0\" elementFormDefault=\"qualified\"> <xs:element name=\"MessageBody\"> <xs:complexType> <xs:sequence> <xs:any minOccurs=\"0\" maxOccurs=\"unbounded\" namespace=\"##any\"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> </wsdl:types> <wsdl:message name=\"RequestSecurityToken\"> <wsdl:part name=\"rstMessage\" element=\"tns:MessageBody\"/> </wsdl:message> <wsdl:message name=\"RequestSecurityTokenResponse\"> <wsdl:part name=\"rstrMessage\" element=\"tns:MessageBody\"/> </wsdl:message> <wsdl:portType name=\"SecureTokenService\"> <wsdl:operation name=\"IssueToken\"> <wsdl:input wsap10:Action=\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue\" message=\"tns:RequestSecurityToken\"/> <wsdl:output wsap10:Action=\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/Issue\" message=\"tns:RequestSecurityTokenResponse\"/> </wsdl:operation> </wsdl:portType> <wsdl:binding name=\"STSBinding\" type=\"tns:SecureTokenService\"> <soap12:binding transport=\"http://schemas.xmlsoap.org/soap/http\"/> <wsdl:operation name=\"IssueToken\"> <soap12:operation soapAction=\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue\" style=\"document\"/> <wsdl:input> <soap12:body use=\"literal\"/> </wsdl:input> <wsdl:output> <soap12:body use=\"literal\"/> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name=\"PicketLinkSTS\"> <wsdl:port name=\"PicketLinkSTSPort\" binding=\"tns:STSBinding\"> <soap12:address location=\"http://localhost:8080/SecureTokenService/PicketLinkSTS\"/> </wsdl:port> </wsdl:service> </wsdl:definitions>",
"@WebServiceProvider(serviceName = \"PicketLinkSTS\", portName = \"PicketLinkSTSPort\", targetNamespace = \"urn:picketlink:identity-federation:sts\", wsdlLocation = \"WEB-INF/wsdl/PicketLinkSTS.wsdl\") @ServiceMode(value = Service.Mode.MESSAGE) public class PicketLinkSTService extends PicketLinkSTS { private static Logger log = Logger.getLogger(PicketLinkSTService.class.getName()); @Resource public void setWSC(WebServiceContext wctx) { log.debug(\"Setting WebServiceContext = \" + wctx); this.context = wctx; } }",
"<!DOCTYPE PicketLinkSTS> <PicketLinkSTS xmlns=\"urn:picketlink:federation:config:2.1\" STSName=\"PicketLinkSTS\" TokenTimeout=\"7200\" EncryptToken=\"false\"> <KeyProvider ClassName=\"org.picketlink.identity.federation.core.impl.KeyStoreKeyManager\"> <Auth Key=\"KeyStoreURL\" Value=\"sts_keystore.jks\"/> <Auth Key=\"KeyStorePass\" Value=\"testpass\"/> <Auth Key=\"SigningKeyAlias\" Value=\"sts\"/> <Auth Key=\"SigningKeyPass\" Value=\"keypass\"/> <ValidatingAlias Key=\"http://services.testcorp.org/provider1\" Value=\"service1\"/> </KeyProvider> <TokenProviders> <TokenProvider ProviderClass=\"org.picketlink.identity.federation.core.wstrust.plugins.saml.SAML11TokenProvider\" TokenType=\"http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1\" TokenElement=\"Assertion\" TokenElementNS=\"urn:oasis:names:tc:SAML:1.0:assertion\"/> <TokenProvider ProviderClass=\"org.picketlink.identity.federation.core.wstrust.plugins.saml.SAML20TokenProvider\" TokenType=\"http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0\" TokenElement=\"Assertion\" TokenElementNS=\"urn:oasis:names:tc:SAML:2.0:assertion\"/> </TokenProviders> <ServiceProviders> <ServiceProvider Endpoint=\"http://services.testcorp.org/provider1\" TokenType=\"http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0\" TruststoreAlias=\"service1\"/> </ServiceProviders> </PicketLinkSTS>",
"WSTrustClient client = new WSTrustClient(\"PicketLinkSTS\", \"PicketLinkSTSPort\", \"http://localhost:8080/SecureTokenService/PicketLinkSTS\", new SecurityInfo(username, password));",
"org.w3c.dom.Element assertion = null; try { assertion = client.issueToken(SAMLUtil.SAML2_TOKEN_TYPE); } catch (WSTrustException wse) { System.out.println(\"Unable to issue assertion: \" + wse.getMessage()); wse.printStackTrace(); }",
"bindingProvider.getRequestContext().put(SAML2Constants.SAML2_ASSERTION_PROPERTY, assertion);",
"final STSClientPool pool = STSClientPoolFactory.getPoolInstance(); pool.createPool(20, stsClientConfig); final STSClient client = pool.getClient(stsClientConfig);",
"pool.returnClient();",
"if (! pool.configExists(stsClientConfig) { pool.createPool(stsClientConfig); }",
"pool.destroyPool(stsClientConfig);",
"public class PropagateSecurityInterceptor extends WSS4JInInterceptor { public PropagateSecurityInterceptor() { super(); getAfter().add(PolicyBasedWSS4JInInterceptor.class.getName()); } @Override public void handleMessage(SoapMessage message) throws Fault { final Endpoint endpoint = message.getExchange().get(Endpoint.class); final SecurityDomainContext securityDomainContext = endpoint.getSecurityDomainContext(); //push subject principal retrieved from CXF to ElytronSecurityDomainContext securityDomainContext.pushSubjectContext(subject, principal, null) } }",
"/system-property=org.apache.cxf.logging.enabled:add(value=true)",
"package org.jboss.quickstarts.ws.jaxws.samples.wsa; import org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface; import javax.jws.WebService; import javax.xml.ws.soap.Addressing; @WebService( portName = \"AddressingServicePort\", serviceName = \"AddressingService\", wsdlLocation = \"WEB-INF/wsdl/AddressingService.wsdl\", targetNamespace = \"http://www.jboss.org/jbossws/ws-extensions/wsaddressing\", endpointInterface = \"org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface\") @Addressing(enabled = true, required = true) public class ServiceImpl implements ServiceIface { public String sayHello() { return \"Hello World!\"; } }",
"package org.jboss.quickstarts.ws.jaxws.samples.wsa; import java.net.URL; import javax.xml.namespace.QName; import javax.xml.ws.Service; import javax.xml.ws.soap.AddressingFeature; public final class AddressingClient { private static final String serviceURL = \"http://localhost:8080/jaxws-addressing/AddressingService\"; public static void main(String[] args) throws Exception { // construct proxy QName serviceName = new QName(\"http://www.jboss.org/jbossws/ws-extensions/wsaddressing\", \"AddressingService\"); URL wsdlURL = new URL(serviceURL + \"?wsdl\"); Service service = Service.create(wsdlURL, serviceName); org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface proxy = (org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface) service.getPort(org.jboss.quickstarts.ws.jaxws.samples.wsa.ServiceIface.class, new AddressingFeature()); // invoke method System.out.println(proxy.sayHello()); } }",
"<webservices xmlns=\"http://www.jboss.com/xml/ns/javaee\" version=\"1.2\"> <property> <name>...</name> <value>...</value> </property> </webservices>",
"<webservices xmlns=\"http://www.jboss.com/xml/ns/javaee\" version=\"1.2\"> <property> <name>cxf.management.enabled</name> <value>true</value> </property> <property> <name>cxf.management.installResponseTimeInterceptors</name> <value>false</value> </property> </webservices>",
"/subsystem=webservices/endpoint-config=Standard-Endpoint-Config/property=schema-validation-enabled:add(value=true)",
"((BindingProvider)proxy).getRequestContext().put(\"schema-validation-enabled\", true);",
"import javax.jws.WebService; import org.apache.cxf.annotations.SchemaValidation; @WebService(...) @SchemaValidation public class ValidatingHelloImpl implements Hello { }",
"<?xml version=\"1.1\" encoding=\"UTF-8\"?> <webservices xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" version=\"1.2\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee\"> <property> <name>cxf.interceptors.in</name> <value>org.jboss.test.ws.jaxws.cxf.interceptors.BusInterceptor</value> </property> <property> <name>cxf.interceptors.out</name> <value>org.jboss.test.ws.jaxws.cxf.interceptors.BusCounterInterceptor</value> </property> </webservices>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jaxws-config xmlns=\"urn:jboss:jbossws-jaxws-config:4.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:javaee=\"http://java.sun.com/xml/ns/javaee\" xsi:schemaLocation=\"urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd\"> <endpoint-config> <config-name>org.jboss.test.ws.jaxws.cxf.interceptors.EndpointImpl</config-name> <property> <property-name>cxf.interceptors.in</property-name> <property-value>org.jboss.test.ws.jaxws.cxf.interceptors.EndpointInterceptor,org.jboss.test.ws.jaxws.cxf.interceptors.FooInterceptor</property-value> </property> <property> <property-name>cxf.interceptors.out</property-name> <property-value>org.jboss.test.ws.jaxws.cxf.interceptors.EndpointCounterInterceptor</property-value> </property> </endpoint-config> </jaxws-config>",
"<?xml version=\"1.1\" encoding=\"UTF-8\"?> <webservices xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" version=\"1.2\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee\"> <property> <name>cxf.features</name> <value>org.apache.cxf.feature.FastInfosetFeature</value> </property> </webservices>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jaxws-config xmlns=\"urn:jboss:jbossws-jaxws-config:4.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:javaee=\"http://java.sun.com/xml/ns/javaee\" xsi:schemaLocation=\"urn:jboss:jbossws-jaxws-config:4.0 schema/jbossws-jaxws-config_4_0.xsd\"> <endpoint-config> <config-name>Custom FI Config</config-name> <property> <property-name>cxf.features</property-name> <property-value>org.apache.cxf.feature.FastInfosetFeature</property-value> </property> </endpoint-config> </jaxws-config>",
"import org.Bar; import org.Foo; Foo foo = new Foo(); foo.setPar(34); Bar bar = new Bar(); bar.setColor(\"blue\");"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_web_services_applications/developing_jax_ws_web_services |
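One of the interceptor approaches listed above, direct API usage on the client side through the org.apache.cxf.interceptor.InterceptorProvider interface, is not shown in the examples. The sketch below is a minimal, illustrative version of that approach; the ResponseCodeInterceptor class and the echo proxy variable are assumptions made for the sake of the example and are not part of the quickstart code.

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// Hypothetical interceptor that logs the HTTP response code of every inbound message.
public class ResponseCodeInterceptor extends AbstractPhaseInterceptor<Message> {
    public ResponseCodeInterceptor() {
        super(Phase.RECEIVE); // run early in the inbound interceptor chain
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        Integer code = (Integer) message.get(Message.RESPONSE_CODE);
        System.out.println("Inbound HTTP response code: " + code);
    }
}

// In the client code: a CXF-created JAX-WS proxy can be unwrapped to the underlying
// Client, which implements InterceptorProvider, so interceptors can be added directly.
Client client = ClientProxy.getClient(echo); // "echo" is the proxy returned by service.getEchoPort()
client.getInInterceptors().add(new ResponseCodeInterceptor());

Interceptors added this way apply only to that proxy instance, whereas the cxf.interceptors.in and cxf.interceptors.out properties shown above apply to every client or endpoint that uses the deployment descriptor or the predefined configuration.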
Chapter 1. Prerequisites | Chapter 1. Prerequisites You can use installer-provisioned installation to install OpenShift Container Platform on IBM Cloud(R) Bare Metal (Classic) nodes. This document describes the prerequisites and procedures when installing OpenShift Container Platform on IBM Cloud(R) nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud(R) deployments. A provisioning network is required. Installer-provisioned installation of OpenShift Container Platform requires: One node with Red Hat Enterprise Linux (RHEL) 8.x installed, for running the provisioner Three control plane nodes One routable network One provisioning network Before starting an installer-provisioned installation of OpenShift Container Platform on IBM Cloud(R) Bare Metal (Classic), address the following prerequisites and requirements. 1.1. Setting up IBM Cloud Bare Metal (Classic) infrastructure To deploy an OpenShift Container Platform cluster on IBM Cloud(R) Bare Metal (Classic) infrastructure, you must first provision the IBM Cloud(R) nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Redfish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud(R) deployments. The provisioning network is required. You can customize IBM Cloud(R) nodes using the IBM Cloud(R) API. When creating IBM Cloud(R) nodes, you must consider the following requirements. Use one data center per cluster All nodes in the OpenShift Container Platform cluster must run in the same IBM Cloud(R) data center. Create public and private VLANs Create all nodes with a single public VLAN and a single private VLAN. Ensure subnets have sufficient IP addresses IBM Cloud(R) public VLAN subnets use a /28 prefix by default, which provides 16 IP addresses. That is sufficient for a cluster consisting of three control plane nodes, four worker nodes, and two IP addresses for the API VIP and Ingress VIP on the baremetal network. For larger clusters, you might need a smaller prefix. IBM Cloud(R) private VLAN subnets use a /26 prefix by default, which provides 64 IP addresses. IBM Cloud(R) Bare Metal (Classic) uses private network IP addresses to access the Baseboard Management Controller (BMC) of each node. OpenShift Container Platform creates an additional subnet for the provisioning network. Network traffic for the provisioning network subnet routes through the private VLAN. For larger clusters, you might need a smaller prefix. Table 1.1. IP addresses per prefix IP addresses Prefix 32 /27 64 /26 128 /25 256 /24 Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. baremetal : The baremetal network is a routable network. You can use any NIC order to interface with the baremetal network, provided it is not the NIC specified in the provisioningNetworkInterface configuration setting or the NIC associated with a node's bootMACAddress configuration setting for the provisioning network. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs.
For example: NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> In the example, NIC1 on all control plane and worker nodes connects to the non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC2 on all control plane and worker nodes connects to the routable baremetal network. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. 2 Note Ensure PXE is enabled on the NIC used for the provisioning network and is disabled on all other NICs. Configuring canonical names Clients access the OpenShift Container Platform cluster nodes over the baremetal network. Configure IBM Cloud(R) subdomains or subzones where the canonical name extension is the cluster name. For example: Creating DNS entries You must create DNS A record entries resolving to unused IP addresses on the public subnet for the following: Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Control plane and worker nodes already have DNS entries after provisioning. The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Provisioner node provisioner.<cluster_name>.<domain> <ip> Master-0 openshift-master-0.<cluster_name>.<domain> <ip> Master-1 openshift-master-1.<cluster_name>.<domain> <ip> Master-2 openshift-master-2.<cluster_name>.<domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<domain> <ip> OpenShift Container Platform includes functionality that uses cluster membership information to generate A records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. Important After provisioning the IBM Cloud(R) nodes, you must create a DNS entry for the api.<cluster_name>.<domain> domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the api.<cluster_name>.<domain> domain name in the external DNS server prevents worker nodes from joining the cluster. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. Configure a DHCP server IBM Cloud(R) Bare Metal (Classic) does not run DHCP on the public or private VLANs. After provisioning IBM Cloud(R) nodes, you must set up a DHCP server for the public VLAN, which corresponds to OpenShift Container Platform's baremetal network. Note The IP addresses allocated to each node do not need to match the IP addresses allocated by the IBM Cloud(R) Bare Metal (Classic) provisioning system. See the "Configuring the public subnet" section for details. 
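To make the DNS requirement above concrete, the two records you must create could look like the following in a BIND-style zone file; the cluster name openshift , the domain example.com , and the 203.0.113.x addresses are placeholder assumptions only, so substitute your own cluster name, domain, and unused IP addresses from the public subnet.

; Hypothetical zone file fragment for the required A records
api.openshift.example.com.     IN A 203.0.113.10   ; API VIP
*.apps.openshift.example.com.  IN A 203.0.113.11   ; Ingress LB (apps)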
Ensure BMC access privileges The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to OPERATOR so that Ironic can make those changes. In the install-config.yaml file, add the privilegelevel parameter to the URLs used to configure each BMC. See the "Configuring the install-config.yaml file" section for additional details. For example: ipmi://<IP>:<port>?privilegelevel=OPERATOR Alternatively, contact IBM Cloud(R) support and request that they increase the IPMI privileges to ADMINISTRATOR for each node. Create bare metal servers Create bare metal servers in the IBM Cloud(R) dashboard by navigating to Create resource Bare Metal Servers for Classic . Alternatively, you can create bare metal servers with the ibmcloud CLI utility. For example: USD ibmcloud sl hardware create --hostname <SERVERNAME> \ --domain <DOMAIN> \ --size <SIZE> \ --os <OS-TYPE> \ --datacenter <DC-NAME> \ --port-speed <SPEED> \ --billing <BILLING> See Installing the stand-alone IBM Cloud(R) CLI for details on installing the IBM Cloud(R) CLI. Note IBM Cloud(R) servers might take 3-5 hours to become available. | [
"<cluster_name>.<domain>",
"test-cluster.example.com",
"ipmi://<IP>:<port>?privilegelevel=OPERATOR",
"ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_ibm_cloud_bare_metal_classic/install-ibm-cloud-prerequisites |
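The privilegelevel query parameter described above is set on each host's BMC address in the install-config.yaml file. The following is a minimal sketch of one host entry; the host name, role, and placeholder values are illustrative assumptions, and the complete file is covered in the "Configuring the install-config.yaml file" section referenced above.

platform:
  baremetal:
    hosts:
      # One entry per bare-metal server; repeat for each control plane and worker node.
      - name: openshift-master-0
        role: master
        bmc:
          # IPMI address on the private VLAN; OPERATOR privilege lets Ironic change boot targets.
          address: ipmi://<private_ip>?privilegelevel=OPERATOR
          username: <user>
          password: <password>
        bootMACAddress: <NIC1_mac_address>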
Chapter 3. Cluster capabilities | Chapter 3. Cluster capabilities Cluster administrators can use cluster capabilities to enable or disable optional components prior to installation. Cluster administrators can enable cluster capabilities at any time after installation. Note Cluster administrators cannot disable a cluster capability after it is enabled. 3.1. Enabling cluster capabilities If you are using an installation method that includes customizing your cluster by creating an install-config.yaml file, you can select which cluster capabilities you want to make available on the cluster. Note If you customize your cluster by enabling or disabling specific cluster capabilities, you must manually maintain your install-config.yaml file. New OpenShift Container Platform updates might declare new capability handles for existing components, or introduce new components altogether. Users who customize their install-config.yaml file should consider periodically updating their install-config.yaml file as OpenShift Container Platform is updated. You can use the following configuration parameters to select cluster capabilities: capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage 1 Defines a baseline set of capabilities to install. Valid values are None , vCurrent and v4.x . If you select None , all optional capabilities are disabled. The default value is vCurrent , which enables all optional capabilities. Note v4.x refers to any value up to and including the current cluster version. For example, valid values for an OpenShift Container Platform 4.12 cluster are v4.11 and v4.12 . 2 Defines a list of capabilities to explicitly enable. These capabilities are enabled in addition to the capabilities specified in baselineCapabilitySet . Note In this example, the baseline capability set is v4.11 . The additionalEnabledCapabilities field enables additional capabilities over the default v4.11 capability set. The following table describes the baselineCapabilitySet values. Table 3.1. Cluster capabilities baselineCapabilitySet values description Value Description vCurrent Specify this option when you want to automatically add new, default capabilities that are introduced in new releases. v4.11 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.11. By specifying v4.11 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.11 are baremetal , MachineAPI , marketplace , and openshift-samples . v4.12 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.12. By specifying v4.12 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.12 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , and CSISnapshot . v4.13 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.13. By specifying v4.13 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.13 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , and NodeTuning .
v4.14 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.14. By specifying v4.14 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.14 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , and DeploymentConfig . v4.15 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.15. By specifying v4.15 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.15 are baremetal , MachineAPI , marketplace , OperatorLifecycleManager , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , CloudCredential , and DeploymentConfig . None Specify when the other sets are too large, and you do not need any capabilities or want to fine-tune via additionalEnabledCapabilities . Additional resources Installing a cluster on AWS with customizations Installing a cluster on GCP with customizations 3.2. Optional cluster capabilities in OpenShift Container Platform 4.15 Currently, cluster Operators provide the features for these optional capabilities. The following summarizes the features provided by each capability and what functionality you lose if it is disabled. Additional resources Cluster Operators reference 3.2.1. Bare-metal capability Purpose The Cluster Baremetal Operator provides the features for the baremetal capability. The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. The bare-metal capability is required for deployments using installer-provisioned infrastructure. Disabling the bare-metal capability can result in unexpected problems with these deployments. It is recommended that cluster administrators only disable the bare-metal capability during installations with user-provisioned infrastructure that do not have any BareMetalHost resources in the cluster. Important If the bare-metal capability is disabled, the cluster cannot provision or manage bare-metal nodes. Only disable the capability if there are no BareMetalHost resources in your deployment. The baremetal capability depends on the MachineAPI capability. If you enable the baremetal capability, you must also enable MachineAPI . Additional resources Deploying installer-provisioned clusters on bare metal Preparing for bare metal cluster installation Bare metal configuration 3.2.2. Build capability Purpose The Build capability enables the Build API. The Build API manages the lifecycle of Build and BuildConfig objects. Important If the Build capability is disabled, the cluster cannot use Build or BuildConfig resources. Disable the capability only if Build and BuildConfig resources are not required in the cluster. 3.2.3. Cloud credential capability Purpose The Cloud Credential Operator provides features for the CloudCredential capability. 
Note Currently, disabling the CloudCredential capability is only supported for bare-metal clusters. The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Additional resources About the Cloud Credential Operator 3.2.4. Cluster Image Registry capability Purpose The Cluster Image Registry Operator provides features for the ImageRegistry capability. The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. In order to integrate the image registry into the cluster's user authentication and authorization system, a service account token secret and an image pull secret are generated for each service account in the cluster. Important If you disable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, the service account token secret and image pull secret are not generated for each service account. If you disable the ImageRegistry capability, you can reduce the overall resource footprint of OpenShift Container Platform in resource-constrained environments. Depending on your deployment, you can disable this component if you do not need it. Project cluster-image-registry-operator Additional resources Image Registry Operator in OpenShift Container Platform Automatically generated secrets 3.2.5. Cluster storage capability Purpose The Cluster Storage Operator provides the features for the Storage capability. The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Important If the cluster storage capability is disabled, the cluster will not have a default storageclass or any CSI drivers. Users with administrator privileges can create a default storageclass and manually install CSI drivers if the cluster storage capability is disabled. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. 3.2.6. 
Console capability Purpose The Console Operator provides the features for the Console capability. The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Additional resources Web console overview 3.2.7. CSI snapshot controller capability Purpose The Cluster CSI Snapshot Controller Operator provides the features for the CSISnapshot capability. The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Additional resources CSI volume snapshots 3.2.8. DeploymentConfig capability Purpose The DeploymentConfig capability enables and manages the DeploymentConfig API. Important If you disable the DeploymentConfig capability, the following resources will not be available in the cluster: DeploymentConfig resources The deployer service account Disable the DeploymentConfig capability only if you do not require DeploymentConfig resources and the deployer service account in the cluster. 3.2.9. Insights capability Purpose The Insights Operator provides the features for the Insights capability. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Using Insights Operator 3.2.10. Machine API capability Purpose The machine-api-operator , cluster-autoscaler-operator , and cluster-control-plane-machine-set-operator Operators provide the features for the MachineAPI capability. You can disable this capability only if you install a cluster with user-provisioned infrastructure. The Machine API capability is responsible for all machine configuration and management in the cluster. If you disable the Machine API capability during installation, you need to manage all machine-related tasks manually. Additional resources Overview of machine management Machine API Operator Cluster Autoscaler Operator Control Plane Machine Set Operator 3.2.11. Marketplace capability Purpose The Marketplace Operator provides the features for the marketplace capability. The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. If you disable the marketplace capability, the Marketplace Operator does not create the openshift-marketplace namespace. Catalog sources can still be configured and managed on the cluster manually, but OLM depends on the openshift-marketplace namespace in order to make catalogs available to all namespaces on the cluster. Users with elevated permissions to create namespaces prefixed with openshift- , such as system or cluster administrators, can manually create the openshift-marketplace namespace. 
If you enable the marketplace capability, you can enable and disable individual catalogs by configuring the Marketplace Operator. Additional resources Red Hat-provided Operator catalogs 3.2.12. Node Tuning capability Purpose The Node Tuning Operator provides features for the NodeTuning capability. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. If you disable the NodeTuning capability, some default tuning settings will not be applied to the control-plane nodes. This might limit the scalability and performance of large clusters with over 900 nodes or 900 routes. Additional resources Using the Node Tuning Operator 3.2.13. OpenShift samples capability Purpose The Cluster Samples Operator provides the features for the openshift-samples capability. The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. If you disable the samples capability, users cannot access the image streams, samples, and templates it provides. Depending on your deployment, you might want to disable this component if you do not need it. Additional resources Configuring the Cluster Samples Operator 3.2.14. Operator Lifecycle Manager capability Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. If an Operator requires any of the following APIs, then you must enable the OperatorLifecycleManager capability: ClusterServiceVersion CatalogSource Subscription InstallPlan OperatorGroup Important The marketplace capability depends on the OperatorLifecycleManager capability. You cannot disable the OperatorLifecycleManager capability and enable the marketplace capability. Additional resources Operator Lifecycle Manager concepts and resources 3.3. Viewing the cluster capabilities As a cluster administrator, you can view the capabilities by using the clusterversion resource status. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To view the status of the cluster capabilities, run the following command: USD oc get clusterversion version -o jsonpath='{.spec.capabilities}{"\n"}{.status.capabilities}{"\n"}' Example output {"additionalEnabledCapabilities":["openshift-samples"],"baselineCapabilitySet":"None"} {"enabledCapabilities":["openshift-samples"],"knownCapabilities":["CSISnapshot","Console","Insights","Storage","baremetal","marketplace","openshift-samples"]} 3.4. 
Enabling the cluster capabilities by setting baseline capability set As a cluster administrator, you can enable cluster capabilities any time after an OpenShift Container Platform installation by setting the baselineCapabilitySet configuration parameter. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To set the baselineCapabilitySet configuration parameter, run the following command: USD oc patch clusterversion version --type merge -p '{"spec":{"capabilities":{"baselineCapabilitySet":"vCurrent"}}}' 1 1 For baselineCapabilitySet you can specify vCurrent , v4.15 , or None . 3.5. Enabling the cluster capabilities by setting additional enabled capabilities As a cluster administrator, you can enable cluster capabilities any time after an OpenShift Container Platform installation by setting the additionalEnabledCapabilities configuration parameter. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure View the additional enabled capabilities by running the following command: USD oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{"\n"}' Example output ["openshift-samples"] To set the additionalEnabledCapabilities configuration parameter, run the following command: USD oc patch clusterversion/version --type merge -p '{"spec":{"capabilities":{"additionalEnabledCapabilities":["openshift-samples", "marketplace"]}}}' Important It is not possible to disable a capability that is already enabled in a cluster. The Cluster Version Operator (CVO) continues to reconcile any capability that is already enabled in the cluster. If you try to disable a capability, the CVO shows the divergent spec: USD oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="ImplicitlyEnabledCapabilities")]}{"\n"}' Example output {"lastTransitionTime":"2022-07-22T03:14:35Z","message":"The following capabilities could not be disabled: openshift-samples","reason":"CapabilitiesImplicitlyEnabled","status":"True","type":"ImplicitlyEnabledCapabilities"} Note During cluster upgrades, it is possible that a given capability is implicitly enabled. If a resource was already running on the cluster before the upgrade, then any capability that is part of that resource is enabled. For example, during a cluster upgrade, a resource that is already running on the cluster might be changed by the system to be part of the marketplace capability. Even if a cluster administrator did not explicitly enable the marketplace capability, it is implicitly enabled by the system. | [
"capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage",
"oc get clusterversion version -o jsonpath='{.spec.capabilities}{\"\\n\"}{.status.capabilities}{\"\\n\"}'",
"{\"additionalEnabledCapabilities\":[\"openshift-samples\"],\"baselineCapabilitySet\":\"None\"} {\"enabledCapabilities\":[\"openshift-samples\"],\"knownCapabilities\":[\"CSISnapshot\",\"Console\",\"Insights\",\"Storage\",\"baremetal\",\"marketplace\",\"openshift-samples\"]}",
"oc patch clusterversion version --type merge -p '{\"spec\":{\"capabilities\":{\"baselineCapabilitySet\":\"vCurrent\"}}}' 1",
"oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{\"\\n\"}'",
"[\"openshift-samples\"]",
"oc patch clusterversion/version --type merge -p '{\"spec\":{\"capabilities\":{\"additionalEnabledCapabilities\":[\"openshift-samples\", \"marketplace\"]}}}'",
"oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type==\"ImplicitlyEnabledCapabilities\")]}{\"\\n\"}'",
"{\"lastTransitionTime\":\"2022-07-22T03:14:35Z\",\"message\":\"The following capabilities could not be disabled: openshift-samples\",\"reason\":\"CapabilitiesImplicitlyEnabled\",\"status\":\"True\",\"type\":\"ImplicitlyEnabledCapabilities\"}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installation_overview/cluster-capabilities |
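A minimal day-1 sketch, for readers who prefer to set capabilities at installation time rather than patching the ClusterVersion object afterwards: the capabilities stanza below would go in install-config.yaml before cluster creation. The specific baseline and capability names are illustrative values only, not taken from the original procedure; the selection respects the dependencies described above (baremetal requires MachineAPI, and marketplace requires OperatorLifecycleManager).
capabilities:
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
  - MachineAPI
  - baremetal
  - OperatorLifecycleManager
  - marketplace
As with the oc patch commands, capabilities enabled this way cannot be disabled later.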
Chapter 5. Creating Cross-forest Trusts with Active Directory and Identity Management | Chapter 5. Creating Cross-forest Trusts with Active Directory and Identity Management This chapter describes creating cross-forest trusts between Active Directory and Identity Management. Of the two methods for integrating Identity Management and Active Directory (AD) environments indirectly, a cross-forest trust is the recommended one. The other method is synchronization. If you are unsure which method to choose for your environment, read Section 1.3, "Indirect Integration" . Kerberos implements a concept of a trust . In a trust, a principal from one Kerberos realm can request a ticket to a service in another Kerberos realm. Using this ticket, the principal can authenticate against resources on machines belonging to the other realm. Kerberos also has the ability to create a relationship between two otherwise separate Kerberos realms: a cross-realm trust . Realms that are part of a trust use a shared pair of a ticket and key; a member of one realm then counts as a member of both realms. Red Hat Identity Management supports configuring a cross-forest trust between an IdM domain and an Active Directory domain. 5.1. Introduction to Cross-forest Trusts A Kerberos realm only concerns authentication. Other services and protocols are involved in complementing identity and authorization for resources running on the machines in the Kerberos realm. As such, establishing a Kerberos cross-realm trust is not enough to allow users from one realm to access resources in the other realm; support is required at other levels of communication as well. 5.1.1. The Architecture of a Trust Relationship Both Active Directory and Identity Management manage a variety of core services such as Kerberos, LDAP, DNS, or certificate services. To transparently integrate these two diverse environments, all core services must interact seamlessly with one another. Active Directory Trusts, Forests, and Cross-forest Trusts Kerberos cross-realm trust plays an important role in authentication between Active Directory environments. All activities to resolve user and group names in a trusted AD domain require authentication, regardless of how access is performed: using the LDAP protocol or as part of the Distributed Computing Environment/Remote Procedure Calls (DCE/RPC) on top of the Server Message Block (SMB) protocol. Because there are more protocols involved in organizing access between two different Active Directory domains, the trust relationship has a more generic name, Active Directory trust . Multiple AD domains can be organized together into an Active Directory forest . A root domain of the forest is the first domain created in the forest. An Identity Management domain cannot be part of an existing AD forest, and is therefore always seen as a separate forest. When a trust relationship is established between two separate forest root domains, allowing users and services from different AD forests to communicate, the trust is called an Active Directory cross-forest trust . Trust Flow and One-way Trusts A trust establishes an access relationship between two domains. Active Directory environments can be complex, so there are different possible types and arrangements for Active Directory trusts, between child domains, root domains, or forests. A trust is a path from one domain to another. The way that identities and information move between the domains is called a trust flow . The trusted domain contains users, and the trusting domain allows access to resources.
In a one-way trust, trust flows only in one direction: users can access the trusting domain's resources but users in the trusting domain cannot access resources in the trusted domain. In Figure 5.1, "One-way Trust" , Domain A is trusted by Domain B, but Domain B is not trusted by Domain A. Figure 5.1. One-way Trust IdM allows the administrator to configure both one-way and two-way trusts. For details, see Section 5.1.4, "One-Way and Two-Way Trusts" . Transitive and Non-transitive Trusts Trusts can be transitive so that a domain trusts another domain and any other domain trusted by that second domain. Figure 5.2. Transitive Trusts Trusts can also be non-transitive which means the trust is limited only to the explicitly included domains. Cross-forest Trust in Active Directory and Identity Management Within an Active Directory forest, trust relationships between domains are normally two-way and transitive by default. Because trust between two AD forests is a trust between two forest root domains, it can also be two-way or one-way. The transitivity of the cross-forest trust is explicit: any domain trust within an AD forest that leads to the root domain of the forest is transitive over the cross-forest trust. However, separate cross-forest trusts are not transitive. An explicit cross-forest trust must be established between each AD forest root domain to another AD forest root domain. From the perspective of AD, Identity Management represents a separate AD forest with a single AD domain. When cross-forest trust between an AD forest root domain and an IdM domain is established, users from the AD forest domains can interact with Linux machines and services from the IdM domain. Figure 5.3. Trust Direction 5.1.2. Active Directory Security Objects and Trust Active Directory Global Catalog The global catalog contains information about objects of an Active Directory. It stores a full copy of objects within its own domain. From objects of other domains in the Active Directory forest, only a partial copy of the commonly most searched attributes is stored in the global catalog. Additionally, some types of groups are only valid within a specific scope and might not be part of the global catalog. Note that the cross-forest trust context is wider than a single domain. Therefore, some of these server-local or domain-local security group memberships from a trusted forest might not be visible to IdM servers. Global Catalog and POSIX Attributes Active Directory does not replicate POSIX attributes with its default settings. If it is required to use POSIX attributes that are defined in AD Red Hat strongly recommends to replicate them to the global catalog service. 5.1.3. Trust Architecture in IdM On the Identity Management side, the IdM server has to be able to recognize Active Directory identities and appropriately process their group membership for access controls. The Microsoft PAC (MS-PAC, Privilege Account Certificate) contains the required information about the user; their security ID, domain user name, and group memberships. Identity Management has two components to analyze data in the PAC on the Kerberos ticket: SSSD, to perform identity lookups on Active Directory and to retrieve user and group security identifiers (SIDs) for authorization. SSSD also caches user, group, and ticket information for users and maps Kerberos and DNS domains, Identity Management (Linux domain management), to associate the Active Directory user with an IdM group for IdM policies and access. 
Note Access control rules and policies for Linux domain administration, such as SELinux, sudo, and host-based access controls, are defined and applied through Identity Management. Any access control rules set on the Active Directory side are not evaluated or used by IdM; the only Active Directory configuration which is relevant is group membership. Trusts with Different Active Directory Forests IdM can also be part of trust relationships with different AD forests. Once a trust is established, additional trusts with other forests can be added later, following the same commands and procedures. IdM can trust multiple entirely unrelated forests at the same time, allowing users from such unrelated AD forests access to resources in the same shared IdM domain. 5.1.3.1. Active Directory PACs and IdM Tickets Group information in Active Directory is stored in a list of identifiers in the Privilege Attribute Certificate (MS-PAC or PAC) data set. The PAC contains various authorization information, such as group membership or additional credentials information. It also includes security identifiers (SIDs) of users and groups in the Active Directory domain. SIDs are identifiers assigned to Active Directory users and groups when they are created. In trust environments, group members are identified by SIDs, rather than by names or DNs. A PAC is embedded in the Kerberos service request ticket for Active Directory users as a way of identifying the entity to other Windows clients and servers in the Windows domain. IdM maps the group information in the PAC to the Active Directory groups and then to the corresponding IdM groups to determine access. When an Active Directory user requests a ticket for a service on IdM resources, the process goes as follows: The request for a service contains the PAC of the user. The IdM Kerberos Distribution Centre (KDC) analyzes the PAC by comparing the list of Active Directory groups to memberships in IdM groups. For SIDs of the Kerberos principal defined in the MS-PAC, the IdM KDC evaluates external group memberships defined in the IdM LDAP. If additional mappings are available for an SID, the MS-PAC record is extended with other SIDs of the IdM groups to which the SID belongs. The resulting MS-PAC is signed by the IdM KDC. The service ticket is returned to the user with the updated PAC signed by the IdM KDC. Users belonging to AD groups known to the IdM domain can now be recognized by SSSD running on the IdM clients based on the MS-PAC content of the service ticket. This allows to reduce identity traffic to discover group memberships by the IdM clients. When the IdM client evaluates the service ticket, the process includes the following steps: The Kerberos client libraries used in the evaluation process send the PAC data to the SSSD PAC responder. The PAC responder verifies the group SIDs in the PAC and adds the user to the corresponding groups in the SSSD cache. SSSD stores multiple TGTs and tickets for each user as new services are accessed. Users belonging to the verified groups can now access the required services on the IdM side. 5.1.3.2. Active Directory Users and Identity Management Groups When managing Active Directory users and groups, you can add individual AD users and whole AD groups to Identity Management groups. For a description of how to configure IdM groups for AD users, see Section 5.3.3, "Creating IdM Groups for Active Directory Users" . 
Non-POSIX External Groups and SID Mapping Group membership in the IdM LDAP is expressed by specifying a distinguished name (DN) of an LDAP object that is a member of a group. AD entries are not synchronized or copied over to IdM, which means that AD users and groups have no LDAP objects in the IdM LDAP. Therefore, they cannot be directly used to express group membership in the IdM LDAP. For this reason, IdM creates non-POSIX external groups : proxy LDAP objects that contain references to SIDs of AD users and groups as strings. Non-POSIX external groups are then referenced as normal IdM LDAP objects to signify group membership for AD users and groups in IdM. SIDs of non-POSIX external groups are processed by SSSD; SSSD maps SIDs of groups to which an AD user belongs to POSIX groups in IdM. The SIDs on the AD side are associated with user names. When the user name is used to access IdM resources, SSSD in IdM resolves that user name to its SID, and then looks up the information for that SID within the AD domain, as described in Section 5.1.3.1, "Active Directory PACs and IdM Tickets" . ID Ranges When a user is created in Linux, it is assigned a user ID number. In addition, a private group is created for the user. The private group ID number is the same as the user ID number. In Linux environment, this does not create a conflict. On Windows, however, the security ID number must be unique for every object in the domain. Trusted AD users require a UID and GID number on a Linux system. This UID and GID number can be generated by IdM, but if the AD entry already has UID and GID numbers assigned, assigning different numbers creates a conflict. To avoid such conflicts, it is possible to use the AD-defined POSIX attributes, including the UID and GID number and preferred login shell. Note AD stores a subset of information for all objects within the forest in a global catalog . The global catalog includes every entry for every domain in the forest. If you want to use AD-defined POSIX attributes, Red Hat strongly recommends that you first replicate the attributes to the global catalog. When a trust is created, IdM automatically detects what kind of ID range to use and creates a unique ID range for the AD domain added to the trust. You can also choose this manually by passing one of the following options to the ipa trust-add command: ipa-ad-trust This range option is used for IDs algorithmically generated by IdM based on the SID. If IdM generates the SIDs using SID-to-POSIX ID mapping, the ID ranges for AD and IdM users and groups must have unique, non-overlapping ID ranges available. ipa-ad-trust-posix This range option is used for IDs defined in POSIX attributes in the AD entry. IdM obtains the POSIX attributes, including uidNumber and gidNumber , from the global catalog in AD or from the directory controller. If the AD domain is managed correctly and without ID conflicts, the ID numbers generated in this way are unique. In this case, no ID validation or ID range is required. For example: Recreating a trust with the other ID range If the ID range of the created trust does not suit your deployment, you can re-create the trust using the other --range-type option: View all the ID ranges that are currently in use: In the list, identify the name of the ID range that was created by the ipa trust-add command. The first part of the name of the ID range is the name of the trust: name_of_the_trust _id_range, for example ad.example.com . 
(Optional) If you do not know which --range-type option, ipa-ad-trust or ipa-ad-trust-posix , was used when the trust was created, identify the option: Make note of the type so that you choose the opposite type for the new trust in Step 5. Remove the range that was created by the ipa trust-add command: Remove the trust: Create a new trust with the correct --range-type option. For example: 5.1.3.3. Active Directory Users and IdM Policies and Configuration Several IdM policy definitions, such as SELinux, host-based access control, sudo, and netgroups, rely on user groups to identify how the policies are applied. Figure 5.4. Active Directory Users and IdM Groups and Policies Active Directory users are external to the IdM domain, but they can still be added as group members to IdM groups, as long as those groups are configured as external groups described in Section 5.1.3.2, "Active Directory Users and Identity Management Groups" . In such cases, the sudo, host-based access controls, and other policies are applied to the external POSIX group and, ultimately, to the AD user when accessing IdM domain resources. The user SID in the PAC in the ticket is resolved to the AD identity. This means that Active Directory users can be added as group members using their fully-qualified user name or their SID. 5.1.4. One-Way and Two-Way Trusts IdM supports two types of trust agreements, depending on whether the entities that can establish a connection to services in IdM are limited to only AD or can include IdM entities as well. One-way trust One-way trust enables AD users and groups to access resources in IdM, but not the other way around. The IdM domain trusts the AD forest, but the AD forest does not trust the IdM domain. One-way trust is the default mode for creating a trust. Two-way trust Two-way trust enables AD users and groups to access resources in IdM. You must configure a two-way trust for solutions such as Microsoft SQL Server that expect the S4U2Self and S4U2Proxy Microsoft extensions to the Kerberos protocol to work over a trust boundary. An application on a RHEL IdM host might request S4U2Self or S4U2Proxy information from an Active Directory domain controller about an AD user, and a two-way trust provides this feature. Note that this two-way trust functionality does not allow IdM users to log in to Windows systems, and the two-way trust in IdM does not give the users any additional rights compared to the one-way trust solution in AD. For more general information on one-way and two-way trusts, see Section 5.1.1, "The Architecture of a Trust Relationship" . After a trust is established, it is not possible to modify its type. If you require a different type of trust, run the ipa trust-add command again; by doing this, you can delete the existing trust and establish a new one. 5.1.5. External Trusts to Active Directory An external trust is a trust relationship between domains that are in different forests. While forest trusts must always be established between the root domains of Active Directory forests, you can establish an external trust to any domain within the forest. External trusts are non-transitive. For this reason, users and groups from other Active Directory domains have no access to IdM resources. For further information, see the section called "Transitive and Non-transitive Trusts" . 5.1.6.
Trust Controllers and Trust Agents IdM provides the following types of IdM servers that support trust to Active Directory: Trust controllers IdM servers that can control the trust and perform identity lookups against Active Directory domain controllers (DC). Active Directory domain controllers contact trust controllers when establishing and verifying the trust to Active Directory. The first trust controller is created when you configure the trust. For details about configuring an IdM server as a trust controller, see Section 5.2.2, "Creating Trusts" . Trust controllers run an increased amount of network-facing services compared to trust agents, and thus present a greater attack surface for potential intruders. Trust agents IdM servers that can perform identity lookups against Active Directory domain controllers. For details about configuring an IdM server as a trust agent, see Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . In addition to trust controllers and agents, the IdM domain can also include replicas without any role. However, these servers do not communicate with Active Directory. Therefore, clients that communicate with these servers cannot resolve Active Directory users and groups or authenticate and authorize Active Directory users. Table 5.1. A comparison of the capabilities provided by trust controllers and trust agents Capability Trust controllers Trust agents Resolve Active Directory users and groups Yes Yes Enroll IdM clients that run services accessible by users from trusted Active Directory forests Yes Yes Manage the trust (for example, add trust agreements) Yes No When planning the deployment of trust controllers and trust agents, consider these guidelines: Configure at least two trust controllers per Identity Management deployment. Configure at least two trust controllers in each data center. If you ever want to create additional trust controllers or if an existing trust controller fails, create a new trust controller by promoting a trust agent or a replica. To do this, use the ipa-adtrust-install utility on the IdM server as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . Important You cannot downgrade an existing trust controller to a trust agent. The trust controller server role, once installed, cannot be removed from the topology. | [
"ipa trust-add name_of_the_trust --range-type=ipa-ad-trust-posix",
"ipa idrange-find",
"ipa idrange-show name_of_the_trust_id_range",
"ipa idrange-del name_of_the_trust_id_range",
"ipa trust-del name_of_the_trust",
"ipa trust-add name_of_the_trust --range-type=ipa-ad-trust"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/active-directory-trust |
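A hedged follow-up sketch, not part of the original chapter: after a trust is established, commands like the following are one way to confirm the trust object, inspect its ID range, and verify AD user resolution from an IdM server. The realm ad.example.com and the user names are placeholders only.
ipa trust-show ad.example.com
ipa idrange-find
id administrator@ad.example.com
getent passwd aduser@ad.example.com
If the id and getent lookups return the expected UID, GID, and group memberships, SSSD on the IdM side is resolving trusted AD identities as described in Section 5.1.3.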
Chapter 32. Sorting column values in guided decision tables | Chapter 32. Sorting column values in guided decision tables You can sort the values in columns that you created in a guided decision table. Prerequisites You created the required columns in a guided decision table. Procedure Double-click a column header that you want to sort in ascending order. To sort the values of the same column in descending order, double-click the column header again. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/proc-guided-decision-tables-columns-sort_guided-decision-tables |
Chapter 1. Validating an installation | Chapter 1. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 1.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: USD cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 1.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: USD oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example. 
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 1.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: USD oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: USD oc describe clusterversion List the current update channel: USD oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: USD oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster using the web console for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 1.4. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: USD oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster. 
Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: USD oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: USD oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 1.5. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.29.4 control-plane-1.example.com Ready master 41m v1.29.4 control-plane-2.example.com Ready master 45m v1.29.4 compute-2.example.com Ready worker 38m v1.29.4 compute-3.example.com Ready worker 33m v1.29.4 control-plane-3.example.com Ready master 41m v1.29.4 Review CPU and memory resource availability for each cluster node: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 1.6. Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home Overview . 1.7.
Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You are logged in to OpenShift Cluster Manager . You have access to the cluster as a user with the cluster-admin role. Procedure Go to the Cluster List list in OpenShift Cluster Manager and locate your OpenShift Container Platform cluster. Click the Overview tab for your cluster. Review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 1.8. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 1.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. 1.9. Listing alerts that are firing Alerts provide notifications when a set of defined conditions are true in an OpenShift Container Platform cluster. You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure In the Administrator perspective, navigate to the Observe Alerting Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts as an Administrator for further details about alerting in OpenShift Container Platform. 1.10. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster . | [
"cat <install_dir>/.openshift_install.log",
"time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"",
"oc adm node-logs <node_name> -u crio",
"Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"",
"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4",
"oc get clusteroperators.config.openshift.io",
"oc describe clusterversion",
"oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'",
"{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}",
"oc adm upgrade",
"Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"Manual",
"oc get secrets -n kube-system <secret_name>",
"Error from server (NotFound): secrets \"aws-creds\" not found",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'",
"oc get pods -n openshift-cloud-credential-operator",
"NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.29.4 control-plane-1.example.com Ready master 41m v1.29.4 control-plane-2.example.com Ready master 45m v1.29.4 compute-2.example.com Ready worker 38m v1.29.4 compute-3.example.com Ready worker 33m v1.29.4 control-plane-3.example.com Ready master 41m v1.29.4",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/validation_and_troubleshooting/validating-an-installation |
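A minimal sketch of the ImageContentSourcePolicy object that could produce the mirrored pull behavior shown in the CRI-O log example, assuming the two mirror registry hosts from that output; the object name and repository paths are illustrative, not taken from the original procedure.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example-release-mirrors
spec:
  repositoryDigestMirrors:
  - mirrors:
    - li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release
    - li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release
    source: quay.io/openshift-release-dev/ocp-release
With more than one entry under mirrors, the images are tried in the order listed, which matches the repeated Trying to access lines in the log.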
Images | Images OpenShift Container Platform 4.15 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/images/index |
5.72. fprintd | 5.72.1. RHBA-2012:0912 - fprintd bug fix update Updated fprintd packages that fix one bug are now available for Red Hat Enterprise Linux 6. The fprintd packages contain a D-Bus service to access fingerprint readers. Bug Fix BZ# 665837 Previously, if no USB support was available on a machine (for example, virtual machines on a hypervisor that disabled USB support for guests), the fprintd daemon received the SIGABRT signal, and therefore terminated abnormally. Such crashes did not cause any system failure; however, the Automatic Bug Reporting Tool (ABRT) was alerted every time. With this update, the underlying code has been modified so that the fprintd daemon now exits gracefully on machines with no USB support. All users of fprintd are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/fprintd
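A hedged verification sketch, not part of the original erratum: on a Red Hat Enterprise Linux 6 system you could apply and confirm the update with the standard yum and rpm commands below. The fprintd-pam subpackage name is an assumption; the version reported by rpm should correspond to the RHBA-2012:0912 build.
yum update fprintd fprintd-pam
rpm -q fprintd fprintd-pam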
3.6. Testing the Resource Configuration | 3.6. Testing the Resource Configuration You can validate your system configuration with the following procedure. You should be able to mount the exported file system with either NFSv3 or NFSv4. On a node outside of the cluster, residing in the same network as the deployment, verify that the NFS share can be seen by mounting the NFS share. For this example, we are using the 192.168.122.0/24 network. To verify that you can mount the NFS share with NFSv4, mount the NFS share to a directory on the client node. After mounting, verify that the contents of the export directories are visible. Unmount the share after testing. Verify that you can mount the NFS share with NFSv3. After mounting, verify that the test file clientdatafile1 is visible. Unlike NFSv4, NFSv3 does not use the virtual file system, so you must mount a specific export. Unmount the share after testing. To test for failover, perform the following steps. On a node outside of the cluster, mount the NFS share and verify access to the clientdatafile1 file that we created in Section 3.3, "NFS Share Setup" . From a node within the cluster, determine which node in the cluster is running nfsgroup . In this example, nfsgroup is running on z1.example.com . From a node within the cluster, put the node that is running nfsgroup in standby mode. Verify that nfsgroup successfully starts on the other cluster node. From the node outside the cluster on which you have mounted the NFS share, verify that this outside node still has access to the test file within the NFS mount. Service is lost briefly for the client during the failover, but the client should recover with no user intervention. By default, clients using NFSv4 may take up to 90 seconds to recover the mount; this 90 seconds represents the NFSv4 file lease grace period observed by the server on startup. NFSv3 clients should recover access to the mount in a matter of a few seconds. From a node within the cluster, remove the node that was initially running nfsgroup from standby mode. This will not in itself move the cluster resources back to this node. Note Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information on the resource-stickiness meta attribute, see Configuring a Resource to Prefer its Current Node in the Red Hat High Availability Add-On Reference . | [
"showmount -e 192.168.122.200 Export list for 192.168.122.200: /nfsshare/exports/export1 192.168.122.0/255.255.255.0 /nfsshare/exports 192.168.122.0/255.255.255.0 /nfsshare/exports/export2 192.168.122.0/255.255.255.0",
"mkdir nfsshare mount -o \"vers=4\" 192.168.122.200:export1 nfsshare ls nfsshare clientdatafile1 umount nfsshare",
"mkdir nfsshare mount -o \"vers=3\" 192.168.122.200:/nfsshare/exports/export2 nfsshare ls nfsshare clientdatafile2 umount nfsshare",
"mkdir nfsshare mount -o \"vers=4\" 192.168.122.200:export1 nfsshare ls nfsshare clientdatafile1",
"pcs status Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM): Started z1.example.com nfsshare (ocf::heartbeat:Filesystem): Started z1.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z1.example.com nfs-root (ocf::heartbeat:exportfs): Started z1.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z1.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z1.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z1.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z1.example.com",
"pcs node standby z1.example.com",
"pcs status Full list of resources: Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM): Started z2.example.com nfsshare (ocf::heartbeat:Filesystem): Started z2.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z2.example.com nfs-root (ocf::heartbeat:exportfs): Started z2.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z2.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z2.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z2.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z2.example.com",
"ls nfsshare clientdatafile1",
"pcs node unstandby z1.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-unittestnfs-haaa |
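A small client-side sketch, not part of the original procedure, for observing the failover from the node where the share is mounted. It assumes the nfsshare mount directory and the clientdatafile1 test file used above, and simply reports whether the file stays reachable while the cluster node is placed in standby.
while true; do
  date
  ls nfsshare/clientdatafile1 || echo "NFS share not responding"
  sleep 5
done
During the failover you would expect at most a brief window of "NFS share not responding" messages before access resumes without any manual intervention on the client.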
Chapter 26. Associating secondary interfaces metrics to network attachments | Chapter 26. Associating secondary interfaces metrics to network attachments 26.1. Extending secondary network metrics for monitoring Secondary devices, or interfaces, are used for different purposes. It is important to have a way to classify them to be able to aggregate the metrics for secondary devices with the same classification. Exposed metrics contain the interface but do not specify where the interface originates. This is workable when there are no additional interfaces. However, if secondary interfaces are added, it can be difficult to use the metrics since it is hard to identify interfaces using only interface names. When adding secondary interfaces, their names depend on the order in which they are added, and different secondary interfaces might belong to different networks and can be used for different purposes. With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms to specific interface types. The network type is generated using the name of the related NetworkAttachmentDefinition , that in turn is used to differentiate different classes of secondary networks. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names. 26.1.1. Network Metrics Daemon The Network Metrics Daemon is a daemon component that collects and publishes network related metrics. The kubelet is already publishing network related metrics you can observe. These metrics are: container_network_receive_bytes_total container_network_receive_errors_total container_network_receive_packets_total container_network_receive_packets_dropped_total container_network_transmit_bytes_total container_network_transmit_errors_total container_network_transmit_packets_total container_network_transmit_packets_dropped_total The labels in these metrics contain, among others: Pod name Pod namespace Interface name (such as eth0 ) These metrics work well until new interfaces are added to the pod, for example via Multus , as it is not clear what the interface names refer to. The interface label refers to the interface name, but it is not clear what that interface is meant for. In case of many different interfaces, it would be impossible to understand what network the metrics you are monitoring refer to. This is addressed by introducing the new pod_network_name_info described in the following section. 26.1.2. Metrics with network name This daemonset publishes a pod_network_name_info gauge metric, with a fixed value of 0 : pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0 The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition. The new metric alone does not provide much value, but combined with the network related container_network_* metrics, it offers better support for monitoring secondary networks. 
Using PromQL queries like the following, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/network-status annotation:
(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
(container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) | [
"pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0",
"(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/associating-secondary-interfaces-metrics-to-network-attachments |
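Because the join attaches a stable network_name label, the same approach extends naturally to aggregation and alerting per network attachment, which is exactly the use case the chapter above motivates. The query below is an illustrative sketch only; the five-minute rate window and the choice of the receive-bytes counter are arbitrary, and it reuses the same assumed access pattern (thanos-querier route plus session token) as the earlier sketch.
# Per-network receive throughput, summed across all pods that use the attachment.
QUERY='sum by (network_name) (
  rate(container_network_receive_bytes_total[5m])
  + on(namespace,pod,interface) group_left(network_name) pod_network_name_info
)'
curl -sk -H "Authorization: Bearer $(oc whoami -t)" \
  "https://$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')/api/v1/query" \
  --data-urlencode "query=${QUERY}"
Adding pod_network_name_info (a constant 0) rather than multiplying keeps the rate values intact while still copying the network_name label onto the result, mirroring the join style used in the queries above.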
Red Hat Data Grid | Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure: Flexibility to store different objects as key-value pairs. Grid-based data storage: Designed to distribute and replicate data across clusters. Elastic scaling: Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability: Store, retrieve, and query data in the grid from different endpoints. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_node.js_client_guide/red-hat-data-grid
Chapter 12. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure | Chapter 12. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.17, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the GCP APIs. The steps for performing a user-provided infrastructure install are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 12.2. About installations in restricted networks In OpenShift Container Platform 4.17, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 
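As a rough illustration of that mirroring step, the commands below sketch how the release payload might be copied into a registry on the mirror host with oc adm release mirror. This is not the full mirroring procedure (see the separate mirror registry documentation); the registry host name, repository path, release version, and pull secret file name are all placeholder values, and the pull secret must already contain credentials for both quay.io and the mirror registry.
# Placeholder values for the mirror registry and the release to mirror.
LOCAL_REGISTRY='mirror.registry.example.com:5000'
LOCAL_REPOSITORY='ocp4/openshift4'
OCP_RELEASE='4.17.0-x86_64'
# Copy the release images into the mirror registry; the command also prints
# the imageContentSources snippet to record for install-config.yaml.
oc adm release mirror -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}
Keep the printed imageContentSources output; it is needed again when the install-config.yaml file is customized later in this chapter.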
Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 12.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 12.4. Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 12.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 12.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 12.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 12.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 12.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. 
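The hosted zone can also be created from the command line instead of the console. The following sketch is only an assumption-laden illustration, not the documented procedure: the zone name example-zone and the domain openshiftcorp.com are placeholders, and it presumes the gcloud CLI is already authenticated against the project that will host the cluster.
# Make sure the Cloud DNS API is enabled in the cluster's project.
gcloud services enable dns.googleapis.com
# Create a public hosted zone for the base domain (note the trailing dot).
gcloud dns managed-zones create example-zone \
  --dns-name=openshiftcorp.com. \
  --description="Public zone for OpenShift Container Platform" \
  --visibility=public
# Print the authoritative name servers to register with the domain registrar.
gcloud dns managed-zones describe example-zone --format="value(nameServers)"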
This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 12.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 12.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 12.4.5. 
Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 12.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin Tag User Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The following roles are applied to the service accounts that the control plane and compute machines use: Table 12.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin roles/artifactregistry.reader 12.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 12.1. 
Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.globalAddresses.create compute.globalAddresses.get compute.globalAddresses.use compute.globalForwardingRules.create compute.globalForwardingRules.get compute.globalForwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.networks.use compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 12.2. Required permissions for creating load balancer resources compute.backendServices.create compute.backendServices.get compute.backendServices.list compute.backendServices.update compute.backendServices.use compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use compute.targetTcpProxies.create compute.targetTcpProxies.get compute.targetTcpProxies.use Example 12.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 12.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 12.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 12.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 12.7. 
Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly compute.regionHealthChecks.create compute.regionHealthChecks.get compute.regionHealthChecks.useReadOnly Example 12.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 12.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 12.10. Required IAM permissions for installation iam.roles.get Example 12.11. Required permissions when authenticating without a service account key iam.serviceAccounts.signBlob Example 12.12. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 12.13. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 12.14. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.addresses.setLabels compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.globalAddresses.delete compute.globalAddresses.list compute.globalForwardingRules.delete compute.globalForwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 12.15. Required permissions for deleting load balancer resources compute.backendServices.delete compute.backendServices.list compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list compute.targetTcpProxies.delete compute.targetTcpProxies.list Example 12.16. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 12.17. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 12.18. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 12.19. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 12.20. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list compute.regionHealthChecks.delete compute.regionHealthChecks.list Example 12.21. Required Images permissions for deletion compute.images.delete compute.images.list Example 12.22. Required permissions to get Region related information compute.regions.get Example 12.23. 
Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list Additional resources Optimizing storage 12.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: africa-south1 (Johannesburg, South Africa) asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-central2 (Dammam, Saudi Arabia, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 12.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 12.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. 
The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 12.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 12.24. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D N4 Tau T2D 12.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 12.6. 
Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 12.6.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 12.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 12.6.3. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. 
Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 12.6.4. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 12.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.6.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
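In that recovery situation, approving the pending node-bootstrapper CSRs usually comes down to a couple of oc commands; the lines below are a small sketch rather than the full recovery procedure referenced in the next sentence, and they assume a kubeconfig with cluster-admin access.
# Inspect certificate signing requests and look for entries in Pending state.
oc get csr
# Approve every CSR that has not been handled yet (repeat if new CSRs appear).
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve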
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records 12.7. Exporting common variables 12.7.1. 
Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 12.7.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 12.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 
4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 12.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 12.25. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 12.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 12.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 12.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 12.7. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 12.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 12.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 
4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 12.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 12.26. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 12.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 12.27. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' 
+ context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 12.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 12.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 12.28. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 12.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the firewall rules that your cluster requires. Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml 12.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 12.29.
03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 12.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. 
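The 03_iam.py template creates only the two service accounts; the role bindings are attached afterward with gcloud because of the Deployment Manager limitation that the procedure below describes. After you complete the deployment and the policy bindings, you can optionally confirm the roles granted to the control plane service account. The following check is a sketch that assumes you exported PROJECT_NAME and MASTER_SERVICE_ACCOUNT as shown later in this procedure:
USD gcloud projects get-iam-policy USD{PROJECT_NAME} --flatten="bindings[].members" --filter="bindings.members:serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --format="table(bindings.role)"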
Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 12.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 12.30. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 12.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 12.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Ensure you installed pyOpenSSL. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
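The bootstrap Ignition bucket is not made public; instead, the bootstrap VM fetches bootstrap.ign through a time-limited signed URL that gsutil signurl generates with the service account key you created in the IAM section (the -d 1h option in the next command limits the URL lifetime to one hour). If you want to confirm that the object uploaded correctly before signing, you can list it; this is a simple sketch:
USD gsutil ls -l gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign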
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 12.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 12.31. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 12.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , Creating IAM roles in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
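If you want to confirm the instance group names before adding members, you can list the groups that the internal load balancer template created. This check is a sketch and assumes the USD{INFRA_ID}-master-<zone>-ig naming used in 02_lb_int.py:
USD gcloud compute instance-groups list --filter="name~^USD{INFRA_ID}-master"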
Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 12.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 12.32. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 
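# The third control plane instance mirrors -master-0 and -master-1: a pd-ssd boot disk built from the RHCOS image, Ignition passed through the user-data metadata key, a NIC on the control subnet, and the same service account scopes; only the zone index differs.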
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 12.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 12.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. 
Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 1 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 12.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 12.33. 
06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 12.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You installed the oc CLI. Ensure the bootstrap process completed successfully. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.20. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 12.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
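While you wait for new machines to request certificates, you can watch CSRs arrive as each node boots; this is a minimal sketch that uses the standard watch flag:
USD oc get csr -w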
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 12.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Ensure you defined the variables in the Exporting common variables section. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Ensure the bootstrap process completed successfully. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 12.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Ensure the bootstrap process completed successfully. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 
1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... 
openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 12.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.25. Next steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to Use Operator Lifecycle Manager in disconnected environments . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster | [
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`",
"cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}",
"export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)",
"export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)",
"export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)",
"export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)",
"cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml",
"export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)",
"export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}",
"def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}",
"cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}",
"cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}",
"cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml",
"export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
"gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}",
"gsutil mb gs://<bucket_name>",
"gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>",
"export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz",
"gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"",
"export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)",
"gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition",
"gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/",
"export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`",
"cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap",
"gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign`",
"cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1",
"gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1",
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}",
"gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign",
"gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition",
"gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap",
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)",
"export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign`",
"cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF",
"gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml",
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98",
"export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone",
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installing-restricted-networks-gcp |
Chapter 6. Customizing the web console in OpenShift Container Platform | Chapter 6. Customizing the web console in OpenShift Container Platform You can customize the OpenShift Container Platform web console to set a custom logo, product name, links, notifications, and command line downloads. This is especially helpful if you need to tailor the web console to meet specific corporate or government requirements. 6.1. Adding a custom logo and product name You can create custom branding by adding a custom logo or custom product name. You can set both or one without the other, as these settings are independent of each other. Prerequisites You must have administrator privileges. Create a file of the logo that you want to use. The logo can be a file in any common image format, including GIF, JPG, PNG, or SVG, and is constrained to a max-height of 60px . Procedure Import your logo file into a config map in the openshift-config namespace: USD oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1 1 Provide a valid base64-encoded logo. Edit the web console's Operator configuration to include customLogoFile and customProductName : USD oc edit consoles.operator.openshift.io cluster apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console Once the Operator configuration is updated, it will sync the custom logo config map into the console namespace, mount it to the console pod, and redeploy. Check for success. If there are any issues, the console cluster Operator will report a Degraded status, and the console Operator configuration will also report a CustomLogoDegraded status, but with reasons like KeyOrFilenameInvalid or NoImageProvided . To check the clusteroperator , run: USD oc get clusteroperator console -o yaml To check the console Operator configuration, run: USD oc get consoles.operator.openshift.io -o yaml 6.2. Creating custom links in the web console Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleLink . Select Instances tab Click Create Console Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1 1 Valid location settings are HelpMenu , UserMenu , ApplicationMenu , and NamespaceDashboard . 
To make the custom link appear in all namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces To make the custom link appear in only some namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called "Launcher" under "namespace" or "project" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace To make the custom link appear in the application menu, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24 Click Save to apply your changes. 6.3. Customizing console routes For console and downloads routes, custom routes functionality uses the ingress config route configuration API. If the console custom route is set up in both the ingress config and console-operator config, then the new ingress config custom route configuration takes precedence. The route configuration with the console-operator config is deprecated. 6.3.1. Customizing the console route You can customize the console route by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command.
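For example, a minimal sketch of creating such a secret might look like the following command, where the secret name console-custom-certs and the certificate and key file paths are illustrative placeholders rather than values defined by this procedure: USD oc create secret tls console-custom-certs --cert=tls.crt --key=tls.key -n openshift-config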
Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 6.4. Customizing the login page Create Terms of Service information with custom login pages. Custom login pages can also be helpful if you use a third-party login provider, such as GitHub or Google, to show users a branded page that they trust and expect before being redirected to the authentication provider. You can also render custom error pages during the authentication process. Note Customizing the error template is limited to identity providers (IDPs) that use redirects, such as request header and OIDC-based IDPs. It does not have an effect on IDPs that use direct password authentication, such as LDAP and htpasswd. Prerequisites You must have administrator privileges. Procedure Run the following commands to create templates you can modify: USD oc adm create-login-template > login.html USD oc adm create-provider-selection-template > providers.html USD oc adm create-error-template > errors.html Create the secrets: USD oc create secret generic login-template --from-file=login.html -n openshift-config USD oc create secret generic providers-template --from-file=providers.html -n openshift-config USD oc create secret generic error-template --from-file=errors.html -n openshift-config Run: USD oc edit oauths cluster Update the specification: apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster # ... spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template Run oc explain oauths.spec.templates to understand the options. 6.5. Defining a template for an external log link If you are connected to a service that helps you browse your logs, but you need to generate URLs in a particular way, then you can define a template for your link. Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleExternalLogLink . Select Instances tab Click Create Console External Log Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs 6.6. Creating custom notification banners Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleNotification . Select Instances tab Click Create Console Notification and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. 
location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce' 1 Valid location settings are BannerTop , BannerBottom , and BannerTopBottom . Click Create to apply your changes. 6.7. Customizing CLI downloads You can configure links for downloading the CLI with custom link text and URLs, which can point directly to file packages or to an external page that provides the packages. Prerequisites You must have administrator privileges. Procedure Navigate to Administration Custom Resource Definitions . Select ConsoleCLIDownload from the list of Custom Resource Definitions (CRDs). Click the YAML tab, and then make your edits: apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links spec: description: | This is an example of download links displayName: example links: - href: 'https://www.example.com/public/example.tar' text: example for linux - href: 'https://www.example.com/public/example.mac.zip' text: example for mac - href: 'https://www.example.com/public/example.win.zip' text: example for windows Click the Save button. 6.8. Adding YAML examples to Kubernetes resources You can dynamically add YAML examples to any Kubernetes resources at any time. Prerequisites You must have cluster administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleYAMLSample . Click YAML and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - "bin/bash" - "-c" - "for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done" restartPolicy: Never Use spec.snippet to indicate that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. Click Save . | [
"oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1",
"oc edit consoles.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console",
"oc get clusteroperator console -o yaml",
"oc get consoles.operator.openshift.io -o yaml",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called \"Launcher\" under \"namespace\" or \"project\" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"oc adm create-login-template > login.html",
"oc adm create-provider-selection-template > providers.html",
"oc adm create-error-template > errors.html",
"oc create secret generic login-template --from-file=login.html -n openshift-config",
"oc create secret generic providers-template --from-file=providers.html -n openshift-config",
"oc create secret generic error-template --from-file=errors.html -n openshift-config",
"oc edit oauths cluster",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template",
"apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs",
"apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce'",
"apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links spec: description: | This is an example of download links displayName: example links: - href: 'https://www.example.com/public/example.tar' text: example for linux - href: 'https://www.example.com/public/example.mac.zip' text: example for mac - href: 'https://www.example.com/public/example.win.zip' text: example for windows",
"apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - \"bin/bash\" - \"-c\" - \"for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done\" restartPolicy: Never"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/web_console/customizing-web-console |
Chapter 2. AMQ Streams installation methods | Chapter 2. AMQ Streams installation methods You can install AMQ Streams on OpenShift 4.12 and later in two ways. Installation method Description Installation artifacts (YAML files) Download Red Hat AMQ Streams 2.5 OpenShift Installation and Example Files from the AMQ Streams software downloads page . Deploy the YAML installation artifacts to your OpenShift cluster using oc . You start by deploying the Cluster Operator from install/cluster-operator to a single namespace, multiple namespaces, or all namespaces. You can also use the install/ artifacts to deploy the following: AMQ Streams administrator roles ( strimzi-admin ) A standalone Topic Operator ( topic-operator ) A standalone User Operator ( user-operator ) AMQ Streams Drain Cleaner ( drain-cleaner ) OperatorHub Use the AMQ Streams operator in the OperatorHub to deploy AMQ Streams to a single namespace or all namespaces. For the greatest flexibility, choose the installation artifacts method. The OperatorHub method provides a standard configuration and allows you to take advantage of automatic updates. Note Installation of AMQ Streams using Helm is not supported. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/con-streams-installation-methods_str |
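As a brief sketch of the installation artifacts method described in the AMQ Streams chapter above (an illustration rather than an official step from that chapter): after downloading and extracting the installation and example files, the Cluster Operator is typically deployed with oc from the install/cluster-operator directory, for example USD oc create -f install/cluster-operator -n kafka , where the kafka namespace is an assumption made here for illustration; before applying the files, the RoleBinding files under install/cluster-operator typically need their namespace references updated to match the namespace that the Cluster Operator is installed into.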
Chapter 19. consistency | Chapter 19. consistency This chapter describes the commands under the consistency command. 19.1. consistency group add volume Add volume(s) to consistency group Usage: Table 19.1. Positional arguments Value Summary <consistency-group> Consistency group to contain <volume> (name or id) <volume> Volume(s) to add to <consistency-group> (name or id) (repeat option to add multiple volumes) Table 19.2. Command arguments Value Summary -h, --help Show this help message and exit 19.2. consistency group create Create new consistency group. Usage: Table 19.3. Positional arguments Value Summary <name> Name of new consistency group (default to none) Table 19.4. Command arguments Value Summary -h, --help Show this help message and exit --volume-type <volume-type> Volume type of this consistency group (name or id) --consistency-group-source <consistency-group> Existing consistency group (name or id) --consistency-group-snapshot <consistency-group-snapshot> Existing consistency group snapshot (name or id) --description <description> Description of this consistency group --availability-zone <availability-zone> Availability zone for this consistency group (not available if creating consistency group from source) Table 19.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 19.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 19.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 19.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.3. consistency group delete Delete consistency group(s). Usage: Table 19.9. Positional arguments Value Summary <consistency-group> Consistency group(s) to delete (name or id) Table 19.10. Command arguments Value Summary -h, --help Show this help message and exit --force Allow delete in state other than error or available 19.4. consistency group list List consistency groups. Usage: Table 19.11. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show details for all projects. admin only. (defaults to False) --long List additional fields in output Table 19.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 19.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 19.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 19.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.5. consistency group remove volume Remove volume(s) from consistency group Usage: Table 19.16. Positional arguments Value Summary <consistency-group> Consistency group containing <volume> (name or id) <volume> Volume(s) to remove from <consistency-group> (name or ID) (repeat option to remove multiple volumes) Table 19.17. Command arguments Value Summary -h, --help Show this help message and exit 19.6. consistency group set Set consistency group properties Usage: Table 19.18. Positional arguments Value Summary <consistency-group> Consistency group to modify (name or id) Table 19.19. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New consistency group name --description <description> New consistency group description 19.7. consistency group show Display consistency group details. Usage: Table 19.20. Positional arguments Value Summary <consistency-group> Consistency group to display (name or id) Table 19.21. Command arguments Value Summary -h, --help Show this help message and exit Table 19.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 19.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 19.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 19.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.8. consistency group snapshot create Create new consistency group snapshot. Usage: Table 19.26. Positional arguments Value Summary <snapshot-name> Name of new consistency group snapshot (default to None) Table 19.27. Command arguments Value Summary -h, --help Show this help message and exit --consistency-group <consistency-group> Consistency group to snapshot (name or id) (default to be the same as <snapshot-name>) --description <description> Description of this consistency group snapshot Table 19.28. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 19.29. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 19.30. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 19.31. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.9. consistency group snapshot delete Delete consistency group snapshot(s). 
Usage: Table 19.32. Positional arguments Value Summary <consistency-group-snapshot> Consistency group snapshot(s) to delete (name or id) Table 19.33. Command arguments Value Summary -h, --help Show this help message and exit 19.10. consistency group snapshot list List consistency group snapshots. Usage: Table 19.34. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show detail for all projects (admin only) (defaults to False) --long List additional fields in output --status <status> Filters results by a status ("available", "error", "creating", "deleting" or "error_deleting") --consistency-group <consistency-group> Filters results by a consistency group (name or id) Table 19.35. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 19.36. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 19.37. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 19.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.11. consistency group snapshot show Display consistency group snapshot details Usage: Table 19.39. Positional arguments Value Summary <consistency-group-snapshot> Consistency group snapshot to display (name or id) Table 19.40. Command arguments Value Summary -h, --help Show this help message and exit Table 19.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 19.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 19.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 19.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack consistency group add volume [-h] <consistency-group> <volume> [<volume> ...]",
"openstack consistency group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] (--volume-type <volume-type> | --consistency-group-source <consistency-group> | --consistency-group-snapshot <consistency-group-snapshot>) [--description <description>] [--availability-zone <availability-zone>] [<name>]",
"openstack consistency group delete [-h] [--force] <consistency-group> [<consistency-group> ...]",
"openstack consistency group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--long]",
"openstack consistency group remove volume [-h] <consistency-group> <volume> [<volume> ...]",
"openstack consistency group set [-h] [--name <name>] [--description <description>] <consistency-group>",
"openstack consistency group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group>",
"openstack consistency group snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--consistency-group <consistency-group>] [--description <description>] [<snapshot-name>]",
"openstack consistency group snapshot delete [-h] <consistency-group-snapshot> [<consistency-group-snapshot> ...]",
"openstack consistency group snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--long] [--status <status>] [--consistency-group <consistency-group>]",
"openstack consistency group snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group-snapshot>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/consistency |
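The usage strings above can be combined into an end-to-end workflow. The following sketch is illustrative only: the volume type, group, volume, and snapshot names are placeholders, and it assumes a Block Storage (cinder) backend with consistency group support enabled for the chosen volume type.
# Create a consistency group for a volume type that supports it (placeholder names).
openstack consistency group create --volume-type thin-lvm --description "nightly batch volumes" cg-batch
# Add an existing volume, then snapshot the whole group at once.
openstack consistency group add volume cg-batch vol-db01
openstack consistency group snapshot create --consistency-group cg-batch cg-batch-snap-01
# Inspect the snapshot, then clean up.
openstack consistency group snapshot show cg-batch-snap-01
openstack consistency group snapshot delete cg-batch-snap-01
openstack consistency group remove volume cg-batch vol-db01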
Using Streams for Apache Kafka on RHEL with ZooKeeper | Using Streams for Apache Kafka on RHEL with ZooKeeper Red Hat Streams for Apache Kafka 2.7 Configure and manage a deployment of Streams for Apache Kafka 2.7 on Red Hat Enterprise Linux | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/index |
Chapter 26. Delegating permissions to user groups to manage users using IdM WebUI | Chapter 26. Delegating permissions to user groups to manage users using IdM WebUI Delegation is one of the access control methods in IdM, along with self-service rules and role-based access control (RBAC). You can use delegation to assign permissions to one group of users to manage entries for another group of users. This section covers the following topics: Delegation rules Creating a delegation rule using IdM WebUI Viewing existing delegation rules using IdM WebUI Modifying a delegation rule using IdM WebUI Deleting a delegation rule using IdM WebUI 26.1. Delegation rules You can delegate permissions to user groups to manage users by creating delegation rules . Delegation rules allow a specific user group to perform write (edit) operations on specific attributes for users in another user group. This form of access control rule is limited to editing the values of a subset of attributes you specify in a delegation rule; it does not grant the ability to add or remove whole entries or control over unspecified attributes. Delegation rules grant permissions to existing user groups in IdM. You can use delegation to, for example, allow the managers user group to manage selected attributes of users in the employees user group. 26.2. Creating a delegation rule using IdM WebUI Follow this procedure to create a delegation rule using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . Click Add . In the Add delegation window, do the following: Name the new delegation rule. Set the permissions by selecting the check boxes that indicate whether users will have the right to view the given attributes ( read ) and add or change the given attributes ( write ). In the User group drop-down menu, select the group who is being granted permissions to view or edit the entries of users in the member group. In the Member user group drop-down menu, select the group whose entries can be edited by members of the delegation group. In the attributes box, select the check boxes by the attributes to which you want to grant permissions. Click the Add button to save the new delegation rule. 26.3. Viewing existing delegation rules using IdM WebUI Follow this procedure to view existing delegation rules using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . 26.4. Modifying a delegation rule using IdM WebUI Follow this procedure to modify an existing delegation rule using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . Click on the rule you want to modify. Make the desired changes: Change the name of the rule. Change granted permissions by selecting the check boxes that indicate whether users will have the right to view the given attributes ( read ) and add or change the given attributes ( write ). In the User group drop-down menu, select the group who is being granted permissions to view or edit the entries of users in the member group. In the Member user group drop-down menu, select the group whose entries can be edited by members of the delegation group. In the attributes box, select the check boxes by the attributes to which you want to grant permissions. 
To remove permissions for an attribute, uncheck the relevant check box. Click the Save button to save the changes. 26.5. Deleting a delegation rule using IdM WebUI Follow this procedure to delete an existing delegation rule using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . Select the check box next to the rule you want to remove. Click Delete . Click Delete to confirm. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/delegating-permissions-to-user-groups-to-manage-users-using-idm-webui_configuring-and-managing-idm
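Although this chapter covers the WebUI, the same delegation rules can usually be managed from the command line with the ipa utility. The sketch below is illustrative only: the rule name, group names, and attribute list are placeholders, and you should confirm the exact option names with ipa help delegation on your IdM version.
# Illustrative only: let members of 'managers' read and write two attributes of 'employees' members.
ipa delegation-add "managers can edit employee contact info" \
    --group=managers --membergroup=employees \
    --permissions=read --permissions=write \
    --attrs=telephonenumber --attrs=mobile
# Review or remove the rule later.
ipa delegation-show "managers can edit employee contact info"
ipa delegation-del "managers can edit employee contact info"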
Chapter 25. FeatureFlagService | Chapter 25. FeatureFlagService 25.1. GetFeatureFlags GET /v1/featureflags 25.1.1. Description 25.1.2. Parameters 25.1.3. Return Type V1GetFeatureFlagsResponse 25.1.4. Content Type application/json 25.1.5. Responses Table 25.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetFeatureFlagsResponse 0 An unexpected error response. GooglerpcStatus 25.1.6. Samples 25.1.7. Common object reference 25.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 25.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 25.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 25.1.7.3. V1FeatureFlag Field Name Required Nullable Type Description Format name String envVar String enabled Boolean 25.1.7.4. V1GetFeatureFlagsResponse Field Name Required Nullable Type Description Format featureFlags List of V1FeatureFlag | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/featureflagservice |
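Because the endpoint takes no parameters and returns a V1GetFeatureFlagsResponse, it can be exercised with a plain HTTP client. The sketch below is illustrative only: the Central hostname and the ROX_API_TOKEN value are placeholders, it assumes an RHACS Central instance reachable over HTTPS with a valid API token, and jq is optional (used here only to trim the output to the documented fields).
# Illustrative only: list feature flags and pick out name, envVar, and enabled.
# -k skips certificate verification, which may be needed for a self-signed Central certificate.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
    "https://central.example.com/v1/featureflags" \
    | jq '.featureFlags[] | {name, envVar, enabled}'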
Appendix B. Using KVM Virtualization on Multiple Architectures | Appendix B. Using KVM Virtualization on Multiple Architectures By default, KVM virtualization on Red Hat Enterprise Linux 7 is compatible with the AMD64 and Intel 64 architectures. However, starting with Red Hat Enterprise Linux 7.5, KVM virtualization is also supported on the following architectures, thanks to the introduction of the kernel-alt packages: IBM POWER IBM Z ARM systems (not supported) Note that when using virtualization on these architectures, the installation, usage, and feature support differ from AMD64 and Intel 64 in certain respects. For more information, see the sections below: B.1. Using KVM Virtualization on IBM POWER Systems Starting with Red Hat Enterprise Linux 7.5, KVM virtualization is supported on IBM POWER8 Systems and IBM POWER9 systems. However, IBM POWER8 does not use kernel-alt , which means that these two architectures differ in certain aspects. Installation To install KVM virtualization on Red Hat Enterprise Linux 7 for IBM POWER 8 and POWER9 Systems: Install the host system from the bootable image on the Customer Portal: IBM POWER8 IBM POWER9 For detailed instructions, see the Red Hat Enterprise Linux 7 Installation Guide . Ensure that your host system meets the hypervisor requirements: Verify that you have the correct machine type: The output of this command must include the PowerNV entry, which indicates that you are running on a supported PowerNV machine type: Load the KVM-HV kernel module: Verify that the KVM-HV kernel module is loaded: If KVM-HV was loaded successfully, the output of this command includes kvm_hv . Install the qemu-kvm-ma package in addition to other virtualization packages described in Chapter 2, Installing the Virtualization Packages . Architecture Specifics KVM virtualization on Red Hat Enterprise Linux 7.5 for IBM POWER differs from KVM on AMD64 and Intel 64 systems in the following: The recommended minimum memory allocation for a guest on an IBM POWER host is 2GB RAM . The SPICE protocol is not supported on IBM POWER systems. To display the graphical output of a guest, use the VNC protocol. In addition, only the following virtual graphics card devices are supported: vga - only supported in -vga std mode and not in -vga cirrus mode virtio-vga virtio-gpu The following virtualization features are disabled on AMD64 and Intel 64 hosts, but work on IBM POWER. However, they are not supported by Red Hat, and therefore not recommended for use: I/O threads SMBIOS configuration is not available. POWER8 guests, including compatibility mode guests, may fail to start with an error similar to: This is significantly more likely to occur on guests that use Red Hat Enterprise Linux 7.3 or prior. To fix this problem, increase the CMA memory pool available for the guest's hashed page table (HPT) by adding kvm_cma_resv_ratio= memory to the host's kernel command line, where memory is the percentage of host memory that should be reserved for the CMA pool (defaults to 5). Transparent huge pages (THPs) currently do not provide any notable performance benefits on IBM POWER8 guests Also note that the sizes of static huge pages on IBM POWER8 systems are 16MiB and 16GiB, as opposed to 2MiB and 1GiB on AMD64 and Intel 64 and on IBM POWER9. As a consequence, migrating a guest from an IBM POWER8 host to an IBM POWER9 host fails if the guest is configured with static huge pages. 
In addition, to be able to use static huge pages or THPs on IBM POWER8 guests, you must first set up huge pages on the host . A number of virtual peripheral devices that are supported on AMD64 and Intel 64 systems are not supported on IBM POWER systems, or a different device is supported as a replacement: Devices used for PCI-E hierarchy, including the ioh3420 and xio3130-downstream devices, are not supported. This functionality is replaced by multiple independent PCI root bridges, provided by the spapr-pci-host-bridge device. UHCI and EHCI PCI controllers are not supported. Use OHCI and XHCI controllers instead. IDE devices, including the virtual IDE CD-ROM ( ide-cd ) and the virtual IDE disk ( ide-hd ), are not supported. Use the virtio-scsi and virtio-blk devices instead. Emulated PCI NICs ( rtl8139 ) are not supported. Use the virtio-net device instead. Sound devices, including intel-hda , hda-output , and AC97 , are not supported. USB redirection devices, including usb-redir and usb-tablet , are not supported. The kvm-clock service does not have to be configured for time management on IBM POWER systems. The pvpanic device is not supported on IBM POWER systems. However, an equivalent functionality is available and activated on this architecture by default. To enable it on a guest, use the <on_crash> configuration element with the preserve value. In addition, make sure to remove the <panic> element from the <devices> section, as its presence can lead to the guest failing to boot on IBM POWER systems. On IBM POWER8 systems, the host machine must run in single-threaded mode to support guests. This is automatically configured if the qemu-kvm-ma packages are installed. However, guests running on single-threaded hosts can still use multiple threads. When an IBM POWER virtual machine (VM) running on a RHEL 7 host is configured with a NUMA node that uses zero memory ( memory='0' ), the VM does not work correctly. As a consequence, Red Hat does not support IBM POWER VMs with zero-memory NUMA nodes on RHEL 7 | [
"grep ^platform /proc/cpuinfo",
"platform : PowerNV",
"modprobe kvm_hv",
"lsmod | grep kvm",
"qemu-kvm: Failed to allocate KVM HPT of order 33 (try smaller maxmem?): Cannot allocate memory"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/appe-KVM_on_multiarch |
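For the HPT allocation failure mentioned above, the kvm_cma_resv_ratio=memory parameter has to end up on the host kernel command line. A minimal sketch, assuming a grubby-managed boot loader on the RHEL 7 host and using 10 as an example percentage (tune the value to your guests' memory needs):
# Illustrative only: reserve 10% of host memory for the CMA pool used for guest hashed page tables,
# then reboot the host so the new command line takes effect.
grubby --update-kernel=ALL --args="kvm_cma_resv_ratio=10"
reboot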
Chapter 11. Known issues | Chapter 11. Known issues This part describes known issues in Red Hat Enterprise Linux 9.1. 11.1. Installer and image creation The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters do not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896) Local Media installation source is not detected when booting the installation from a USB that is created using a third party tool When booting the RHEL installation from a USB that is created using a third party tool, the installer fails to detect the Local Media installation source (only Red Hat CDN is detected). This issue occurs because the default boot option int.stage2= attempts to search for iso9660 image format. However, a third party tool might create an ISO image with a different format. As a workaround, use either of the following solution: When booting the installation, click the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo= . To create a bootable USB device on Windows, use Fedora Media Writer. When using a third party tool like Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third party tool to create a bootable USB device. For more information on the steps involved in performing any of the specified workaround, see, Installation media is not auto detected during the installation of RHEL 8.3 . (BZ#1877697) The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697) Driver disk menu fails to display user inputs on the console When you start RHEL installation using the inst.dd option on the Kernel command line with a driver disk, the console fails to display the user input. Consequently, it appears that the application does not respond to the user input and freezes, but displays the output which is confusing for users. However, this behavior does not affect the functionality, and user input gets registered after pressing Enter . As a workaround, to see the expected results, ignore the absence of user inputs in the console and press Enter when you finish adding inputs. (BZ#2109231) Unexpected SELinux policies on systems where Anaconda is running as an application When Anaconda is running as an application on an already installed system (for example to perform another installation to an image file using the -image anaconda option), the system is not prohibited to modify the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system and execute it in a temporary virtual machine. 
So that the SELinux policy on a production system is not modified. Running anaconda as part of the system installation process such as installing from boot.iso or dvd.iso is not affected by this issue. ( BZ#2050140 ) The USB CD-ROM drive is not available as an installation source in Anaconda Installation fails when the USB CD-ROM drive is the source for it and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk. To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from USB CD-ROM drive. As a result, the installation does not fail. ( BZ#1914955 ) Hard drive partitioned installations with iso9660 filesystem fails You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing a iso9660 file system partition. This happens even when RHEL is installed without using a DVD. To workaround this problem, add the following script in the kickstart file to format the disc before the installation starts. Note: Before performing the workaround, backup the data available on the disk. The wipefs command formats all the existing data from the disk. As a result, installations work as expected without any errors. ( BZ#1929105 ) Anaconda fails to verify existence of an administrator user account While installing RHEL using a graphical user interface, Anaconda fails to verify if the administrator account has been created. As a consequence, users might install a system without any administrator user account. To work around this problem, ensure you configure an administrator user account or the root password is set and the root account is unlocked. As a result, users can perform administrative tasks on the installed system. ( BZ#2047713 ) New XFS features prevent booting of PowerNV IBM POWER systems with firmware older than version 5.10 PowerNV IBM POWER systems use a Linux kernel for firmware, and use Petitboot as a replacement for GRUB. This results in the firmware kernel mounting /boot and Petitboot reading the GRUB config and booting RHEL. The RHEL 9 kernel introduces bigtime=1 and inobtcount=1 features to the XFS filesystem, which kernels with firmware older than version 5.10 do not understand. To work around this problem, you can use another filesystem for /boot , for example ext4. (BZ#1997832) Cannot install RHEL when PReP is not 4 or 8 MiB in size The RHEL installer cannot install the boot loader if the PowerPC Reference Platform (PReP) partition is of a different size than 4 MiB or 8 MiB on a disk that uses 4 kiB sectors. As a consequence, you cannot install RHEL on the disk. To work around the problem, make sure that the PReP partition is exactly 4 MiB or 8 MiB in size, and that the size is not rounded to another value. As a result, the installer can now install RHEL on the disk. (BZ#2026579) The installer displays an incorrect total disk space while custom partitioning with multipath devices The installer does not filter out individual paths of multipath devices while custom partitioning. This causes the installer to display individual paths to multipath devices and users can select individual paths to multipath devices for the created partitions. As a consequence, an incorrect sum of the total disk space is displayed. It is computed by adding the size of each individual path to the total disk space. 
As a workaround, use only the multipath devices and not individual paths while custom partitioning, and ignore the incorrectly computed total disk space. ( BZ#2052938 ) RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload When deploying rpm-ostree payloads, used for example in a RHEL for Edge installer image, the installer does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error: To work around this issue: Use an automatic partitioning scheme and do not add any mount points manually. Manually assign mount points only inside /var directory. For example, /var/ my-mount-point ), and the following standard directories: / , /boot , /var . As a result, the installation process finishes successfully. ( BZ#2125542 ) NetworkManager fails to start after the installation when connected to a network but without DHCP or a static IP address configured Starting with RHEL 9.0, Anaconda activates network devices automatically when there is no specific ip= or kickstart network configuration set. Anaconda creates a default persistent configuration file for each Ethernet device. The connection profile has the ONBOOT and autoconnect value set to true . As a consequence, during the start of the installed system, RHEL activates the network devices, and the networkManager-wait-online service fails. As a workaround, do one of the following: Delete all connections using the nmcli utility except one connection you want to use. For example: List all connection profiles: Delete the connection profiles that you do not require: Replace <connection_name> with the name of the connection you want to delete. Disable the auto connect network feature in Anaconda if no specific ip= or kickstart network configuration is set. In the Anaconda GUI, navigate to Network & Host Name . Select a network device to disable. Click Configure . On the General tab, deselect the Connect automatically with priority Click Save . (BZ#2115783) RHEL installer does not process the inst.proxy boot option correctly When running Anaconda, the installation program does not process the inst.proxy boot option correctly. As a consequence, you cannot use the specified proxy to fetch the installation image. To work around this issue: * Use the latest version of RHEL distribution. * Use proxy instead of inst.proxy boot option. (JIRA:RHELDOCS-18764) RHEL installation fails on IBM Z architectures with multi-LUNs RHEL installation fails on IBM Z architectures when using multiple LUNs during installation. Due to the multipath setup of FCP and the LUN auto-scan behavior, the length of the kernel command line in the configuration file exceeds 896 bytes. To work around this problem, you can do one of the following: Install the latest version of RHEL (RHEL 9.2 or later). Install the RHEL system with a single LUN and add additional LUNs post installation. Optimize the redundant zfcp entries in the boot configuration on the installed system. Create a physical volume ( pvcreate ) for each of the additional LUNs listed under /dev/mapper/ . Extend the VG with PVs, for example, vgextend <vg_name> /dev/mapper/mpathX . Increase the LV as needed for example, lvextend -r -l +100%FREE /dev/<vg name>/root . For more information, see the KCS solution . 
(JIRA:RHELDOCS-18638) RHEL installer does not automatically discover or use iSCSI devices as boot devices on aarch64 The absence of the iscsi_ibft kernel module in RHEL installers running on aarch64 prevents automatic discovery of iSCSI devices defined in firmware. These devices are not automatically visible in the installer nor selectable as boot devices when added manually by using the GUI. As a workaround, add the "inst.nonibftiscsiboot" parameter to the kernel command line when booting the installer and then manually attach iSCSI devices through the GUI. As a result, the installer can recognize the attached iSCSI devices as bootable and installation completes as expected. For more information, see KCS solution . (JIRA:RHEL-56135) Kickstart installation fails with an unknown disk error when 'ignoredisk' command precedes 'iscsi' command Installing RHEL by using the kickstart method fails if the ignoredisk command is placed before the iscsi command. This issue occurs because the iscsi command attaches the specified iSCSI device during command parsing, while the ignoredisk command resolves device specifications simultaneously. If the ignoredisk command references an iSCSI device name before it is attached by the iscsi command, the installation fails with an "unknown disk" error. As a workaround, ensure that the iscsi command is placed before the ignoredisk command in the Kickstart file to reference the iSCSI disk and enable successful installation. (JIRA:RHEL-13837) The services Kickstart command fails to disable the firewalld service A bug in Anaconda prevents the services --disabled=firewalld command from disabling the firewalld service in Kickstart. To work around this problem, use the firewall --disabled command instead. As a result, the firewalld service is disabled properly. (JIRA:RHEL-82566) 11.2. Subscription management The subscription-manager utility retains nonessential text in the terminal after completing a command Starting with RHEL 9.1, the subscription-manager utility displays progress information while processing an operation. For some languages (typically non-Latin), progress messages might not be cleared after the operation finishes. As a result, you might see parts of old progress messages in the terminal. Note that this is not a functional failure for subscription-manager . To work around this problem, perform either of the following steps: Include the --no-progress-messages option when running `subscription-manager`commands in the terminal Configure subscription-manager to operate without displaying progress messages by entering the following command: (BZ#2136694) 11.3. Software management The Installation process sometimes becomes unresponsive When you install RHEL, the installation process sometimes becomes unresponsive. The /tmp/packaging.log file displays the following message at the end: To workaround this problem, restart the installation process. ( BZ#2073510 ) A security DNF upgrade fails for packages that change their architecture through the upgrade The patch for BZ#2108969 , released with the RHBA-2022:8295 advisory, introduced the following regression: The DNF upgrade using security filters fails for packages that change their architecture from or to noarch through the upgrade. Consequently, it can leave the system in a vulnerable state. To work around this problem, perform the regular upgrade without security filters. (BZ#2108969) 11.4. 
Shells and command-line tools ReaR fails during recovery if the TMPDIR variable is set in the configuration file Setting and exporting TMPDIR in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file does not work and is deprecated. The ReaR default configuration file /usr/share/rear/conf/default.conf contains the following instructions: The instructions mentioned above do not work correctly because the TMPDIR variable has the same value in the rescue environment, which is not correct if the directory specified in the TMPDIR variable does not exist in the rescue image. As a consequence, setting and exporting TMPDIR in the /etc/rear/local.conf file leads to the following error when the rescue image is booted : or the following error and abort later, when running rear recover : To work around this problem, if you want to have a custom temporary directory, specify a custom directory for ReaR temporary files by exporting the variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=... statement and then execute the rear command in the same shell session or script. As a result, the recovery is successful in the described configuration. Jira:RHEL-24847 Renaming network interfaces using ifcfg files fails On RHEL 9, the initscripts package is not installed by default. Consequently, renaming network interfaces using ifcfg files fails. To solve this problem, Red Hat recommends that you use udev rules or link files to rename interfaces. For further details, see Consistent network interface device naming and the systemd.link(5) man page. If you cannot use one of the recommended solutions, install the initscripts package. (BZ#2018112) The chkconfig package is not installed by default in RHEL 9 The chkconfig package, which updates and queries runlevel information for system services, is not installed by default in RHEL 9. To manage services, use the systemctl commands or install the chkconfig package manually. For more information about systemd , see Managing systemd . For instructions on how to use the systemctl utility, see Managing system services with systemctl . (BZ#2053598) 11.5. Infrastructure services Both bind and unbound disable validation of SHA-1-based signatures The bind and unbound components disable validation support of all RSA/SHA1 (algorithm number 5) and RSASHA1-NSEC3-SHA1 (algorithm number 7) signatures, and the SHA-1 usage for signatures is restricted in the DEFAULT system-wide cryptographic policy. As a result, certain DNSSEC records signed with the SHA-1, RSA/SHA1, and RSASHA1-NSEC3-SHA1 digest algorithms fail to verify in Red Hat Enterprise Linux 9 and the affected domain names become vulnerable. To work around this problem, upgrade to a different signature algorithm, such as RSA/SHA-256 or elliptic curve keys. For more information and a list of top-level domains that are affected and vulnerable, see the DNSSEC records signed with RSASHA1 fail to verify solution. ( BZ#2070495 ) named fails to start if the same writable zone file is used in multiple zones BIND does not allow the same writable zone file in multiple zones. Consequently, if a configuration includes multiple zones which share a path to a file that can be modified by the named service, named fails to start. To work around this problem, use the in-view clause to share one zone between multiple views and make sure to use different paths for different zones. For example, include the view names in the path. 
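A minimal named.conf sketch of that workaround, with placeholder view, zone, and key names (the ddns-key is assumed to be defined elsewhere in the configuration): the dynamic zone is defined once with a view-specific file path and shared into the second view with in-view.
view "internal" {
    match-clients { 10.0.0.0/8; };
    zone "example.com" {
        type master;
        file "dynamic/internal/example.com.db";   # path includes the view name
        allow-update { key "ddns-key"; };
    };
};
view "external" {
    match-clients { any; };
    zone "example.com" {
        in-view "internal";   # reuse the zone instead of pointing at the same writable file
    };
};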
Note that writable zone files are typically used in zones with allowed dynamic updates, slave zones, or zones maintained by DNSSEC. ( BZ#1984982 ) Setting the console keymap requires the libxkbcommon library on your minimal install In RHEL 9, certain systemd library dependencies have been converted from dynamic linking to dynamic loading, so that your system opens and uses the libraries at runtime when they are available. With this change, a functionality that depends on such libraries is not available unless you install the necessary library. This also affects setting the keyboard layout on systems with a minimal install. As a result, the localectl --no-convert set-x11-keymap gb command fails. To work around this problem, install the libxkbcommon library: ( BZ#2214130 ) 11.6. Security OpenSSL does not detect if a PKCS #11 token supports the creation of raw RSA or RSA-PSS signatures The TLS 1.3 protocol requires support for RSA-PSS signatures. If a PKCS #11 token does not support raw RSA or RSA-PSS signatures, server applications that use the OpenSSL library fail to work with an RSA key if the key is held by the PKCS #11 token. As a result, TLS communication fails in the described scenario. To work around this problem, configure servers and clients to use TLS version 1.2 as the highest TLS protocol version available. (BZ#1681178) OpenSSL incorrectly handles PKCS #11 tokens that does not support raw RSA or RSA-PSS signatures The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file: As a result, a TLS connection can be established in the described scenario. (BZ#1685470) scp empties files copied to themselves when a specific syntax is used The scp utility changed from the Secure copy protocol (SCP) to the more secure SSH file transfer protocol (SFTP). Consequently, copying a file from a location to the same location erases the file content. The problem affects the following syntax: scp localhost:/myfile localhost:/myfile To work around this problem, do not copy files to a destination that is the same as the source location using this syntax. The problem has been fixed for the following syntaxes: scp /myfile localhost:/myfile scp localhost:~/myfile ~/myfile ( BZ#2056884 ) PSK ciphersuites do not work with the FUTURE crypto policy Pre-shared key (PSK) ciphersuites are not recognized as performing perfect forward secrecy (PFS) key exchange methods. As a consequence, the ECDHE-PSK and DHE-PSK ciphersuites do not work with OpenSSL configured to SECLEVEL=3 , for example with the FUTURE crypto policy. As a workaround, you can set a less restrictive crypto policy or set a lower security level ( SECLEVEL ) for applications that use PSK ciphersuites. ( BZ#2060044 ) GnuPG incorrectly allows using SHA-1 signatures even if disallowed by crypto-policies The GNU Privacy Guard (GnuPG) cryptographic software can create and verify signatures that use the SHA-1 algorithm regardless of the settings defined by the system-wide cryptographic policies. Consequently, you can use SHA-1 for cryptographic purposes in the DEFAULT cryptographic policy, which is not consistent with the system-wide deprecation of this insecure algorithm for signatures. 
To work around this problem, do not use GnuPG options that involve SHA-1. As a result, you will prevent GnuPG from lowering the default system security by using the non-secure SHA-1 signatures. ( BZ#2070722 ) gpg-agent does not work as an SSH agent in FIPS mode The gpg-agent tool creates MD5 fingerprints when adding keys to the ssh-agent program even though FIPS mode disables the MD5 digest. Consequently, the ssh-add utility fails to add the keys to the authentication agent. To work around the problem, create the ~/.gnupg/sshcontrol file without using the gpg-agent --daemon --enable-ssh-support command. For example, you can paste the output of the gpg --list-keys command in the <FINGERPRINT> 0 format to ~/.gnupg/sshcontrol . As a result, gpg-agent works as an SSH authentication agent. ( BZ#2073567 ) Default SELinux policy allows unconfined executables to make their stack executable The default state of the selinuxuser_execstack boolean in the SELinux policy is on, which means that unconfined executables can make their stack executable. Executables should not use this option, and it might indicate poorly coded executables or a possible attack. However, due to compatibility with other tools, packages, and third-party products, Red Hat cannot change the value of the boolean in the default policy. If your scenario does not depend on such compatibility aspects, you can turn the boolean off in your local policy by entering the command setsebool -P selinuxuser_execstack off . ( BZ#2064274 ) Remediating service-related rules during kickstart installations might fail During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues. ( BZ#1834716 ) Remediation of SCAP Audit rules fails incorrectly Bash remediation of some SCAP rules related to Audit configuration does not add the Audit key when remediating. This applies to the following rules: audit_rules_login_events audit_rules_login_events_faillock audit_rules_login_events_lastlog audit_rules_login_events_tallylog audit_rules_usergroup_modification audit_rules_usergroup_modification_group audit_rules_usergroup_modification_gshadow audit_rules_usergroup_modification_opasswd audit_rules_usergroup_modification_passwd audit_rules_usergroup_modification_shadow audit_rules_time_watch_localtime audit_rules_mac_modification audit_rules_networkconfig_modification audit_rules_sysadmin_actions audit_rules_session_events audit_rules_sudoers audit_rules_sudoers_d In consequence, if the relevant Audit rule already exists but does not fully conform to the OVAL check, the remediation fixes the functional part of the Audit rule, that is, the path and access bits, but does not add the Audit key. Therefore, the resulting Audit rule works correctly, but the SCAP rule incorrectly reports FAIL. To work around this problem, add the correct keys to the Audit rules manually. 
( BZ#2120978 ) SSH timeout rules in STIG profiles configure incorrect options An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles: DISA STIG for RHEL 9 ( xccdf_org.ssgproject.content_profile_stig ) DISA STIG with GUI for RHEL 9 ( xccdf_org.ssgproject.content_profile_stig_gui ) In each of these profiles, the following two rules are affected: When applied to SSH servers, each of these rules configures an option ( ClientAliveCountMax and ClientAliveInterval ) that no longer behaves as previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 9 and DISA STIG with GUI for RHEL 9 profiles until a solution is developed. ( BZ#2038978 ) Keylime might fail attestation of systems that access multiple IMA-measured files If a system that runs the Keylime agent accesses multiple files measured by the Integrity Measurement Architecture (IMA) in quick succession, the Keylime verifier might incorrectly process the IMA log additions. As a consequence, the running hash does not match the correct Platform Configuration Register (PCR) state, and the system fails attestation. There is currently no workaround. ( BZ#2138167 ) Keylime measured boot policy generation script might cause a segmentation fault and core dump The create_mb_refstate script, which generates policies for measure boot attestation in Keylime, might incorrectly calculate the data length in the DevicePath field instead of using the value of the LengthOfDevicePath field when handling the output of the tpm2_eventlog tool depending on the input provided. As a consequence, the script tries to access invalid memory using the incorrectly calculated length, which results in a segmentation fault and core dump. The main functionality of Keylime is not affected by this problem, but you might be unable to generate a measured boot policy. To work around this problem, do not use a measured boot policy or write the policy file manually from the data obtained using the tpm2_eventlog tool from the tpm2-tools package. ( BZ#2140670 ) Some TPM certificates cause Keylime registrar to crash The require_ek_cert configuration option in tenant.conf , which should be enabled in production deployments, determines whether the Keylime tenant requires an endorsement key (EK) certificate from the Trusted Platform Module (TPM). When performing the initial identity quote with require_ek_cert enabled, Kelime attempts to verify whether the TPM device on the agent is genuine by comparing the EK certificate against the trusted certificates present in the Keylime TPM certificate store. However, some certificates in the store are malformed x509 certificates and cause the Keylime registrar to crash. There is currently no simple workaround to this problem, except for setting require_ek_cert to false , and defining a custom script in the ek_check_script option that will perform EK validation. ( BZ#2142009 ) OpenSSH in RHEL 9.0-9.3 is not compatible with OpenSSL 3.2.2 The openssh packages provided by RHEL 9.0, 9.1, 9.2, and 9.3 strictly check for the OpenSSL version. Consequently, if you upgrade the openssl packages to version 3.2.2 and higher and you keep the openssh packages in version 8.7p1-34.el9_3.3 or earlier, the sshd service fails to start with an OpenSSL version mismatch error message. 
To work around this problem, upgrade the openssh packages to version 8.7p1-38.el9 and later. See the sshd not working, OpenSSL version mismatch solution (Red Hat Knowledgebase) for more information. (JIRA:RHELDOCS-19626) 11.7. Networking The nm-cloud-setup service removes manually-configured secondary IP addresses from interfaces Based on the information received from the cloud environment, the nm-cloud-setup service configures network interfaces. Disable nm-cloud-setup to manually configure interfaces. However, in certain cases, other services on the host can configure interfaces as well. For example, these services could add secondary IP addresses. To avoid that nm-cloud-setup removes secondary IP addresses: Stop and disable the nm-cloud-setup service and timer: Display the available connection profiles: Reactive the affected connection profiles: As a result, the service no longer removes manually-configured secondary IP addresses from interfaces. ( BZ#2151040 ) Failure to update the session key causes the connection to break Kernel Transport Layer Security (kTLS) protocol does not support updating the session key, which is used by the symmetric cipher. Consequently, the user cannot update the key, which causes a connection break. To work around this problem, disable kTLS. As a result, with the workaround, it is possible to successfully update the session key. (BZ#2013650) The initscripts package is not installed by default By default, the initscripts package is not installed. As a consequence, the ifup and ifdown utilities are not available. As an alternative, use the nmcli connection up and nmcli connection down commands to enable and disable connections. If the suggested alternative does not work for you, report the problem and install the NetworkManager-initscripts-updown package, which provides a NetworkManager solution for the ifup and ifdown utilities. ( BZ#2082303 ) 11.8. Kernel The mlx5 driver fails while using Mellanox ConnectX-5 adapter In Ethernet switch device driver model ( switchdev ) mode, mlx5 driver fails when configured with device managed flow steering (DMFS) parameter and ConnectX-5 adapter supported hardware. As a consequence, you can see the following error message: To workaround this problem, you need to use the software managed flow steering (SMFS) parameter instead of DMFS. (BZ#2180665) FADump enabled with Secure Boot might lead to GRUB Out of Memory (OOM) In the Secure Boot environment, GRUB and PowerVM together allocate a 512 MB memory region, known as the Real Mode Area (RMA), for boot memory. The region is divided among the boot components and, if any component exceeds its allocation, out-of-memory failures occur. Generally, the default installed initramfs file system and the vmlinux symbol table are within the limits to avoid such failures. However, if Firmware Assisted Dump (FADump) is enabled in the system, the default initramfs size can increase and exceed 95 MB. As a consequence, every system reboot leads to a GRUB OOM state. To avoid this issue, do not use Secure Boot and FADump together. For more information and methods on how to work around this issue, see https://www.ibm.com/support/pages/node/6846531 . (BZ#2149172) weak-modules from kmod fails to work with module inter-dependencies The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. 
However, while checking modules' kernel compatibility, weak-modules processes modules symbol dependencies from higher to lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario. To work around the problem, build or put the extra modules against the latest stock kernel before you install the new kernel. (BZ#2103605) The kdump service fails to build the initrd file on IBM Z systems On the 64-bit IBM Z systems, the kdump service fails to load the initial RAM disk ( initrd ) when znet related configuration information such as s390-subchannels reside in an inactive NetworkManager connection profile. Consequently, the kdump mechanism fails with the following error: As a workaround, use one of the following solutions: Configure a network bond or bridge by re-using the connection profile that has the znet configuration information: Copy the znet configuration information from the inactive connection profile to the active connection profile: Run the nmcli command to query the NetworkManager connection profiles: Update the active profile with configuration information from the inactive connection: Restart the kdump service for changes to take effect: ( BZ#2064708 ) The kdump mechanism fails to capture the vmcore file on LUKS-encrypted targets When running kdump on systems with Linux Unified Key Setup (LUKS) encrypted partitions, systems require a certain amount of available memory. When the available memory is less than the required amount of memory, the systemd-cryptsetup service fails to mount the partition. Consequently, the second kernel fails to capture the crash dump file ( vmcore ) on LUKS-encrypted targets. With the kdumpctl estimate command, you can query the Recommended crashkernel value , which is the recommended memory size required for kdump . To work around this problem, use following steps to configure the required memory for kdump on LUKS encrypted targets: Print the estimate crashkernel value: Configure the amount of required memory by increasing the crashkernel value: Reboot the system for changes to take effect. As a result, kdump works correctly on systems with LUKS-encrypted partitions. (BZ#2017401) Allocating crash kernel memory fails at boot time On certain Ampere Altra systems, allocating the crash kernel memory for kdump usage fails during boot when the available memory is below 1 GB. Consequently, the kdumpctl command fails to start the kdump service. To workaround this problem, do one of the following: Decrease the value of the crashkernel parameter by a minimum of 240 MB to fit the size requirement, for example crashkernel=240M . Use the crashkernel=x,high option to reserve crash kernel memory above 4 GB for kdump . As a result, the crash kernel memory allocation for kdump does not fail on Ampere Altra systems. ( BZ#2065013 ) The Delay Accounting functionality does not display the SWAPIN and IO% statistics columns by default The Delayed Accounting functionality, unlike early versions, is disabled by default. Consequently, the iotop application does not show the SWAPIN and IO% statistics columns and displays the following warning: The Delay Accounting functionality, using the taskstats interface, provides the delay statistics for all tasks or threads that belong to a thread group. 
Delays in task execution occur when they wait for a kernel resource to become available, for example, a task waiting for a free CPU to run on. The statistics help in setting a task's CPU priority, I/O priority, and rss limit values appropriately. As a workaround, you can enable the delayacct boot option either at runtime or boot. To enable delayacct at runtime, enter: Note that this command enables the feature system wide, but only for the tasks that you start after running this command. To enable delayacct permanently at boot, use one of the following procedures: Edit the /etc/sysctl.conf file to override the default parameters: Add the following entry to the /etc/sysctl.conf file: For more information, see How to set sysctl variables on Red Hat Enterprise Linux . Reboot the system for changes to take effect. Edit the GRUB 2 configuration file to override the default parameters: Append the delayacct option to the /etc/default/grub file's GRUB _CMDLINE_LINUX entry. Run the grub2-mkconfig utility to regenerate the boot configuration: For more information, see How do I permanently modify the kernel command line? . Reboot the system for changes to take effect. As a result, the iotop application displays the SWAPIN and IO% statistics columns. (BZ#2132480) kTLS does not support offloading of TLS 1.3 to NICs Kernel Transport Layer Security (kTLS) does not support offloading of TLS 1.3 to NICs. Consequently, software encryption is used with TLS 1.3 even when the NICs support TLS offload. To work around this problem, disable TLS 1.3 if offload is required. As a result, you can offload only TLS 1.2. When TLS 1.3 is in use, there is lower performance, since TLS 1.3 cannot be offloaded. (BZ#2000616) The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4 After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 8.7 and/or RHEL 9.1 (and later), the hardware gets into an incorrect internal state. reports its state incorrectly. Consequently, Intel Wifi 6 cards may not work and display the error message: An unconfirmed work around is to power off the system and back on again. Do not reboot. (BZ#2129288) dkms provides an incorrect warning on program failure with correctly compiled drivers on 64-bit ARM CPUs The Dynamic Kernel Module Support ( dkms ) utility does not recognize that the kernel headers for 64-bit ARM CPUs work for both the kernels with 4 kilobytes and 64 kilobytes page sizes. As a result, when the kernel update is performed and the kernel-64k-devel package is not installed, dkms provides an incorrect warning on why the program failed on correctly compiled drivers. To work around this problem, install the kernel-headers package, which contains header files for both types of ARM CPU architectures and is not specific to dkms and its requirements. (JIRA:RHEL-25967) 11.9. Boot loader The behavior of grubby diverges from its documentation When you add a new kernel using the grubby tool and do not specify any arguments, grubby passes the default arguments to the new entry. This behavior occurs even without passing the --copy-default argument. Using --args and --copy-default options ensures those arguments are appended to the default arguments as stated in the grubby documentation. However, when you add additional arguments, such as USDtuned_params , the grubby tool does not pass these arguments unless the --copy-default option is invoked. 
In this situation, two workarounds are available: Either set the root= argument and leave --args empty: Or set the root= argument and the specified arguments, but not the default ones: ( BZ#2127453 ) 11.10. File systems and storage RHEL instances on Azure fail to boot if provisioned by cloud-init and configured with an NFSv3 mount entry Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab file. (BZ#2081114) Anaconda fails to log in to the iSCSI server using the no authentication method after an unsuccessful CHAP authentication attempt When you add iSCSI discs using CHAP authentication and the login attempt fails due to incorrect credentials, a relogin attempt to the discs with the no authentication method fails. To work around this problem, close the current session and log in using the no authentication method. (BZ#1983602) Device Mapper Multipath is not supported with NVMe/TCP Using Device Mapper Multipath with the nvme-tcp driver can result in Call Trace warnings and system instability. To work around this problem, NVMe/TCP users must enable native NVMe multipathing and not use the device-mapper-multipath tools with NVMe. By default, native NVMe multipathing is enabled in RHEL 9. For more information, see Enabling multipathing on NVMe devices . (BZ#2033080) The blk-availability systemd service deactivates complex device stacks In systemd , the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command: As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages. (BZ#2011699) supported_speeds sysfs attribute reports incorrect speed values Previously, due to an incorrect definition in the qla2xxx driver, the supported_speeds sysfs attribute for the HBA reported 20 Gb/s speed instead of the expected 64 Gb/s speed. Consequently, if the HBA supported 64 Gb/s link speed, the sysfs supported_speeds value was incorrect, which affected the reported speed value. But now the supported_speeds sysfs attribute for the HBA returns a 100 Gb/s speed instead of the intended 64 Gb/s, and 50 Gb/s speed instead of the intended 128 Gb/s speed. This only affects the reported speed value, and the actual link rates used on the Fibre connection are correct. (BZ#2069758) 11.11. Dynamic programming languages, web and database servers The --ssl-fips-mode option in MySQL and MariaDB does not change FIPS mode The --ssl-fips-mode option in MySQL and MariaDB in RHEL works differently than in upstream. In RHEL 9, if you use --ssl-fips-mode as an argument for the mysqld or mariadbd daemon, or if you use ssl-fips-mode in the MySQL or MariaDB server configuration files, --ssl-fips-mode does not change FIPS mode for these database servers. Instead: If you set --ssl-fips-mode to ON , the mysqld or mariadbd server daemon does not start. If you set --ssl-fips-mode to OFF on a FIPS-enabled system, the mysqld or mariadbd server daemons still run in FIPS mode. This is expected because FIPS mode should be enabled or disabled for the whole RHEL system, not for specific components.
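To confirm the system-wide FIPS status that actually governs these database servers, you can run a quick read-only check with the fips-mode-setup utility, which RHEL 9 ships in the crypto-policies-scripts package; this check does not change any configuration:
fips-mode-setup --check
If the output reports that FIPS mode is enabled, the mysqld and mariadbd daemons run in FIPS mode regardless of the --ssl-fips-mode setting.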
Therefore, do not use the --ssl-fips-mode option in MySQL or MariaDB in RHEL. Instead, ensure FIPS mode is enabled on the whole RHEL system: Preferably, install RHEL with FIPS mode enabled. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. For information about installing RHEL in FIPS mode, see Installing the system in FIPS mode . Alternatively, you can switch FIPS mode for the entire RHEL system by following the procedure in Switching the system to FIPS mode . ( BZ#1991500 ) 11.12. Compilers and development tools Certain symbol-based probes do not work in SystemTap on the 64-bit ARM architecture Kernel configuration disables certain functionality needed for SystemTap . Consequently, some symbol-based probes do not work on the 64-bit ARM architecture. As a result, affected SystemTap scripts may not run or may not collect hits on desired probe points. Note that this bug has been fixed for the remaining architectures with the release of the RHBA-2022:5259 advisory. (BZ#2083727) 11.13. Identity Management MIT Kerberos does not support ECC certificates for PKINIT MIT Kerberos does not implement the RFC5349 request for comments document, which describes the design of elliptic-curve cryptography (ECC) support in Public Key Cryptography for initial authentication (PKINIT). Consequently, the MIT krb5-pkinit package, used by RHEL, does not support ECC certificates. For more information, see Elliptic Curve Cryptography (ECC) Support for Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) . ( BZ#2106043 ) The DEFAULT:SHA1 sub-policy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs The SHA-1 digest algorithm has been deprecated in RHEL 9, and CMS messages for Public Key Cryptography for initial authentication (PKINIT) are now signed with the stronger SHA-256 algorithm. However, the Active Directory (AD) Kerberos Distribution Center (KDC) still uses the SHA-1 digest algorithm to sign CMS messages. As a result, RHEL 9 Kerberos clients fail to authenticate users by using PKINIT against an AD KDC. To work around the problem, enable support for the SHA-1 algorithm on your RHEL 9 systems with the following command: ( BZ#2060798 ) The PKINIT authentication of a user fails if a RHEL 9 Kerberos agent communicates with a non-RHEL-9 and non-AD Kerberos agent If a RHEL 9 Kerberos agent, either a client or Kerberos Distribution Center (KDC), interacts with a non-RHEL-9 Kerberos agent that is not an Active Directory (AD) agent, the PKINIT authentication of the user fails. To work around the problem, perform one of the following actions: Set the RHEL 9 agent's crypto-policy to DEFAULT:SHA1 to allow the verification of SHA-1 signatures: Update the non-RHEL-9 and non-AD agent to ensure it does not sign CMS data using the SHA-1 algorithm. For this, update your Kerberos client or KDC packages to the versions that use SHA-256 instead of SHA-1: CentOS 9 Stream: krb5-1.19.1-15 RHEL 8.7: krb5-1.18.2-17 RHEL 7.9: krb5-1.15.1-53 Fedora Rawhide/36: krb5-1.19.2-7 Fedora 35/34: krb5-1.19.2-3 As a result, the PKINIT authentication of the user works correctly. Note that for other operating systems, it is the krb5-1.20 release that ensures that the agent signs CMS data with SHA-256 instead of SHA-1. See also The DEFAULT:SHA1 sub-policy has to be set on RHEL 9 clients for PKINIT to work against AD KDCs . 
( BZ#2077450 ) Heimdal client fails to authenticate a user using PKINIT against RHEL 9 KDC By default, a Heimdal Kerberos client initiates the PKINIT authentication of an IdM user by using Modular Exponential (MODP) Diffie-Hellman Group 2 for Internet Key Exchange (IKE). However, the MIT Kerberos Distribution Center (KDC) on RHEL 9 only supports MODP Group 14 and 16. Consequently, the pre-authentication request fails with the krb5_get_init_creds: PREAUTH_FAILED error on the Heimdal client and the Key parameters not accepted error on the RHEL MIT KDC. To work around this problem, ensure that the Heimdal client uses MODP Group 14. Set the pkinit_dh_min_bits parameter in the libdefaults section of the client configuration file to 1759: As a result, the Heimdal client completes the PKINIT pre-authentication against the RHEL MIT KDC. ( BZ#2106296 ) IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate. ( BZ#2124243 ) IdM to AD cross-realm TGS requests fail The Privilege Attribute Certificate (PAC) information in IdM Kerberos tickets is now signed with AES SHA-2 HMAC encryption, which is not supported by Active Directory (AD). Consequently, IdM to AD cross-realm TGS requests, that is, two-way trust setups, are failing with the following error: ( BZ#2060421 ) IdM Vault encryption and decryption fails in FIPS mode The OpenSSL RSA-PKCS1v15 padding encryption is blocked if FIPS mode is enabled. Consequently, Identity Management (IdM) Vaults fail to work correctly as IdM is currently using the PKCS1v15 padding for wrapping the session key with the transport certificate. ( BZ#2089907 ) Migrated IdM users might be unable to log in due to mismatching domain SIDs If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs . (JIRA:RHELPLAN-109613) Directory Server terminates unexpectedly when started in referral mode Due to a bug, global referral mode does not work in Directory Server. If you start the ns-slapd process with the refer option as the dirsrv user, Directory Server ignores the port settings and terminates unexpectedly. Trying to run the process as the root user changes SELinux labels and prevents the service from starting in the future in normal mode. There are no workarounds available. ( BZ#2053204 ) Configuring a referral for a suffix fails in Directory Server If you set a back-end referral in Directory Server, setting the state of the backend using the dsconf <instance_name> backend suffix set --state referral command fails with the following error: As a consequence, configuring a referral for suffixes fails.
To work around the problem: Set the nsslapd-referral parameter manually: Set the back-end state: As a result, with the workaround, you can configure a referral for a suffix. ( BZ#2063140 ) The dsconf utility has no option to create fix-up tasks for the entryUUID plug-in The dsconf utility does not provide an option to create fix-up tasks for the entryUUID plug-in. As a result, administrators cannot use dsconf to create a task to automatically add entryUUID attributes to existing entries. As a workaround, create a task manually: After the task has been created, Directory Server fixes entries with missing or invalid entryUUID attributes. ( BZ#2047175 ) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 11.14. Desktop Firefox add-ons are disabled after upgrading to RHEL 9 If you upgrade from RHEL 8 to RHEL 9, all add-ons that you previously enabled in Firefox are disabled. To work around the problem, manually reinstall or update the add-ons. As a result, the add-ons are enabled as expected. ( BZ#2013247 ) User Creation screen is unresponsive When installing RHEL using a graphical user interface, the User Creation screen is unresponsive. As a consequence, creating users during installation is more difficult. To work around this problem, use one of the following solutions to create users: Run the installation in VNC mode and resize the VNC window. Create users after completing the installation process. ( BZ#2122636 ) VNC is not running after upgrading to RHEL 9 After upgrading from RHEL 8 to RHEL 9, the VNC server fails to start, even if it was previously enabled. To work around the problem, manually enable the vncserver service after the system upgrade: As a result, VNC is now enabled and starts after every system boot as expected. ( BZ#2060308 ) 11.15. Graphics infrastructures Matrox G200e shows no output on a VGA display Your display might show no graphical output if you use the following system configuration: The Matrox G200e GPU A display connected over the VGA controller As a consequence, you cannot use or install RHEL on this configuration.
To work around the problem, use the following procedure: Boot the system to the boot loader menu. Add the module_blacklist=mgag200 option to the kernel command line. As a result, RHEL boots and shows graphical output as expected, but the maximum resolution is limited to 1024x768 at the 16-bit color depth. (BZ#1960467) X.org configuration utilities do not work under Wayland X.org utilities for manipulating the screen do not work in the Wayland session. Notably, the xrandr utility does not work under Wayland due to its different approach to handling, resolutions, rotations, and layout. (JIRA:RHELPLAN-121049) NVIDIA drivers might revert to X.org Under certain conditions, the proprietary NVIDIA drivers disable the Wayland display protocol and revert to the X.org display server: If the version of the NVIDIA driver is lower than 470. If the system is a laptop that uses hybrid graphics. If you have not enabled the required NVIDIA driver options. Additionally, Wayland is enabled but the desktop session uses X.org by default if the version of the NVIDIA driver is lower than 510. (JIRA:RHELPLAN-119001) Night Light is not available on Wayland with NVIDIA When the proprietary NVIDIA drivers are enabled on your system, the Night Light feature of GNOME is not available in Wayland sessions. The NVIDIA drivers do not currently support Night Light . (JIRA:RHELPLAN-119852) 11.16. The web console VNC console works incorrectly at certain resolutions When using the Virtual Network Computing (VNC) console under certain display resolutions, you might experience a mouse offset issue or you might see only a part of the interface. Consequently, using the VNC console might not be possible. To work around this issue, you can try expanding the size of the VNC console or use the Desktop Viewer in the Console tab to launch the remote viewer instead. ( BZ#2030836 ) 11.17. Virtualization Installing a virtual machine over https or ssh in some cases fails Currently, the virt-install utility fails when attempting to install a guest operating system (OS) from an ISO source over a https or ssh connection - for example using virt-install --cdrom https://example/path/to/image.iso . Instead of creating a virtual machine (VM), the described operation terminates unexpectedly with an internal error: process exited while connecting to monitor message. Similarly, using the RHEL 9 web console to install a guest OS fails and displays an Unknown driver 'https' error if you use an https or ssh URL, or the Download OS function. To work around this problem, install qemu-kvm-block-curl and qemu-kvm-block-ssh on the host to enable https and ssh protocol support, respectively. Alternatively, use a different connection protocol or a different installation source. ( BZ#2014229 ) Using NVIDIA drivers in virtual machines disables Wayland Currently, NVIDIA drivers are not compatible with the Wayland graphical session. As a consequence, RHEL guest operating systems that use NVIDIA drivers automatically disable Wayland and load an Xorg session instead. This primarily occurs in the following scenarios: When you pass through an NVIDIA GPU device to a RHEL virtual machine (VM) When you assign an NVIDIA vGPU mediated device to a RHEL VM (JIRA:RHELPLAN-117234) The Milan VM CPU type is sometimes not available on AMD Milan systems On certain AMD Milan systems, the Enhanced REP MOVSB ( erms ) and Fast Short REP MOVSB ( fsrm ) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. 
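To verify whether the host firmware currently exposes these flags — a quick diagnostic you can run before applying the BIOS workaround described below — inspect the CPU flags reported by the kernel; the flag names erms and fsrm are the ones listed in /proc/cpuinfo:
grep -o -w -E 'erms|fsrm' /proc/cpuinfo | sort -u
If neither flag is printed, the features are disabled in the BIOS and the Milan CPU type is unlikely to be offered to VMs.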
In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host. (BZ#2077767) Disabling AVX causes VMs to become unbootable On a host machine that uses a CPU with Advanced Vector Extensions (AVX) support, attempting to boot a VM with AVX explicitly disabled currently fails, and instead triggers a kernel panic in the VM. (BZ#2005173) VNC is unable to connect to UEFI VMs after migration If you enable or disable a message queue while migrating a virtual machine (VM), the Virtual Network Computing (VNC) client will fail to connect to the VM after the migration is complete. This problem affects only UEFI-based VMs that use the Open Virtual Machine Firmware (OVMF). (JIRA:RHELPLAN-135600) Failover virtio NICs are not assigned an IP address on Windows virtual machines Currently, when starting a Windows virtual machine (VM) with only a failover virtio NIC, the VM fails to assign an IP address to the NIC. Consequently, the NIC is unable to set up a network connection. Currently, there is no workaround. ( BZ#1969724 ) Windows VM fails to get IP address after network interface reset Sometimes, Windows virtual machines fail to get an IP address after an automatic network interface reset. As a consequence, the VM fails to connect to the network. To work around this problem, disable and re-enable the network adapter driver in the Windows Device Manager. ( BZ#2084003 ) Broadcom network adapters work incorrectly on Windows VMs after a live migration Currently, network adapters from the Broadcom family of devices, such as Broadcom, Qlogic, or Marvell, cannot be hot-unplugged during live migration of Windows virtual machines (VMs). As a consequence, the adapters work incorrectly after the migration is complete. This problem affects only those adapters that are attached to Windows VMs using Single-root I/O virtualization (SR-IOV). ( BZ#2090712 , BZ#2091528 , BZ#2111319 ) A hostdev interface with failover settings cannot be hot-plugged after being hot-unplugged After removing a hostdev network interface with failover configuration from a running virtual machine (VM), the interface currently cannot be re-attached to the same running VM. ( BZ#2052424 ) Live post-copy migration of VMs with failover VFs fails Currently, attempting to post-copy migrate a running virtual machine (VM) fails if the VM uses a device with the virtual function (VF) failover capability enabled. To work around the problem, use the standard migration type, rather than post-copy migration. ( BZ#1817965 ) Host network cannot ping VMs with VFs during live migration When live migrating a virtual machine (VM) with a configured virtual function (VF), such as a VM that uses virtual SR-IOV software, the network of the VM is not visible to other devices and the VM cannot be reached by commands such as ping . After the migration is finished, however, the problem no longer occurs. ( BZ#1789206 ) Using a large number of queues might cause Windows virtual machines to fail Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues. This problem is caused by a limitation in the vTPM device. The vTPM device has a hardcoded limit on the maximum number of opened file descriptors. Since multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail.
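For reference, the queue count in question is the one configured on the VM's virtio network interface in the libvirt domain XML — the following is a minimal sketch using standard libvirt elements, with the value chosen only for illustration:
<interface type='network'>
<model type='virtio'/>
<driver name='vhost' queues='16'/>
</interface>
Keeping the queues value well below 250 avoids exhausting the vTPM file descriptor limit described above.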
To work around this problem, choose one of the following two options: Keep the vTPM device enabled, but use less than 250 queues. Disable the vTPM device to use more than 250 queues. ( BZ#2020146 ) PCIe ATS devices do not work on Windows VMs When you configure a PCIe Address Translation Services (ATS) device in the XML configuration of virtual machine (VM) with a Windows guest operating system, the guest does not enable the ATS device after booting the VM. This is because Windows currently does not support ATS on virtio devices. For more information, see the Red Hat KnowledgeBase . (BZ#2073872) Kdump fails on virtual machines with AMD SEV-SNP Currently, kdump fails on RHEL 9 virtual machines (VMs) that use the AMD Secure Encrypted Virtualization (SEV) with the Secure Nested Paging (SNP) feature. (JIRA:RHEL-10019) 11.18. RHEL in cloud environments Cloning or restoring RHEL 9 virtual machines that use LVM on Nutanix AHV causes non-root partitions to disappear When running a RHEL 9 guest operating system on a virtual machine (VM) hosted on the Nutanix AHV hypervisor, restoring the VM from a snapshot or cloning the VM currently causes non-root partitions in the VM to disappear if the guest is using Logical Volume Management (LVM). As a consequence, the following problems occur: After restoring the VM from a snapshot, the VM cannot boot, and instead enters emergency mode. A VM created by cloning cannot boot, and instead enters emergency mode. To work around these problems, do the following in emergency mode of the VM: Remove the LVM system devices file: rm /etc/lvm/devices/system.devices Recreate LVM device settings: vgimportdevices -a Reboot the VM This makes it possible for the cloned or restored VM to boot up correctly. Alternatively, to prevent the issue from occurring, do the following before cloning a VM or creating a VM snapshot: Uncomment the use_devicesfile = 0 line in the /etc/lvm/lvm.conf file Reboot the VM (BZ#2059545) Customizing RHEL 9 guests on ESXi sometimes causes networking problems Currently, customizing a RHEL 9 guest operating system in the VMware ESXi hypervisor does not work correctly with NetworkManager key files. As a consequence, if the guest is using such a key file, it will have incorrect network settings, such as the IP address or the gateway. For details and workaround instructions, see the VMware Knowledge Base . (BZ#2037657) Setting static IP in a RHEL virtual machine on a VMware host does not work Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to static IP and then reboot the VM, the VM's network will be changed to DHCP. ( BZ#1750862 ) 11.19. Supportability Timeout when running sos report on IBM Power Systems, Little Endian When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting huge content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin's timeout accordingly: For one-time setting, run: For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file: The example value is set to 1800. The particular timeout value highly depends on a specific system. 
To set the plugin's timeout appropriately, you can first estimate the time needed to collect the one plugin with no timeout by running the following command: (BZ#1869561) 11.20. Containers Running systemd within an older container image does not work Running systemd within an older container image, for example, centos:7 , does not work: To work around this problem, use the following commands: (JIRA:RHELPLAN-96940) | [
"%pre wipefs -a /dev/sda %end",
"The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.",
"nmcli connection show",
"nmcli connection delete <connection_name>",
"subscription-manager config --rhsm.progress_messages=0",
"10:20:56,416 DDEBUG dnf: RPM transaction over.",
"To have a specific working area directory prefix for Relax-and-Recover specify in /etc/rear/local.conf something like # export TMPDIR=\"/prefix/for/rear/working/directory\" # where /prefix/for/rear/working/directory must already exist. This is useful for example when there is not sufficient free space in /tmp or USDTMPDIR for the ISO image or even the backup archive.",
"mktemp: failed to create file via template '/prefix/for/rear/working/directory/tmp.XXXXXXXXXX': No such file or directory cp: missing destination file operand after '/etc/rear/mappings/mac' Try 'cp --help' for more information. No network interface mapping is specified in /etc/rear/mappings/mac",
"ERROR: Could not create build area",
"dnf install libxkbcommon",
"SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2",
"Title: Set SSH Client Alive Count Max to zero CCE Identifier: CCE-90271-8 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0 Title: Set SSH Idle Timeout Interval CCE Identifier: CCE-90811-1 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout",
"systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer",
"nmcli connection show",
"nmcli connection up \"<profile_name>\"",
"BUG: Bad page cache in process umount pfn:142b4b",
"dracut: Failed to set up znet kdump: mkdumprd: failed to make kdump initrd",
"nmcli connection modify enc600 master bond0 slave-type bond",
"nmcli connection show NAME UUID TYPE Device bridge-br0 ed391a43-bdea-4170-b8a2 bridge br0 bridge-slave-enc600 caf7f770-1e55-4126-a2f4 ethernet enc600 enc600 bc293b8d-ef1e-45f6-bad1 ethernet --",
"#!/bin/bash inactive_connection=enc600 active_connection=bridge-slave-enc600 for name in nettype subchannels options; do field=802-3-ethernet.s390-USDname val=USD(nmcli --get-values \"USDfield\"connection show \"USDinactive_connection\") nmcli connection modify \"USDactive_connection\" \"USDfield\" USDval\" done",
"kdumpctl restart",
"kdumpctl estimate",
"grubby --args=crashkernel=652M --update-kernel=ALL",
"reboot",
"CONFIG_TASK_DELAY_ACCT not enabled in kernel, cannot determine SWAPIN and IO%",
"echo 1 > /proc/sys/kernel/task_delayacct",
"kernel.task_delayacct = 1",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110 kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms) kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110",
"grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args \"root=/dev/mapper/rhel-root\" --title \"entry_with_root_set\"",
"grubby --add-kernel /boot/my_kernel --initrd /boot/my_initrd --args \"root=/dev/mapper/rhel-root some_args and_some_more\" --title \"entry_with_root_set_and_other_args_too\"",
"systemctl enable --now blk-availability.service",
"update-crypto-policies --set DEFAULT:SHA1",
"update-crypto-polices --set DEFAULT:SHA1",
"[libdefaults] pkinit_dh_min_bits = 1759",
"\"Generic error (see e-text) while getting credentials for <service principal>\"",
"Error: 103 - 9 - 53 - Server is unwilling to perform - [] - need to set nsslapd-referral before moving to referral state",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com dn: cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config changetype: modify add: nsslapd-referral nsslapd-referral: ldap://remote_server:389/dc=example,dc=com",
"dsconf <instance_name> backend suffix set --state referral",
"ldapadd -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn=entryuuid_fixup_<time_stamp>,cn=entryuuid task,cn=tasks,cn=config objectClass: top objectClass: extensibleObject basedn: <fixup base tree> cn: entryuuid_fixup_<time_stamp> filter: <filtered_entry>",
"systemctl enable --now vncserver@: port-number",
"sos report -k processor.timeout=1800",
"Specify any plugin options and their values here. These options take the form plugin_name.option_name = value #rpm.rpmva = off processor.timeout = 1800",
"time sos report -o processor -k processor.timeout=0 --batch --build",
"podman run --rm -ti centos:7 /usr/lib/systemd/systemd Storing signatures Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing.",
"mkdir /sys/fs/cgroup/systemd mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/known-issues |
Chapter 3. Installing a cluster | Chapter 3. Installing a cluster You can install a basic OpenShift Container Platform cluster using the Agent-based Installer. For procedures that include optional customizations you can make while using the Agent-based Installer, see Installing a cluster with customizations . 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access to. 3.2. Installing OpenShift Container Platform with the Agent-based Installer The following procedures deploy a single-node OpenShift Container Platform in a disconnected environment. You can use these procedures as a basis and modify according to your requirements. 3.2.1. Downloading the Agent-based Installer Use this procedure to download the Agent-based Installer and the CLI needed for your installation. Note Currently, downloading the Agent-based Installer is not supported on the IBM Z(R) ( s390x ) architecture. The recommended method is by creating PXE assets. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . Select the operating system and architecture for the OpenShift Installer and Command line interface . Click Download Installer to download and extract the install program. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . 3.2.2. Creating the configuration inputs You must create the configuration files that are used by the installation program to create the agent image. Procedure Place the openshift-install binary in a directory that is on your PATH. Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Create the install-config.yaml file by running the following command: USD cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: fd2e:6f44:5dd8:c956::/120 networkType: OVNKubernetes 3 serviceNetwork: - fd02::/112 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 additionalTrustBundle: | 7 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 8 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev EOF 1 Specify the system architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64 , amd64 , s390x , and ppc64le . Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. 
For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster". 2 Required. Specify your cluster name. 3 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 4 Specify your platform. Note For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file. 5 Specify your pull secret. 6 Specify your SSH public key. 7 Provide the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. You must specify this parameter if you are using a disconnected mirror registry. 8 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. You must specify this parameter if you are using a disconnected mirror registry. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using the oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. The ImageContentSourcePolicy resource is deprecated. Create the agent-config.yaml file by running the following command: USD cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: fd2e:6f44:5dd8:c956::50 1 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided host networkConfig parameter. 3.2.3. Creating and booting the agent image Use this procedure to boot the agent image on your machines. Procedure Create the agent image by running the following command: USD openshift-install --dir <install_directory> agent create image Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration. Boot the agent.x86_64.iso or agent.aarch64.iso image on the bare metal machines. 3.2.4. Verifying that the current installation host can pull release images After you boot the agent image and network services are made available to the host, the agent console application performs a pull check to verify that the current host can retrieve release images. If the primary pull check passes, you can quit the application to continue with the installation. If the pull check fails, the application performs additional checks, as seen in the Additional checks section of the TUI, to help you troubleshoot the problem. A failure for any of the additional checks is not necessarily critical as long as the primary pull check succeeds. If there are host network configuration issues that might cause an installation to fail, you can use the console application to make adjustments to your network configurations. 
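As a supplementary manual check — a sketch that assumes a disconnected mirror registry reachable as <local_registry>; substitute the host name used in your imageContentSources — you can confirm basic registry reachability from a shell on the host before relying on the console checks:
curl -kI https://<local_registry>/v2/
An HTTP 200 or 401 response indicates that the registry endpoint is reachable; a connection or name-resolution error points to the same host network problems that the agent console application reports.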
Important If the agent console application detects host network configuration issues, the installation workflow will be halted until the user manually stops the console application and signals the intention to proceed. Procedure Wait for the agent console application to check whether or not the configured release image can be pulled from a registry. If the agent console application states that the installer connectivity checks have passed, wait for the prompt to time out to continue with the installation. Note You can still choose to view or change network configuration settings even if the connectivity checks have passed. However, if you choose to interact with the agent console application rather than letting it time out, you must manually quit the TUI to proceed with the installation. If the agent console application checks have failed, which is indicated by a red icon beside the Release image URL pull check, use the following steps to reconfigure the host's network settings: Read the Check Errors section of the TUI. This section displays error messages specific to the failed checks. Select Configure network to launch the NetworkManager TUI. Select Edit a connection and select the connection you want to reconfigure. Edit the configuration and select OK to save your changes. Select Back to return to the main screen of the NetworkManager TUI. Select Activate a Connection . Select the reconfigured network to deactivate it. Select the reconfigured network again to reactivate it. Select Back and then select Quit to return to the agent console application. Wait at least five seconds for the continuous network checks to restart using the new network configuration. If the Release image URL pull check succeeds and displays a green icon beside the URL, select Quit to exit the agent console application and continue with the installation. 3.2.5. Tracking and verifying installation progress Use the following procedure to track installation progress and to verify a successful installation. Prerequisites You have configured a DNS record for the Kubernetes API server. Procedure Optional: To know when the bootstrap host (rendezvous host) reboots, run the following command: USD ./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <install_directory> , specify the path to the directory where the agent ISO was generated. 2 To view different installation details, specify warn , debug , or error instead of info . Example output ................................................................... ................................................................... INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. To track the progress and verify successful installation, run the following command: USD openshift-install --dir <install_directory> agent wait-for install-complete 1 1 For <install_directory> directory, specify the path to the directory where the agent ISO was generated. Example output ................................................................... ................................................................... INFO Cluster is installed INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com 3.3. Gathering log data from a failed Agent-based installation Use the following procedure to gather log data about a failed Agent-based installation to provide for a support case. Prerequisites You have configured a DNS record for the Kubernetes API server. Procedure Run the following command and collect the output: USD ./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug Example error message ... ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded If the output from the command indicates a failure, or if the bootstrap is not progressing, run the following command to connect to the rendezvous host and collect the output: USD ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz Note Red Hat Support can diagnose most issues using the data gathered from the rendezvous host, but if some hosts are not able to register, gathering this data from every host might be helpful. If the bootstrap completes and the cluster nodes reboot, run the following command and collect the output: USD ./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug If the output from the command indicates a failure, perform the following steps: Export the kubeconfig file to your environment by running the following command: USD export KUBECONFIG=<install_directory>/auth/kubeconfig Gather information for debugging by running the following command: USD oc adm must-gather Create a compressed file from the must-gather directory that was just created in your working directory by running the following command: USD tar cvaf must-gather.tar.gz <must_gather_directory> Excluding the /auth subdirectory, attach the installation directory used during the deployment to your support case on the Red Hat Customer Portal . Attach all other data gathered from this procedure to your support case. | [
"mkdir ~/<directory_name>",
"cat << EOF > ./my-cluster/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: fd2e:6f44:5dd8:c956::/120 networkType: OVNKubernetes 3 serviceNetwork: - fd02::/112 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 additionalTrustBundle: | 7 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 8 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev EOF",
"cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: fd2e:6f44:5dd8:c956::50 1 EOF",
"openshift-install --dir <install_directory> agent create image",
"./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2",
"................................................................ ................................................................ INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete",
"openshift-install --dir <install_directory> agent wait-for install-complete 1",
"................................................................ ................................................................ INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com",
"./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug",
"ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded",
"ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz",
"./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug",
"export KUBECONFIG=<install_directory>/auth/kubeconfig",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz <must_gather_directory>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installing-with-agent-basic |
B.102. webkitgtk | B.102. webkitgtk B.102.1. RHSA-2011:0177 - Moderate: webkitgtk security update Updated webkitgtk packages that fix several security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. WebKitGTK+ is the port of the portable web rendering engine WebKit to the GTK+ platform. CVE-2010-1782 , CVE-2010-1783 , CVE-2010-1784 , CVE-2010-1785 , CVE-2010-1787 , CVE-2010-1788 , CVE-2010-1790 , CVE-2010-1792 , CVE-2010-1807 , CVE-2010-1814 , CVE-2010-3114 , CVE-2010-3116 , CVE-2010-3119 , CVE-2010-3255 , CVE-2010-3812 , CVE-2010-4198 Multiple memory corruption flaws were found in WebKit. Malicious web content could cause an application using WebKitGTK+ to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2010-1780 , CVE-2010-1786 , CVE-2010-1793 , CVE-2010-1812 , CVE-2010-1815 , CVE-2010-3113 , CVE-2010-3257 , CVE-2010-4197 , CVE-2010-4204 Multiple use-after-free flaws were found in WebKit. Malicious web content could cause an application using WebKitGTK+ to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2010-4206 , CVE-2010-4577 Two array index errors, leading to out-of-bounds memory reads, were found in WebKit. Malicious web content could cause an application using WebKitGTK+ to crash. CVE-2010-3115 A flaw in WebKit could allow malicious web content to trick a user into thinking they are visiting the site reported by the location bar, when the page is actually content controlled by an attacker. CVE-2010-3259 It was found that WebKit did not correctly restrict read access to images created from the "canvas" element. Malicious web content could allow a remote attacker to bypass the same-origin policy and potentially access sensitive image data. CVE-2010-3813 A flaw was found in the way WebKit handled DNS prefetching. Even when it was disabled, web content containing certain "link" elements could cause WebKitGTK+ to perform DNS prefetching. Users of WebKitGTK+ should upgrade to these updated packages, which contain WebKitGTK+ version 1.2.6, and resolve these issues. All running applications that use WebKitGTK+ must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/webkitgtk |
Chapter 8. Renewing and changing the SSL certificate | Chapter 8. Renewing and changing the SSL certificate If your current SSL certificate has expired or will expire soon, you can either renew or replace the SSL certificate used by Ansible Automation Platform. You must renew the SSL certificate if you need to regenerate the SSL certificate with new information such as new hosts. You must replace the SSL certificate if you want to use an SSL certificate signed by an internal certificate authority. 8.1. Renewing the self-signed SSL certificate The following steps regenerate a new SSL certificate for both automation controller and automation hub. Procedure Add aap_service_regen_cert=true to the inventory file in the [all:vars] section: [all:vars] aap_service_regen_cert=true Run the installer. Verification Validate the CA file and server.crt file on automation controller: openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/tower/tower.cert openssl s_client -connect <AUTOMATION_HUB_URL>:443 Validate the CA file and server.crt file on automation hub: openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/pulp/certs/pulp_webserver.crt openssl s_client -connect <AUTOMATION_CONTROLLER_URL>:443 8.2. Changing SSL certificates To change the SSL certificate, you can edit the inventory file and run the installer. The installer verifies that all Ansible Automation Platform components are working. The installer can take a long time to run. Alternatively, you can change the SSL certificates manually. This is quicker, but there is no automatic verification. Red Hat recommends that you use the installer to make changes to your Ansible Automation Platform instance. 8.2.1. Prerequisites If there is an intermediate certificate authority, you must append it to the server certificate. Both automation controller and automation hub use NGINX so the server certificate must be in PEM format. Use the correct order for the certificates: The server certificate comes first, followed by the intermediate certificate authority. For further information, see the ssl certificate section of the NGINX documentation . 8.2.2. Changing the SSL certificate and key using the installer The following procedure describes how to change the SSL certificate and key in the inventory file. Procedure Copy the new SSL certificates and keys to a path relative to the Ansible Automation Platform installer. Add the absolute paths of the SSL certificates and keys to the inventory file. Refer to the Automation controller variables , Automation hub variables , and Event-Driven Ansible controller variables sections of the Red Hat Ansible Automation Platform Installation Guide for guidance on setting these variables. Automation controller: web_server_ssl_cert , web_server_ssl_key , custom_ca_cert Automation hub: automationhub_ssl_cert , automationhub_ssl_key , custom_ca_cert Event-Driven Ansible controller: automationedacontroller_ssl_cert , automationedacontroller_ssl_key , custom_ca_cert Note The custom_ca_cert must be the root certificate authority that signed the intermediate certificate authority. This file is installed in /etc/pki/ca-trust/source/anchors . Run the installer. 8.2.3. Changing the SSL certificate manually 8.2.3.1. Changing the SSL certificate and key manually on automation controller The following procedure describes how to change the SSL certificate and key manually on Automation Controller. 
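Before you begin, you can optionally confirm when the certificate currently in use expires — a quick, read-only check with the openssl utility, shown here against the automation controller certificate path used later in this chapter:
openssl x509 -in /etc/tower/tower.cert -noout -enddate
The same check works for the automation hub certificate at /etc/pulp/certs/pulp_webserver.crt.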
Procedure Backup the current SSL certificate: cp /etc/tower/tower.cert /etc/tower/tower.cert-USD(date +%F) Backup the current key files: cp /etc/tower/tower.key /etc/tower/tower.key-USD(date +%F)+ Copy the new SSL certificate to /etc/tower/tower.cert . Copy the new key to /etc/tower/tower.key . Restore the SELinux context: restorecon -v /etc/tower/tower.cert /etc/tower/tower.key Set appropriate permissions for the certificate and key files: chown root:awx /etc/tower/tower.cert /etc/tower/tower.key chmod 0600 /etc/tower/tower.cert /etc/tower/tower.key Test the NGINX configuration: nginx -t Reload NGINX: systemctl reload nginx.service Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 8.2.3.2. Changing the SSL certificate and key on automation controller on OpenShift Container Platform The following procedure describes how to change the SSL certificate and key for automation controller running on OpenShift Container Platform. Procedure Copy the signed SSL certificate and key to a secure location. Create a TLS secret within OpenShift: oc create secret tls USD{CONTROLLER_INSTANCE}-certs-USD(date +%F) --cert=/path/to/ssl.crt --key=/path/to/ssl.key Modify the automation controller custom resource to add route_tls_secret and the name of the new secret to the spec section. oc edit automationcontroller/USD{CONTROLLER_INSTANCE} ... spec: route_tls_secret: automation-controller-certs-2023-04-06 ... The name of the TLS secret is arbitrary. In this example, it is timestamped with the date that the secret is created, to differentiate it from other TLS secrets applied to the automation controller instance. Wait a few minutes for the changes to be applied. Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 8.2.3.3. Changing the SSL certificate and key on Event-Driven Ansible controller The following procedure describes how to change the SSL certificate and key manually on Event-Driven Ansible controller. Procedure Backup the current SSL certificate: cp /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.cert-USD(date +%F) Backup the current key files: cp /etc/ansible-automation-platform/eda/server.key /etc/ansible-automation-platform/eda/server.key-USD(date +%F) Copy the new SSL certificate to /etc/ansible-automation-platform/eda/server.cert . Copy the new key to /etc/ansible-automation-platform/eda/server.key . Restore the SELinux context: restorecon -v /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key Set appropriate permissions for the certificate and key files: chown root:eda /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key chmod 0600 /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key Test the NGINX configuration: nginx -t Reload NGINX: systemctl reload nginx.service Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 8.2.3.4. Changing the SSL certificate and key manually on automation hub The following procedure describes how to change the SSL certificate and key manually on automation hub. 
Procedure Backup the current SSL certificate: cp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.crt-USD(date +%F) Backup the current key files: cp /etc/pulp/certs/pulp_webserver.key /etc/pulp/certs/pulp_webserver.key-USD(date +%F) Copy the new SSL certificate to /etc/pulp/certs/pulp_webserver.crt . Copy the new key to /etc/pulp/certs/pulp_webserver.key . Restore the SELinux context: restorecon -v /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key Set appropriate permissions for the certificate and key files: chown root:pulp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key chmod 0600 /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key Test the NGINX configuration: nginx -t Reload NGINX: systemctl reload nginx.service Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 | [
"[all:vars] aap_service_regen_cert=true",
"openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/tower/tower.cert openssl s_client -connect <AUTOMATION_HUB_URL>:443",
"openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/pulp/certs/pulp_webserver.crt openssl s_client -connect <AUTOMATION_CONTROLLER_URL>:443",
"cp /etc/tower/tower.cert /etc/tower/tower.cert-USD(date +%F)",
"cp /etc/tower/tower.key /etc/tower/tower.key-USD(date +%F)+",
"restorecon -v /etc/tower/tower.cert /etc/tower/tower.key",
"chown root:awx /etc/tower/tower.cert /etc/tower/tower.key chmod 0600 /etc/tower/tower.cert /etc/tower/tower.key",
"nginx -t",
"systemctl reload nginx.service",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443",
"create secret tls USD{CONTROLLER_INSTANCE}-certs-USD(date +%F) --cert=/path/to/ssl.crt --key=/path/to/ssl.key",
"edit automationcontroller/USD{CONTROLLER_INSTANCE}",
"spec: route_tls_secret: automation-controller-certs-2023-04-06",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443",
"cp /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.cert-USD(date +%F)",
"cp /etc/ansible-automation-platform/eda/server.key /etc/ansible-automation-platform/eda/server.key-USD(date +%F)",
"restorecon -v /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key",
"chown root:eda /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key",
"chmod 0600 /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key",
"nginx -t",
"systemctl reload nginx.service",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443",
"cp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.crt-USD(date +%F)",
"cp /etc/pulp/certs/pulp_webserver.key /etc/pulp/certs/pulp_webserver.key-USD(date +%F)",
"restorecon -v /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key",
"chown root:pulp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key",
"chmod 0600 /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key",
"nginx -t",
"systemctl reload nginx.service",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_operations_guide/changing-ssl-certs-keys |
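Before dropping a new certificate and key into any of the procedures above, it is worth confirming that they actually belong together and that the certificate has not already expired. A minimal pre-check sketch, assuming an RSA key and placeholder /path/to/new.* file names:

openssl x509 -noout -modulus -in /path/to/new.crt | openssl md5
openssl rsa -noout -modulus -in /path/to/new.key | openssl md5
openssl x509 -noout -enddate -in /path/to/new.crt

If the two digests differ, the key does not match the certificate and NGINX (or the OpenShift route) will fail to serve the new pair.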
Chapter 3. Updating Red Hat build of OpenJDK container images | Chapter 3. Updating Red Hat build of OpenJDK container images To ensure that an Red Hat build of OpenJDK container with Java applications includes the latest security updates, rebuild the container. Procedure Pull the base Red Hat build of OpenJDK image. Deploy the Red Hat build of OpenJDK application. For more information, see Deploying Red Hat build of OpenJDK applications in containers . The Red Hat build of OpenJDK container with the Red Hat build of OpenJDK application is updated. Additional resources For more information, see Red Hat OpenJDK Container images . Revised on 2024-05-09 14:48:54 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/packaging_red_hat_build_of_openjdk_21_applications_in_containers/updating-openjdk-container-images |
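A minimal rebuild sketch, assuming a Containerfile in the current directory that builds the application on top of the OpenJDK base image; the exact image name and tag are assumptions and should be checked against the Red Hat Ecosystem Catalog:

# pull the base image again so the rebuild picks up the latest security fixes
podman pull registry.access.redhat.com/ubi9/openjdk-21:latest
# rebuild the application image on top of the refreshed base
podman build -t my-openjdk-app:latest .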
39.3. Replacing Red Hat Enterprise Linux with MS-DOS or Legacy Versions of Microsoft Windows | 39.3. Replacing Red Hat Enterprise Linux with MS-DOS or Legacy Versions of Microsoft Windows In DOS and Windows, use the Windows fdisk utility to create a new MBR with the undocumented flag /mbr . This ONLY rewrites the MBR to boot the primary DOS partition. The command should look like the following: If you need to remove Linux from a hard drive and have attempted to do this with the default DOS (Windows) fdisk , you will experience the Partitions exist but they do not exist problem. The best way to remove non-DOS partitions is with a tool that understands partitions other than DOS. To begin, insert the Red Hat Enterprise Linux DVD and boot your system. When the boot prompt appears, type: linux rescue . This starts the rescue mode program. You are prompted for your keyboard and language requirements. Enter these values as you would during the installation of Red Hat Enterprise Linux. , a screen appears telling you that the program attempts to find a Red Hat Enterprise Linux install to rescue. Select Skip on this screen. After selecting Skip , you are given a command prompt where you can access the partitions you would like to remove. First, type the command list-harddrives . This command lists all hard drives on your system that are recognizable by the installation program, as well as their sizes in megabytes. Warning Be careful to remove only the necessary Red Hat Enterprise Linux partitions. Removing other partitions could result in data loss or a corrupted system environment. To remove partitions, use the partitioning utility parted . Start parted , where /dev/hda is the device on which to remove the partition: Using the print command, view the current partition table to determine the minor number of the partition to remove: The print command also displays the partition's type (such as linux-swap, ext2, ext3, ext4 and so on). Knowing the type of the partition helps you in determining whether to remove the partition. Remove the partition with the command rm . For example, to remove the partition with minor number 3: Important The changes start taking place as soon as you press [Enter], so review the command before committing to it. After removing the partition, use the print command to confirm that it is removed from the partition table. Once you have removed the Linux partitions and made all of the changes you need to make, type quit to quit parted . After quitting parted , type exit at the boot prompt to exit rescue mode and reboot your system, instead of continuing with the installation. The system should reboot automatically. If it does not, you can reboot your computer using Control + Alt + Delete . | [
"fdisk /mbr",
"parted /dev/hda",
"print",
"rm 3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-x86-uninstall-legacy |
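Strung together, the commands above form a short rescue-mode session; /dev/hda and minor number 3 are illustrative values that must be replaced with what list-harddrives and print report on your system:

list-harddrives
parted /dev/hda
(parted) print
(parted) rm 3
(parted) print
(parted) quit
exit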
14.15.2.4. snapshot-edit-domain | 14.15.2.4. snapshot-edit-domain This command is used to edit the snapshot that is currently in use. To use, run: If both snapshotname and --current are specified, it forces the edited snapshot to become the current snapshot. If snapshotname is omitted, then --current must be supplied, in order to edit the current snapshot. This is equivalent to the following command sequence below, but it also includes some error checking: If --rename is specified, then the resulting edited file gets saved in a different file name. If --clone is specified, then changing the snapshot name will create a clone of the snapshot metadata. If neither is specified, then the edits will not change the snapshot name. Note that changing a snapshot name must be done with care, since the contents of some snapshots, such as internal snapshots within a single qcow2 file, are accessible only from the original snapshot filename. | [
"virsh snapshot-edit domain [snapshotname] [--current] {[--rename] [--clone]}",
"virsh snapshot-dumpxml dom name > snapshot.xml vi snapshot.xml [note - this can be any editor] virsh snapshot-create dom snapshot.xml --redefine [--current]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-managing_snapshots-snapshot_edit_domain |
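For illustration only, with a hypothetical guest and snapshot name, two typical invocations are:

# edit the XML of the current snapshot and keep it current
virsh snapshot-edit rhel-guest --current
# edit a named snapshot and save it under the new name entered in the editor
virsh snapshot-edit rhel-guest before-upgrade --rename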
6.3.3. Use a Boot Option to Specify a Driver Update Disk | 6.3.3. Use a Boot Option to Specify a Driver Update Disk Important This method only works to introduce completely new drivers, not to update existing drivers. Type linux dd at the boot prompt at the start of the installation process and press Enter . The installer prompts you to confirm that you have a driver disk: Figure 6.6. The driver disk prompt Insert the driver update disk that you created on CD, DVD, or USB flash drive and select Yes . The installer examines the storage devices that it can detect. If there is only one possible location that could hold a driver disk (for example, the installer detects the presence of a DVD drive, but no other storage devices) it will automatically load any driver updates that it finds at this location. If the installer finds more than one location that could hold a driver update, it prompts you to specify the location of the update. See Section 6.4, "Specifying the Location of a Driver Update Image File or a Driver Update Disk" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-driver_updates-use_a_boot_option_to_specify_a_driver_update_disk-x86 |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, open a Jira issue that describes your concerns. Provide as much detail as possible so that your request can be addressed quickly. Prerequisites You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure To provide your feedback, perform the following steps: Click the following link: Create Issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide more details about the issue. Include the URL where you found the issue. Provide information for any other required fields. Allow all fields that contain default information to remain at the defaults. Click Create to create the Jira issue for the documentation team. A documentation issue will be created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/red_hat_discovery_release_notes/proc-providing-feedback-on-redhat-documentation |
function::user_string_n_quoted | function::user_string_n_quoted Name function::user_string_n_quoted - Retrieves and quotes string from user space Synopsis Arguments addr the user space address to retrieve the string from n the maximum length of the string (if not null terminated) Description Returns up to n characters of a C string from the given user space memory address where any ASCII characters that are not printable are replaced by the corresponding escape sequence in the returned string. Note that the string will be surrounded by double quotes. On the rare cases when userspace data is not accessible at the given address, the address itself is returned as a string, without double quotes. | [
"user_string_n_quoted:string(addr:long,n:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string-n-quoted |
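As a usage sketch, the function can be combined with a kernel probe to print the file names that processes open. The probe point and the $filename argument are assumptions that depend on the kernel version, so treat this as illustrative only:

# open-trace.stp
probe kernel.function("do_sys_open")
{
  printf("%s(%d) opened %s\n", execname(), pid(),
         user_string_n_quoted($filename, 64))
}

Run it with stap open-trace.stp and stop it with Ctrl+C.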
Chapter 55. BrokerCapacity schema reference | Chapter 55. BrokerCapacity schema reference Used in: CruiseControlSpec Property Description disk The disk property has been deprecated. The Cruise Control disk capacity setting has been deprecated, is ignored, and will be removed in the future Broker capacity for disk in bytes. Use a number value with either standard OpenShift byte units (K, M, G, or T), their bibyte (power of two) equivalents (Ki, Mi, Gi, or Ti), or a byte value with or without E notation. For example, 100000M, 100000Mi, 104857600000, or 1e+11. string cpuUtilization The cpuUtilization property has been deprecated. The Cruise Control CPU capacity setting has been deprecated, is ignored, and will be removed in the future Broker capacity for CPU resource utilization as a percentage (0 - 100). integer cpu Broker capacity for CPU resource in cores or millicores. For example, 1, 1.500, 1500m. For more information on valid CPU resource units see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu . string inboundNetwork Broker capacity for inbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. string outboundNetwork Broker capacity for outbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. string overrides Overrides for individual brokers. The overrides property lets you specify a different capacity configuration for different brokers. BrokerCapacityOverride array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-BrokerCapacity-reference |
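A sketch of how these properties are typically set inside a Kafka custom resource; the spec.cruiseControl placement and the broker IDs are assumptions for illustration:

spec:
  cruiseControl:
    brokerCapacity:
      cpu: "1"
      inboundNetwork: 10000KiB/s
      outboundNetwork: 10000KiB/s
      overrides:
        - brokers: [0]
          cpu: "2.5"
          inboundNetwork: 20000KiB/s
          outboundNetwork: 20000KiB/s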
Managing certificates in IdM | Managing certificates in IdM Red Hat Enterprise Linux 8 Issuing certificates, configuring certificate-based authentication, and controlling certificate validity Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/index |
Chapter 18. Configuring Remoting | Chapter 18. Configuring Remoting 18.1. About the remoting subsystem The remoting subsystem allows you to configure inbound and outbound connections for local and remote services as well as the settings for those connections. The JBoss Remoting project includes the following configurable elements: the endpoint, connectors, http-connector, and a series of local and remote connection URIs. For the majority of use cases, you might not need to configure the remoting subsystem. If you use custom connectors for your application, you must configure the remoting subsystem. Applications that act as remoting clients, such as Jakarta Enterprise Beans, need separate configuration to connect to a specific connector. Default Remoting Subsystem Configuration <subsystem xmlns="urn:jboss:domain:remoting:4.0"> <endpoint/> <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/> </subsystem> See Remoting Subsystem Attributes for a full list of the attributes available for the remoting subsystem. The remoting endpoint The remoting endpoint uses the XNIO worker declared and configured by the io subsystem. See Configuring the Endpoint for details on how to configure the remoting endpoint. connector The connector is the main remoting configuration element of the JBoss Remoting project, which is used to allow external clients to connect to the server on a given port. Clients that require a connection to the server through a connector must use the Remoting remote protocol in the URL referring to the server, for example, remote://localhost:4447. You can configure multiple connectors. Each connector consists of a <connector> element with several sub-elements and few other attributes, such as socket-binding and ssl-context . Several JBoss EAP subsystems can use the default connector. Specific settings for the elements and attributes of your custom connectors depend on your applications. Contact Red Hat Global Support Services for more information. See Configuring a Connector for details on how to configure connectors. http-connector The http-connector element is a special connector configuration element. An external client can use this element to connect to the server by using the HTTP upgrade feature of undertow . With this configuration, the client first uses the HTTP protocol to establish a connection with a server and then uses the remote protocol over the same connection. This helps clients that use different protocols to connect over the same port, such as the default port 8080 of undertow . Connecting over the same port reduces the number of open ports on the server. Clients that require a connection to the server through HTTP upgrade must use the remoting remote+http protocol for unencrypted connections or the remoting remote+https protocol for encrypted connections. Outbound connections You can specify three different types of outbound connections: An outbound connection , specified by a URI A local outbound connection , which connects to a local resource such as a socket A remote outbound connection , which connects to a remote resource and authenticates using a security realm Additional configuration Remoting depends on several elements that are configured outside of the remoting subsystem, such as the network interface and IO worker. For more information, see Additional Remoting Configuration . 18.2. Configuring the Endpoint Important In JBoss EAP 6, the worker thread pool was configured directly in the remoting subsystem. 
In JBoss EAP 7, the remoting endpoint configuration references a worker from the io subsystem. JBoss EAP provides the following endpoint configuration by default. <subsystem xmlns="urn:jboss:domain:remoting:4.0"> <endpoint/> ... </subsystem> Updating the Existing Endpoint Configuration Creating a New Endpoint Configuration Deleting an Endpoint Configuration See Endpoint Attributes for a full list of the attributes available for the endpoint configuration. 18.3. Configuring a Connector The connector is the main configuration element relating to remoting and contains several sub-elements for additional configuration. Updating the Existing Connector Configuration Creating a New Connector Deleting a Connector For a full list of the attributes available for configuring a connector, please see the Remoting Subsystem Attributes section. 18.4. Configuring an HTTP Connector The HTTP connector provides the configuration for the HTTP upgrade-based remoting connector. JBoss EAP provides the following http-connector configuration by default. <subsystem xmlns="urn:jboss:domain:remoting:4.0"> ... <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/> </subsystem> By default, this HTTP connector connects to an HTTP listener named default that is configured in the undertow subsystem. For more information, see Configuring the Web Server (Undertow) . Updating the Existing HTTP Connector Configuration Creating a New HTTP Connector Deleting an HTTP Connector See Connector Attributes for a full list of the attributes available for configuring an HTTP connector. 18.5. Configuring an Outbound Connection An outbound connection is a generic remoting outbound connection that is fully specified by a URI. Updating an Existing Outbound Connection Creating a New Outbound Connection Deleting an Outbound Connection See Outbound Connection Attributes for a full list of the attributes available for configuring an outbound connection. 18.6. Configuring a Remote Outbound Connection A remote outbound connection is specified by a protocol, an outbound socket binding, a username and a security realm. The protocol can be either remote , http-remoting or https-remoting . Updating an Existing Remote Outbound Connection Creating a New Remote Outbound Connection Deleting a Remote Outbound Connection See Remote Outbound Connection Attributes for a full list of the attributes available for configuring a remote outbound connection. 18.7. Configuring a Local Outbound Connection A local outbound connection is a remoting outbound connection with a protocol of local , specified only by an outbound socket binding. Updating an Existing Local Outbound Connection Creating a New Local Outbound Connection Deleting a Local Outbound Connection See Local Outbound Connection Attributes for a full list of the attributes available for configuring a local outbound connection. 18.8. Additional Remoting Configuration There are several remoting elements that are configured outside of the remoting subsystem. IO worker Use the following command to set the IO worker for remoting: See Configuring a Worker for details on how to configure an IO worker. Network interface The network interface used by the remoting subsystem is the public interface. This interface is also used by several other subsystems, so exercise caution when modifying it. 
<interfaces> <interface name="management"> <inet-address value="USD{jboss.bind.address.management:127.0.0.1}"/> </interface> <interface name="public"> <inet-address value="USD{jboss.bind.address:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="USD{jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> In a managed domain, the public interface is defined per host in its host.xml file. Socket binding The default socket binding used by the remoting subsystem binds to port 8080 . For more information about socket binding and socket binding groups, see Socket Bindings . Secure transport configuration Remoting transports use STARTTLS to use a secure connection, such as HTTPS, Secure Servlet, if the client requests it. The same socket binding, or network port, is used for secured and unsecured connections, so no additional server-side configuration is necessary. The client requests the secure or unsecured transport, as its needs dictate. JBoss EAP components that use remoting, such as Jakarta Enterprise Beans, ORB, and the Java Messaging Service provider, request secured interfaces by default. Warning STARTTLS works by activating a secure connection if the client requests it, and otherwise defaults to an unsecured connection. It is inherently susceptible to a man-in-the-middle exploit, where an attacker intercepts the request of the client and modifies it to request an unsecured connection. Clients must be written to fail appropriately if they do not receive a secure connection, unless an unsecured connection is an appropriate fall-back. | [
"<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <endpoint/> <http-connector name=\"http-remoting-connector\" connector-ref=\"default\" security-realm=\"ApplicationRealm\"/> </subsystem>",
"<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <endpoint/> </subsystem>",
"/subsystem=remoting/configuration=endpoint:write-attribute(name=authentication-retries,value=2)",
"reload",
"/subsystem=remoting/configuration=endpoint:add",
"/subsystem=remoting/configuration=endpoint:remove",
"reload",
"/subsystem=remoting/connector=new-connector:write-attribute(name=socket-binding,value=my-socket-binding)",
"reload",
"/subsystem=remoting/connector=new-connector:add(socket-binding=my-socket-binding)",
"/subsystem=remoting/connector=new-connector:remove",
"reload",
"<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <http-connector name=\"http-remoting-connector\" connector-ref=\"default\" security-realm=\"ApplicationRealm\"/> </subsystem>",
"/subsystem=remoting/http-connector=new-connector:write-attribute(name=connector-ref,value=new-connector-ref)",
"reload",
"/subsystem=remoting/http-connector=new-connector:add(connector-ref=default)",
"/subsystem=remoting/http-connector=new-connector:remove",
"/subsystem=remoting/outbound-connection=new-outbound-connection:write-attribute(name=uri,value=http://example.com)",
"/subsystem=remoting/outbound-connection=new-outbound-connection:add(uri=http://example.com)",
"/subsystem=remoting/outbound-connection=new-outbound-connection:remove",
"/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:write-attribute(name=outbound-socket-binding-ref,value=outbound-socket-binding)",
"/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:add(outbound-socket-binding-ref=outbound-socket-binding)",
"/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:remove",
"/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:write-attribute(name=outbound-socket-binding-ref,value=outbound-socket-binding)",
"/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:add(outbound-socket-binding-ref=outbound-socket-binding)",
"/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:remove",
"/subsystem=remoting/configuration=endpoint:write-attribute(name=worker, value= WORKER_NAME )",
"<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> <interface name=\"unsecure\"> <inet-address value=\"USD{jboss.bind.address.unsecure:127.0.0.1}\"/> </interface> </interfaces>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/configuring_remoting |
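The remote outbound connection commands above reference an outbound socket binding without showing one being created. A combined sketch, with placeholder host and binding names, could look like this:

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-ejb:add(host=app2.example.com, port=8080)
/subsystem=remoting/remote-outbound-connection=remote-ejb-connection:add(outbound-socket-binding-ref=remote-ejb)
reload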
Chapter 1. Introduction to security | Chapter 1. Introduction to security Use the tools provided with Red Hat Openstack Platform (RHOSP) to prioritize security in planning, and in operations, to meet users' expectations of privacy and the security of their data. Failure to implement security standards can lead to downtime or data breaches. Your use case might be subject to laws that require passing audits and compliance processes. For information about hardening Ceph, see Data security and hardening guide . Note Follow the instructions in this guide to harden the security of your environment. However, these recommendations do not guarantee security or compliance. You must assess security from the unique requirements of your environment. 1.1. Red Hat OpenStack Platform security By default, Red Hat OpenStack Platform (RHOSP) director creates the overcloud with the following tools and access controls for security: SElinux SELinux provides security enhancement for RHOSP by providing access controls that require each process to have explicit permissions for every action. Podman Podman as a container tool is a secure option for RHOSP as it does not use a client/server model that requires processes with root access to function. System access restriction You can only log into overcloud nodes using either the SSH key that director creates for tripleo-admin during the overcloud deployment, or a SSH key that you have created on the overcloud. You cannot use SSH with a password to log into overcloud nodes, or log into overcloud nodes using root. You can configure director with the following additional security features based on the needs and trust level of your organization: Public TLS and TLS-everywhere Hardware security module integration with OpenStack Key Manager (barbican) Signed images and encrypted volumes Password and fernet key rotation using workflow executions 1.2. Understanding the Red Hat OpenStack Platform admin role When you assign a user the role of admin , this user has permissions to view, change, create, or delete any resource on any project. This user can create shared resources that are accessible across projects, such as publicly available glance images, or provider networks. Additionally, a user with the admin role can create or delete users and manage roles. The project to which you assign a user the admin role is the default project in which openstack commands are executed. For example, if an admin user in a project named development runs the following command, a network called internal-network is created in the development project: The admin user can create an internal-network in any project by using the --project parameter: 1.3. Identifying security zones in Red Hat OpenStack Platform Security zones are resources, applications, networks and servers that share common security concerns. Design security zones so to have common authentication and authorization requirements, and users. You can define your own security zones to be as granular as needed based on the architecture of your cloud, the level of acceptable trust in your environment, and your organization's standardized requirements. The zones and their trust requirements can vary depending upon whether the cloud instance is public, private, or hybrid. For example, a you can segment a default installation of Red Hat OpenStack Platform into the following zones: Table 1.1. 
Security zones Zone Networks Details Public external The public zone hosts the external networks, public APIs, and floating IP addresses for the external connectivity of instances. This zone allows access from networks outside of your administrative control and is an untrusted area of the cloud infrastructure. Guest tenant The guest zone hosts project networks. It is untrusted for public and private cloud providers that allow unrestricted access to instances. Storage access storage, storage_mgmt The storage access zone is for storage management, monitoring and clustering, and storage traffic. Control ctlplane, internal_api, ipmi The control zone also includes the undercloud, host operating system, server hardware, physical networking, and the Red Hat OpenStack Platform director control plane. 1.4. Locating security zones in Red Hat OpenStack Platform Run the following commands to collect information on the physical configuration of your Red Hat OpenStack Platform deployment: Prerequisites You have an installed Red Hat OpenStack Platform environment. You are logged into the director as stack. Procedure Source stackrc : Run openstack subnet list to match the assigned ip networks to their associated zones: Run openstack server list to list the physical servers in your infrastructure: Use the ctlplane address from the openstack server list command to query the configuration of a physical node: 1.5. Connecting security zones You must carefully configure any component that spans multiple security zones with varying trust levels or authentication requirements. These connections are often the weak points in network architecture. Ensure that you configure these connections to meet the security requirements of the highest trust level of any of the zones being connected. In many cases, the security controls of the connected zones are a primary concern due to the likelihood of attack. The points where zones meet present an additional potential point of attack and adds opportunities for attackers to migrate their attack to more sensitive parts of the deployment. In some cases, OpenStack operators might want to consider securing the integration point at a higher standard than any of the zones in which it resides. Given the above example of an API endpoint, an adversary could potentially target the Public API endpoint from the public zone, leveraging this foothold in the hopes of compromising or gaining access to the internal or admin API within the management zone if these zones were not completely isolated. The design of OpenStack is such that separation of security zones is difficult. Because core services will usually span at least two zones, special consideration must be given when applying security controls to them. 1.6. Threat mitigation Most types of cloud deployment, public, private, or hybrid, are exposed to some form of security threat. The following practices help mitigate security threats: Apply the principle of least privilege. Use encryption on internal and external interfaces. Use centralized identity management. Keep Red Hat OpenStack Platform updated. Compute services can provide malicious actors with a tool for DDoS and brute force attacks. Methods of prevention include egress security groups, traffic inspection, intrusion detection systems, and customer education and awareness. For deployments accessible by public networks or with access to public networks, such as the Internet, ensure that processes and infrastructure are in place to detect and address outbound abuse. 
Additional resources Implementing TLS-e with Ansible Integrating OpenStack Identity (keystone) with Red Hat Identity Manager (IdM) Keeping Red Hat OpenStack Platform Updated | [
"openstack network create internal-network",
"openstack network create internal-network --project testing",
"source /home/stack/stackrc",
"openstack subnet list -c Name -c Subnet +---------------------+------------------+ | Name | Subnet | +---------------------+------------------+ | ctlplane-subnet | 192.168.101.0/24 | | storage_mgmt_subnet | 172.16.105.0/24 | | tenant_subnet | 172.16.102.0/24 | | external_subnet | 10.94.81.0/24 | | internal_api_subnet | 172.16.103.0/24 | | storage_subnet | 172.16.104.0/24 | +---------------------+------------------+",
"openstack server list -c Name -c Networks +-------------------------+-------------------------+ | Name | Networks | +-------------------------+-------------------------+ | overcloud-controller-0 | ctlplane=192.168.101.15 | | overcloud-controller-1 | ctlplane=192.168.101.19 | | overcloud-controller-2 | ctlplane=192.168.101.14 | | overcloud-novacompute-0 | ctlplane=192.168.101.18 | | overcloud-novacompute-2 | ctlplane=192.168.101.17 | | overcloud-novacompute-1 | ctlplane=192.168.101.11 | +-------------------------+-------------------------+",
"ssh [email protected] ip addr"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/introduction |
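For example, granting and then reviewing the admin role described in section 1.2 might look like this, with placeholder user and project names:

openstack role add --user alice --project development admin
openstack role assignment list --user alice --names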
Chapter 54. Jasypt | Chapter 54. Jasypt Since Camel 2.5 Jasypt is a simplified encryption library which makes encryption and decryption easy. Camel integrates with Jasypt to allow sensitive information in Properties files to be encrypted. By dropping camel-jasypt on the classpath those encrypted values will automatically be decrypted on-the-fly by Camel. This ensures that human eyes can't easily spot sensitive information such as usernames and passwords. 54.1. Dependencies When using camel-jasypt with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jasypt-starter</artifactId> </dependency> 54.2. Tooling The Jasypt component is a runnable JAR that provides a command line utility to encrypt or decrypt values. The usage documentation can be output to the console to describe the syntax and options it provides: Apache Camel Jasypt takes the following options -h or -help = Displays the help screen -c or -command <command> = Command either encrypt or decrypt -p or -password <password> = Password to use -i or -input <input> = Text to encrypt or decrypt -a or -algorithm <algorithm> = Optional algorithm to use -rsga or -algorithm <algorithm> = Optional random salt generator algorithm to use -riga or -algorithm <algorithm> = Optional random iv generator algorithm to use A simple way of running the tool is with JBang. For example, to encrypt the value tiger , you can use the following parameters. Make sure to specify the version of camel-jasypt that you want to use. USD jbang org.apache.camel:camel-jasypt:<camel version here> -c encrypt -p secret -i tiger Which outputs the following result Encrypted text: qaEEacuW7BUti8LcMgyjKw== This means the encrypted representation qaEEacuW7BUti8LcMgyjKw== can be decrypted back to tiger if you know the master password which was secret . If you run the tool again then the encrypted value will return a different result. But decrypting the value will always return the correct original value. You can test decrypting the value by running the tooling using the following parameters: USD jbang org.apache.camel:camel-jasypt:<camel version here> -c decrypt -p secret -i qaEEacuW7BUti8LcMgyjKw== Which outputs the following result: Decrypted text: tiger The idea is then to use those encrypted values in your Properties files. For example, # Encrypted value for 'tiger' my.secret = ENC(qaEEacuW7BUti8LcMgyjKw==) 54.3. Protecting the master password The master password used by Jasypt must be provided, so that it's capable of decrypting the values. However, having this master password out in the open may not be an ideal solution. Therefore, you could for example provide it as a JVM system property or as an OS environment setting. If you decide to do so then the password option supports prefixes that dictates this. sysenv: means to lookup the OS system environment with the given key. sys: means to lookup a JVM system property. For example, you could provide the password before you start the application USD export CAMEL_ENCRYPTION_PASSWORD=secret Then start the application, such as running the start script. When the application is up and running you can unset the environment USD unset CAMEL_ENCRYPTION_PASSWORD On runtimes like Spring Boot and Quarkus, you can configure a password property in the application.properties file as follows. 
password=sysenv:CAMEL_ENCRYPTION_PASSWORD Or if configuring JasyptPropertiesParser manually, you can set the password like this. jasyptPropertiesParser.setPassword("sysenv:CAMEL_ENCRYPTION_PASSWORD"); 54.4. Example with Java DSL On the Spring Boot and Quarkus runtimes, Camel Jasypt can be configured via configuration properties. Refer to their respective documentation pages for more information. In Java DSL you need to configure Jasypt as a JasyptPropertiesParser instance and set the properties in the Properties component as shown below: // create the jasypt properties parser JasyptPropertiesParser jasypt = new JasyptPropertiesParser(); // set the master password (see above for how to do this in a secure way) jasypt.setPassword("secret"); // create the properties' component PropertiesComponent pc = new PropertiesComponent(); pc.setLocation("classpath:org/apache/camel/component/jasypt/secret.properties"); // and use the jasypt properties parser, so we can decrypt values pc.setPropertiesParser(jasypt); // end enable nested placeholder support pc.setNestedPlaceholder(true); // add properties component to camel context context.setPropertiesComponent(pc); It is possible to configure custom algorithms on the JasyptPropertiesParser like this. JasyptPropertiesParser jasyptPropertiesParser = new JasyptPropertiesParser(); jasyptPropertiesParser.setAlgorithm("PBEWithHmacSHA256AndAES_256"); jasyptPropertiesParser.setRandomSaltGeneratorAlgorithm("PKCS11"); jasyptPropertiesParser.setRandomIvGeneratorAlgorithm("PKCS11"); The properties file secret.properties will contain your encrypted configuration values, such as shown below. Notice how the password value is encrypted and is surrounded like ENC(value here). my.secret.password=ENC(bsW9uV37gQ0QHFu7KO03Ww==) 54.5. Example with Spring XML In Spring XML you need to configure the JasyptPropertiesParser which is shown below. Then the Camel Properties component is told to use jasypt as the properties parser, which means Jasypt has its chance to decrypt values looked up in the properties. <!-- define the jasypt properties parser with the given password to be used --> <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser"> <property name="password" value="secret"/> </bean> <!-- define the camel properties component --> <bean id="properties" class="org.apache.camel.component.properties.PropertiesComponent"> <!-- the properties file is in the classpath --> <property name="location" value="classpath:org/apache/camel/component/jasypt/secret.properties"/> <!-- and let it leverage the jasypt parser --> <property name="propertiesParser" ref="jasypt"/> <!-- end enable nested placeholder --> <property name="nestedPlaceholder" value="true"/> </bean> The Properties component can also be inlined inside the <camelContext> tag which is shown below. Notice how we use the propertiesParserRef attribute to refer to Jasypt. 
<!-- define the jasypt properties parser with the given password to be used --> <bean id="jasypt" class="org.apache.camel.component.jasypt.JasyptPropertiesParser"> <!-- password is mandatory, you can prefix it with sysenv: or sys: to indicate it should use an OS environment or JVM system property value, so you dont have the master password defined here --> <property name="password" value="secret"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id="properties" location="classpath:org/apache/camel/component/jasypt/myproperties.properties" nestedPlaceholder="true" propertiesParserRef="jasypt"/> <route> <from uri="direct:start"/> <to uri="{{cool.result}}"/> </route> </camelContext> 54.6. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.component.jasypt.algorithm The algorithm to be used for decryption. PBEWithMD5AndDES String camel.component.jasypt.enabled Enable the component. false Boolean camel.component.jasypt.iv-generator-class-name The initialization vector (IV) generator applied in decryption operations. Default: org.jasypt.iv. String camel.component.jasypt.password The master password used by Jasypt for decrypting the values. This option supports prefixes which influence the master password lookup behaviour: sysenv: means to lookup the OS system environment with the given key. sys: means to lookup a JVM system property. String camel.component.jasypt.provider-name The class name of the security provider to be used for obtaining the encryption algorithm. String camel.component.jasypt.random-iv-generator-algorithm The algorithm for the random iv generator. SHA1PRNG String camel.component.jasypt.random-salt-generator-algorithm The algorithm for the salt generator. SHA1PRNG String camel.component.jasypt.salt-generator-class-name The salt generator applied in decryption operations. Default: org.jasypt.salt.RandomSaltGenerator. org.jasypt.salt.RandomSaltGenerator String 4.0// ParentAssemblies: assemblies/ | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jasypt-starter</artifactId> </dependency>",
"Apache Camel Jasypt takes the following options -h or -help = Displays the help screen -c or -command <command> = Command either encrypt or decrypt -p or -password <password> = Password to use -i or -input <input> = Text to encrypt or decrypt -a or -algorithm <algorithm> = Optional algorithm to use -rsga or -algorithm <algorithm> = Optional random salt generator algorithm to use -riga or -algorithm <algorithm> = Optional random iv generator algorithm to use",
"jbang org.apache.camel:camel-jasypt:<camel version here> -c encrypt -p secret -i tiger",
"Encrypted text: qaEEacuW7BUti8LcMgyjKw==",
"jbang org.apache.camel:camel-jasypt:<camel version here> -c decrypt -p secret -i qaEEacuW7BUti8LcMgyjKw==",
"Decrypted text: tiger",
"Encrypted value for 'tiger' my.secret = ENC(qaEEacuW7BUti8LcMgyjKw==)",
"export CAMEL_ENCRYPTION_PASSWORD=secret",
"unset CAMEL_ENCRYPTION_PASSWORD",
"password=sysenv:CAMEL_ENCRYPTION_PASSWORD",
"jasyptPropertiesParser.setPassword(\"sysenv:CAMEL_ENCRYPTION_PASSWORD\");",
"// create the jasypt properties parser JasyptPropertiesParser jasypt = new JasyptPropertiesParser(); // set the master password (see above for how to do this in a secure way) jasypt.setPassword(\"secret\"); // create the properties' component PropertiesComponent pc = new PropertiesComponent(); pc.setLocation(\"classpath:org/apache/camel/component/jasypt/secret.properties\"); // and use the jasypt properties parser, so we can decrypt values pc.setPropertiesParser(jasypt); // end enable nested placeholder support pc.setNestedPlaceholder(true); // add properties component to camel context context.setPropertiesComponent(pc);",
"JasyptPropertiesParser jasyptPropertiesParser = new JasyptPropertiesParser(); jasyptPropertiesParser.setAlgorithm(\"PBEWithHmacSHA256AndAES_256\"); jasyptPropertiesParser.setRandomSaltGeneratorAlgorithm(\"PKCS11\"); jasyptPropertiesParser.setRandomIvGeneratorAlgorithm(\"PKCS11\");",
"my.secret.password=ENC(bsW9uV37gQ0QHFu7KO03Ww==)",
"<!-- define the jasypt properties parser with the given password to be used --> <bean id=\"jasypt\" class=\"org.apache.camel.component.jasypt.JasyptPropertiesParser\"> <property name=\"password\" value=\"secret\"/> </bean> <!-- define the camel properties component --> <bean id=\"properties\" class=\"org.apache.camel.component.properties.PropertiesComponent\"> <!-- the properties file is in the classpath --> <property name=\"location\" value=\"classpath:org/apache/camel/component/jasypt/secret.properties\"/> <!-- and let it leverage the jasypt parser --> <property name=\"propertiesParser\" ref=\"jasypt\"/> <!-- end enable nested placeholder --> <property name=\"nestedPlaceholder\" value=\"true\"/> </bean>",
"<!-- define the jasypt properties parser with the given password to be used --> <bean id=\"jasypt\" class=\"org.apache.camel.component.jasypt.JasyptPropertiesParser\"> <!-- password is mandatory, you can prefix it with sysenv: or sys: to indicate it should use an OS environment or JVM system property value, so you dont have the master password defined here --> <property name=\"password\" value=\"secret\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <!-- define the camel properties placeholder, and let it leverage jasypt --> <propertyPlaceholder id=\"properties\" location=\"classpath:org/apache/camel/component/jasypt/myproperties.properties\" nestedPlaceholder=\"true\" propertiesParserRef=\"jasypt\"/> <route> <from uri=\"direct:start\"/> <to uri=\"{{cool.result}}\"/> </route> </camelContext>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jasypt-component-starter |
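Putting the Spring Boot options from the table together, a minimal application.properties sketch, assuming the encrypted value from the earlier example and the default algorithm, could be:

# enable the Jasypt component and point it at the master password
camel.component.jasypt.enabled = true
camel.component.jasypt.password = sysenv:CAMEL_ENCRYPTION_PASSWORD
camel.component.jasypt.algorithm = PBEWithMD5AndDES
# encrypted property that routes can reference as {{my.secret.password}}
my.secret.password = ENC(bsW9uV37gQ0QHFu7KO03Ww==)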
Chapter 7. Running and interpreting hardware and firmware latency tests | Chapter 7. Running and interpreting hardware and firmware latency tests With the hwlatdetect program, you can test and verify if a potential hardware platform is suitable for using real-time operations. Prerequisites Ensure that the RHEL-RT (RHEL for Real Time) and realtime-tests packages are installed. Check the vendor documentation for any tuning steps required for low latency operation. The vendor documentation can provide instructions to reduce or remove any System Management Interrupts (SMIs) that would transition the system into System Management Mode (SMM). While a system is in SMM, it runs firmware and not operating system code. This means that any timers that expire while in SMM wait until the system transitions back to normal operation. This can cause unexplained latencies, because SMIs cannot be blocked by Linux, and the only indication that we actually took an SMI can be found in vendor-specific performance counter registers. Warning Red Hat strongly recommends that you do not completely disable SMIs, as it can result in catastrophic hardware failure. 7.1. Running hardware and firmware latency tests It is not required to run any load on the system while running the hwlatdetect program, because the test looks for latencies introduced by the hardware architecture or BIOS or EFI firmware. The default values for hwlatdetect are to poll for 0.5 seconds each second, and report any gaps greater than 10 microseconds between consecutive calls to fetch the time. hwlatdetect returns the best maximum latency possible on the system. Therefore, if you have an application that requires maximum latency values of less than 10us and hwlatdetect reports one of the gaps as 20us, then the system can only guarantee latency of 20us. Note If hwlatdetect shows that the system cannot meet the latency requirements of the application, try changing the BIOS settings or working with the system vendor to get new firmware that meets the latency requirements of the application. Prerequisites Ensure that the RHEL-RT and realtime-tests packages are installed. Procedure Run hwlatdetect , specifying the test duration in seconds. hwlatdetect looks for hardware and firmware-induced latencies by polling the clock-source and looking for unexplained gaps. Additional resources hwlatdetect man page on your system Interpreting hardware and firmware latency tests 7.2. Interpreting hardware and firmware latency test results The hardware latency detector ( hwlatdetect ) uses the tracer mechanism to detect latencies introduced by the hardware architecture or BIOS/EFI firmware. By checking the latencies measured by hwlatdetect , you can determine if a potential hardware is suitable to support the RHEL for Real Time kernel. Examples The example result represents a system tuned to minimize system interruptions from firmware. In this situation, the output of hwlatdetect looks like this: The example result represents a system that could not be tuned to minimize system interruptions from firmware. In this situation, the output of hwlatdetect looks like this: The output shows that during the consecutive reads of the system clocksource , there were 10 delays that showed up in the 15-18 us range. Note versions used a kernel module rather than the ftrace tracer. Understanding the results The information on testing method, parameters, and results helps you understand the latency parameters and the latency values detected by the hwlatdetect utility. 
The table for Testing method, parameters, and results, lists the parameters and the latency values detected by the hwlatdetect utility. Table 7.1. Testing method, parameters, and results Parameter Value Description test duration 10 seconds The duration of the test in seconds detector tracer The utility that runs the detector thread parameters Latency threshold 10us The maximum allowable latency Sample window 1000000us 1 second Sample width 500000us 0.05 seconds Non-sampling period 500000us 0.05 seconds Output File None The file to which the output is saved. Results Max Latency 18us The highest latency during the test that exceeded the Latency threshold . If no sample exceeded the Latency threshold , the report shows Below threshold . Samples recorded 10 The number of samples recorded by the test. Samples exceeding threshold 10 The number of samples recorded by the test where the latency exceeded the Latency threshold . SMIs during run 0 The number of System Management Interrupts (SMIs) that occurred during the test run. Note The values printed by the hwlatdetect utility for inner and outer are the maximum latency values. They are deltas between consecutive reads of the current system clocksource (usually the TSC or TSC register, but potentially the HPET or ACPI power management clock) and any delays between consecutive reads introduced by the hardware-firmware combination. After finding the suitable hardware-firmware combination, the step is to test the real-time performance of the system while under a load. | [
"hwlatdetect --duration=60s hwlatdetect: test duration 60 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 500000us Non-sampling period: 500000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 Samples exceeding threshold: 0",
"hwlatdetect --duration=60s hwlatdetect: test duration 60 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 500000us Non-sampling period: 500000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 Samples exceeding threshold: 0",
"hwlatdetect --duration=10s hwlatdetect: test duration 10 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 500000us Non-sampling period: 500000us Output File: None Starting test test finished Max Latency: 18us Samples recorded: 10 Samples exceeding threshold: 10 SMIs during run: 0 ts: 1519674281.220664736, inner:17, outer:15 ts: 1519674282.721666674, inner:18, outer:17 ts: 1519674283.722667966, inner:16, outer:17 ts: 1519674284.723669259, inner:17, outer:18 ts: 1519674285.724670551, inner:16, outer:17 ts: 1519674286.725671843, inner:17, outer:17 ts: 1519674287.726673136, inner:17, outer:16 ts: 1519674288.727674428, inner:16, outer:18 ts: 1519674289.728675721, inner:17, outer:17 ts: 1519674290.729677013, inner:18, outer:17----"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_running-and-interpreting-hardware-and-firmware-latency-tests_optimizing-rhel9-for-real-time-for-low-latency-operation |
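If the application's latency budget is not the 10us default, the threshold can be passed explicitly; confirm the option spelling against hwlatdetect --help for the installed realtime-tests version:

hwlatdetect --duration=120 --threshold=15us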
B.6. chkconfig | B.6. chkconfig B.6.1. RHBA-2012:0417 - chkconfig bug fix update Updated chkconfig packages that fix two bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The basic system utility chkconfig updates and queries runlevel information for system services. Bug Fixes BZ# 797840 When installing multiple Linux Standard Base (LSB) services which only had LSB headers, the stop priority of the related LSB init scripts could have been miscalculated and set to "-1". With this update, the LSB init script ordering mechanism has been fixed, and the stop priority of the LSB init scripts is now set correctly. BZ# 797839 When an LSB init script requiring the "USDlocal_fs" facility was installed with the "install_initd" command, the installation of the script could fail under certain circumstances. With this update, the underlying code has been modified to ignore this requirement because the "USDlocal_fs" facility is always implicitly provided. LSB init scripts with requirements on "USDlocal_fs" are now installed correctly. All users of chkconfig are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/chkconfig |
Chapter 23. Installation configuration | Chapter 23. Installation configuration 23.1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 23.1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 23.1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 23.1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 23.1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.10.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 23.1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS is not supported. You need to do some low-level network configuration before the systems start. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 23.1.3. 
Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 23.1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Display the configuration file and confirm that it now references Dockerfile.rhel : USD cat simple-kmod.conf Example configuration file KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of [email protected] for your kernel module, simple-kmod in this example: USD sudo make install Build the kernel module container for the currently running kernel: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable [email protected] --now Review the service status: USD sudo systemctl status [email protected] Example output ● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots, this service checks whether a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 23.1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when the OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory, and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages.
This must include new kernel packages as they are needed to match newly installed kernels. 23.1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install the software that is needed to build the module: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmods-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.10.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the [email protected] service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 23.1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 23.1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2: This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor contained within a server. You can use this mode to prevent the boot disk data on a cluster node from being decrypted if the disk is removed from the server. Tang: Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents the data from being decrypted unless the nodes are on a secure network where the Tang servers can be accessed. Clevis is an automated decryption framework that is used to implement the decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. Note On versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or above, and disk encryption should be configured by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure and user-provisioned infrastructure deployments Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase so all data written to disk, from first boot forward, is encrypted Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 23.1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously, so that the boot disk data can be decrypted only if the TPM secure cryptoprocessor is present and the Tang servers can be accessed over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. For example, the threshold value of 2 in the following configuration can be reached by accessing the two Tang servers, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.10.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 luks: tpm2: true 1 tang: 2 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 3 openshift: fips: true 1 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 2 Include this section if you want to use one or more Tang servers. 3 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. 
Note If you require both TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible for the threshold to be reached by using one of the encryption modes only. For example, if tpm2 is set to true and you specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers even if the TPM secure cryptoprocessor is not available. 23.1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure as long as one device remains available. Mirroring does not support replacement of a failed disk. To restore the mirror to a pristine, non-degraded state, reprovision the node. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 23.1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the BIOS on each node. This is required on most Dell systems. Check the manual for your computer. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command is used in this step only to generate a thumbprint of the exchange key. No data is being passed to the command for encryption at this point, so /dev/null is provided as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Note RHEL 8 provides Clevis version 15, which uses the SHA-1 hash algorithm to generate thumbprints. Some other distributions provide Clevis version 17 or later, which use the SHA-256 hash algorithm for thumbprints. 
You must use a Clevis version that uses SHA-1 to create the thumbprint, to prevent Clevis binding issues when you install Red Hat Enterprise Linux CoreOS (RHCOS) on your OpenShift Container Platform cluster nodes. If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. Butane config example for a boot device variant: openshift version: 4.10.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12 1 2 For control plane configurations, replace worker with master in both of these locations. 3 On ppc64le nodes, set this field to ppc64le . On all other nodes, this field can be omitted. 4 Include this section if you want to encrypt the root file system. For more details, see the About disk encryption section. 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information on this topic, see the Configuring an encryption threshold section. 10 Include this section if you want to mirror the boot disk. For more details, see About disk mirroring . 11 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 12 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. 
In addition, if you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node. The following example starts a debug pod for the compute-1 node: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. 
If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 In the example, the /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 In the example, the /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices that are used by the software RAID device. 
List the file systems that are mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 23.1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. 
To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.10.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.10.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 23.1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.10.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. 
You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 23.1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography . 23.2. Configuring your firewall If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies. 23.2.1. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. There are no special configuration considerations for services running only on controller nodes compared to worker nodes. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Allowlist the following registry URLs: URL Port Function registry.redhat.io 443, 80 Provides core container images access.redhat.com [1] 443, 80 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443, 80 Provides core container images cdn.quay.io 443, 80 Provides core container images cdn01.quay.io 443, 80 Provides core container images cdn02.quay.io 443, 80 Provides core container images cdn03.quay.io 443, 80 Provides core container images sso.redhat.com 443, 80 The https://console.redhat.com/openshift site uses authentication from sso.redhat.com [1] In a firewall environment, ensure that the access.redhat.com resource is on the allowlist. This resource hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com . You can use the wildcard *.quay.io instead of cdn0[1-3].quay.io in your allowlist. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Allowlist any site that provides resources for a language or framework that your builds require.
If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443, 80 Required for Telemetry api.access.redhat.com 443, 80 Required for Telemetry infogw.api.openshift.com 443, 80 Required for Telemetry console.redhat.com/api/ingress , cloud.redhat.com/api/ingress 443, 80 Required for Telemetry and for insights-operator If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that provide the cloud provider API and DNS for that cloud: Cloud URL Port Function Alibaba *.aliyuncs.com 443, 80 Required to access Alibaba Cloud services and resources. Review the Alibaba endpoints_config.go file to determine the exact endpoints to allow for the regions that you use. AWS *.amazonaws.com Alternatively, if you choose to not use a wildcard for AWS APIs, you must allowlist the following URLs: 443, 80 Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to determine the exact endpoints to allow for the regions that you use. ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. s3.dualstack.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1 , regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. servicequotas.<aws_region>.amazonaws.com 443, 80 Required. Used to confirm quotas for deploying the service. tagging.<aws_region>.amazonaws.com 443, 80 Allows the assignment of metadata about AWS resources in the form of tags. GCP *.googleapis.com 443, 80 Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to determine the endpoints to allow for your APIs. accounts.google.com 443, 80 Required to access your GCP account. Azure management.azure.com 443, 80 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. *.blob.core.windows.net 443, 80 Required to download Ignition files. login.microsoftonline.com 443, 80 Required to access Azure services and resources. Review the Azure REST API reference in the Azure documentation to determine the endpoints to allow for your APIs. Allowlist the following URLs: URL Port Function mirror.openshift.com 443, 80 Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. 
storage.googleapis.com/openshift-release 443, 80 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. *.apps.<cluster_name>.<base_domain> 443, 80 Required to access the default cluster routes unless you set an ingress wildcard during installation. quayio-production-s3.s3.amazonaws.com 443, 80 Required to access Quay image content in AWS. api.openshift.com 443, 80 Required both for your cluster token and to check if updates are available for the cluster. rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com , rhcos.mirror.openshift.com 443, 80 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. console.redhat.com/openshift 443, 80 Required for your cluster token. sso.redhat.com 443, 80 The https://console.redhat.com/openshift site uses authentication from sso.redhat.com Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain> , then allow these routes: oauth-openshift.apps.<cluster_name>.<base_domain> console-openshift-console.apps.<cluster_name>.<base_domain> , or the hostname that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty. Allowlist the following URLs for optional third-party content: URL Port Function registry.connect.redhat.com 443, 80 Required for all third-party images and certified operators. rhc4tp-prod-z8cxf-image-registry-us-east-1-evenkyleffocxqvofrk.s3.dualstack.us-east-1.amazonaws.com 443, 80 Provides access to container images hosted on registry.connect.redhat.com oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com 443, 80 Required for Sonatype Nexus, F5 Big IP operators. If you use a default Red Hat Network Time Protocol (NTP) server allow the following URLs: 1.rhel.pool.ntp.org 2.rhel.pool.ntp.org 3.rhel.pool.ntp.org Note If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall. | [
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane",
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane",
"chmod +x butane",
"echo USDPATH",
"butane <butane_file>",
"variant: openshift version: 4.10.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-custom.bu -o ./99-worker-custom.yaml",
"oc create -f 99-worker-custom.yaml",
"./openshift-install create manifests --dir <installation_directory>",
"cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"cd kmods-via-containers/",
"sudo make install",
"sudo systemctl daemon-reload",
"cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"cd kvc-simple-kmod",
"cat simple-kmod.conf",
"KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"",
"sudo make install",
"sudo kmods-via-containers build simple-kmod USD(uname -r)",
"sudo systemctl enable [email protected] --now",
"sudo systemctl status [email protected]",
"● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"dmesg | grep 'Hello world'",
"[ 6420.761332] Hello world from simple_kmod.",
"sudo cat /proc/simple-procfs-kmod",
"simple-procfs-kmod number = 0",
"sudo spkut 44",
"KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"FAKEROOT=USD(mktemp -d)",
"cd kmods-via-containers",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd ../kvc-simple-kmod",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree",
"variant: openshift version: 4.10.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true",
"butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml",
"oc create -f 99-simple-kmod.yaml",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"variant: openshift version: 4.10.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 luks: tpm2: true 1 tang: 2 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 3 openshift: fips: true",
"sudo yum install clevis",
"clevis-encrypt-tang '{\"url\":\"http://tang.example.com:7500\"}' < /dev/null > /dev/null 1",
"The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"variant: openshift version: 4.10.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12",
"butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml",
"oc debug node/compute-1",
"chroot /host",
"cryptsetup status root",
"/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write",
"clevis luks list -d /dev/sda4 1",
"1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1",
"cat /proc/mdstat",
"Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>",
"mdadm --detail /dev/md126",
"/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8",
"mount | grep /dev/md",
"/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)",
"variant: openshift version: 4.10.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true",
"variant: openshift version: 4.10.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true",
"butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1",
"variant: openshift version: 4.10.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/installing/installation-configuration |
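To complement the day-1 kernel arguments example above, which targets control plane nodes, the following is a minimal sketch of the worker-node counterpart. The object name and the loglevel=7 argument are illustrative only; substitute the kernel arguments that your environment actually needs.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # target worker nodes instead of master
  name: 99-openshift-machineconfig-worker-kargs      # hypothetical name used for this example
spec:
  kernelArguments:
    - loglevel=7                                     # illustrative argument only

After the cluster is up, you can confirm that the argument was applied by starting a debug shell on a worker node ( oc debug node/<node_name> , then chroot /host ) and inspecting /proc/cmdline .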
Web console | Web console OpenShift Container Platform 4.15 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/web_console/index |
7.302. cluster | 7.302. cluster 7.302.1. RHBA-2013:1189 - cluster and gfs2-utils bug fix update Updated cluster and gfs2-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Using redundant hardware, shared disk storage, power management, and robust cluster communication and application failover mechanisms, a cluster can meet the needs of the enterprise market. Bug Fix BZ# 1001504 Prior to this update, if one of the gfs2_tool, gfs2_quota, gfs2_grow, or gfs2_jadd commands was killed unexpectedly, a temporary GFS2 metadata mount point used by those tools could be left mounted. The mount point was also not registered in the /etc/mtab file, and so the "umount -a -t gfs2" command would not unmount it. This mount point could prevent systems from rebooting properly, and cause the kernel to panic in cases where it was manually unmounted after the normal GFS2 mount point. This update corrects the problem by creating an mtab entry for the temporary mount point, which unmounts it before exiting when signals are received. Users of the Red Hat Cluster Manager and GFS2 are advised to upgrade to these updated packages, which fix this bug. 7.302.2. RHBA-2013:1055 - cluster and gfs2-utils bug fix update Updated cluster and gfs2-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Using redundant hardware, shared disk storage, power management, and robust cluster communication and application failover mechanisms, a cluster can meet the needs of the enterprise market. Bug Fix BZ# 982700 Previously, the cman init script did not handle its lock file correctly. During a node reboot, this could have caused the node itself to be evicted from the cluster by other members. With this update, the cman init script now handles the lock file correctly, and no fencing action is taken by other nodes of the cluster. Users of cluster and gfs2-utils are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/cluster |
6.2. Creating an ext3 File System | 6.2. Creating an ext3 File System After installation, it is sometimes necessary to create a new ext3 file system. For example, if you add a new disk drive to the system, you may want to partition the drive and use the ext3 file system. The steps for creating an ext3 file system are as follows: Create the partition using parted or fdisk . Format the partition with the ext3 file system using mkfs . Label the partition using e2label . Create the mount point. Add the partition to the /etc/fstab file. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/The_ext3_File_System-Creating_an_ext3_File_System |
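As a rough illustration of those steps, the following is a minimal sketch that assumes a newly added second drive, /dev/sdb , carrying a single partition that is mounted at /data ; the device name, label, and mount point are hypothetical and must be adapted to your system.

# fdisk /dev/sdb                          # create one Linux partition, /dev/sdb1
# mkfs.ext3 /dev/sdb1                     # format the partition with the ext3 file system
# e2label /dev/sdb1 /data                 # label the partition
# mkdir /data                             # create the mount point
# echo "LABEL=/data  /data  ext3  defaults  1 2" >> /etc/fstab
# mount /data                             # mount it using the new /etc/fstab entry

You can verify the result with df -h /data once the file system is mounted.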
1.2. Power Management Basics | 1.2. Power Management Basics Effective power management is built on the following principles: An idle CPU should only wake up when needed Since Red Hat Enterprise Linux 6, the kernel runs tickless, which means the periodic timer interrupts have been replaced with on-demand interrupts. Therefore, idle CPUs are allowed to remain idle until a new task is queued for processing, and CPUs that have entered lower power states can remain in these states longer. However, benefits from this feature can be offset if your system has applications that create unnecessary timer events. Polling events, such as checks for volume changes or mouse movement, are examples of such events. Red Hat Enterprise Linux 7 includes tools with which you can identify and audit applications on the basis of their CPU usage. Refer to Chapter 2, Power Management Auditing And Analysis for details. Unused hardware and devices should be disabled completely This is especially true for devices that have moving parts (for example, hard disks). In addition to this, some applications may leave an unused but enabled device "open"; when this occurs, the kernel assumes that the device is in use, which can prevent the device from going into a power saving state. Low activity should translate to low wattage In many cases, however, this depends on modern hardware and correct BIOS configuration. Older system components often do not have support for some of the new features that we can now support in Red Hat Enterprise Linux 7. Make sure that you are using the latest official firmware for your systems and that in the power management or device configuration sections of the BIOS the power management features are enabled. Some features to look for include: SpeedStep PowerNow! Cool'n'Quiet ACPI (C state) Smart If your hardware has support for these features and they are enabled in the BIOS, Red Hat Enterprise Linux 7 will use them by default. Different forms of CPU states and their effects Modern CPUs together with Advanced Configuration and Power Interface (ACPI) provide different power states. The three different states are: Sleep (C-states) Frequency and voltage (P-states) A P-state describes the frequency of a processor and its voltage operating point, which are both scaled as the P-state increases. Heat output (T-states or "thermal states") A CPU running in the lowest sleep state possible consumes the least amount of watts, but it also takes considerably more time to wake it up from that state when needed. In very rare cases this can lead to the CPU having to wake up immediately every time it just went to sleep. This situation results in an effectively permanently busy CPU and loses some of the potential power saving that another state would have provided. A turned off machine uses the least amount of power As obvious as this might sound, one of the best ways to actually save power is to turn off systems. For example, your company can develop a corporate culture focused on "green IT" awareness with a guideline to turn off machines during lunch break or when going home. You also might consolidate several physical servers into one bigger server and virtualize them using the virtualization technology we ship with Red Hat Enterprise Linux 7. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/basics
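If you want to observe these C-states and P-states on a running Red Hat Enterprise Linux 7 system, the following is a minimal sketch; it assumes the kernel-tools package, which provides the cpupower and turbostat utilities, is installed, and that all commands are run as root.

# yum install kernel-tools -y     # provides cpupower and turbostat (assumed package name)
# cpupower idle-info              # list the C-states exposed by the processor idle driver
# cpupower frequency-info         # show the scaling driver, governor, and available frequencies (P-states)
# turbostat sleep 10              # sample how long CPUs actually stay in each state over ten seconds

Deep C-state residency in the turbostat output is a quick way to confirm that the principles above, such as avoiding unnecessary wakeups, are actually paying off.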
Chapter 5. Configuring the Network Observability Operator | Chapter 5. Configuring the Network Observability Operator You can update the FlowCollector API resource to configure the Network Observability Operator and its managed components. The FlowCollector is explicitly created during installation. Since this resource operates cluster-wide, only a single FlowCollector is allowed, and it must be named cluster . For more information, see the FlowCollector API reference . 5.1. View the FlowCollector resource You can view and edit YAML directly in the OpenShift Container Platform web console. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. There, you can modify the FlowCollector resource to configure the Network Observability operator. The following example shows a sample FlowCollector resource for OpenShift Container Platform Network Observability operator: Sample FlowCollector resource apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: "3100": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service' 1 The Agent specification, spec.agent.type , must be EBPF . eBPF is the only OpenShift Container Platform supported option. 2 You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Lower sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. The lower the value, the increase in returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. It is recommend to start with default values and refine empirically, to determine which setting your cluster can manage. 3 The Processor specification spec.processor. can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The spec.processor.logTypes value is Flows . The spec.processor.advanced values are Conversations , EndedConversations , or ALL . Storage requirements are highest for All and lowest for EndedConversations . 4 The Loki specification, spec.loki , specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install. 
5 The LokiStack mode automatically sets a few configurations: querierUrl , ingesterUrl and statusUrl , tenantID , and corresponding TLS configuration. Cluster roles and a cluster role binding are created for reading and writing logs to Loki. And authToken is set to Forward . You can set these manually using the Manual mode. 6 The spec.quickFilters specification defines filters that show up in the web console. The Application filter keys, src_namespace and dst_namespace , are negated ( ! ), so the Application filter shows all traffic that does not originate from, or have a destination to, any openshift- or netobserv namespaces. For more information, see Configuring quick filters below. Additional resources FlowCollector API reference Working with conversation tracking 5.2. Configuring the Flow Collector resource with Kafka You can configure the FlowCollector resource to use Kafka for high-throughput and low-latency data feeds. A Kafka instance needs to be running, and a Kafka topic dedicated to OpenShift Container Platform Network Observability must be created in that instance. For more information, see Kafka documentation with AMQ Streams . Prerequisites Kafka is installed. Red Hat supports Kafka with AMQ Streams Operator. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the Network Observability Operator, select Flow Collector . Select the cluster and then click the YAML tab. Modify the FlowCollector resource for OpenShift Container Platform Network Observability Operator to use Kafka, as shown in the following sample YAML: Sample Kafka configuration in FlowCollector resource apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" 2 topic: network-flows 3 tls: enable: false 4 1 Set spec.deploymentModel to Kafka instead of Direct to enable the Kafka deployment model. 2 spec.kafka.address refers to the Kafka bootstrap server address. You can specify a port if needed, for instance kafka-cluster-kafka-bootstrap.netobserv:9093 for using TLS on port 9093. 3 spec.kafka.topic should match the name of a topic created in Kafka. 4 spec.kafka.tls can be used to encrypt all communications to and from Kafka with TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv ) and where the eBPF agents are deployed (default: netobserv-privileged ). It must be referenced with spec.kafka.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.kafka.tls.userCert . 5.3. Export enriched network flow data You can send network flows to Kafka, IPFIX, the Red Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red Hat build of OpenTelemetry, Jaeger, or Prometheus. Prerequisites Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability flowlogs-pipeline pods. Procedure In the web console, navigate to Operators Installed Operators . 
Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Edit the FlowCollector to configure spec.exporters as follows: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: "ipfix-collector.ipfix.svc.cluster.local" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address 1 4 6 You can export flows to IPFIX, OpenTelemetry, and Kafka individually or concurrently. 2 The Network Observability Operator exports all flows to the configured Kafka topic. 3 You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the flowlogs-pipeline processor component is deployed (default: netobserv). It must be referenced with spec.exporters.tls.caCert . When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with spec.exporters.tls.userCert . 5 You have the option to specify transport. The default value is tcp but you can also specify udp . 7 The protocol of OpenTelemetry connection. The available options are http and grpc . 8 OpenTelemetry configuration for exporting logs, which are the same as the logs created for Loki. 9 OpenTelemetry configuration for exporting metrics, which are the same as the metrics created for Prometheus. These configurations are specified in the spec.processor.metrics.includeList parameter of the FlowCollector custom resource, along with any custom metrics you defined using the FlowMetrics custom resource. 10 The time interval that metrics are sent to the OpenTelemetry collector. 11 Optional :Network Observability network flows formats get automatically renamed to an OpenTelemetry compliant format. The fieldsMapping specification gives you the ability to customize the OpenTelemetry format output. For example in the YAML sample, SrcAddr is the Network Observability input field, and it is being renamed source.address in OpenTelemetry output. You can see both Network Observability and OpenTelemetry formats in the "Network flows format reference". After configuration, network flows data can be sent to an available output in a JSON format. For more information, see "Network flows format reference". Additional resources Network flows format reference . 5.4. Updating the Flow Collector resource As an alternative to editing YAML in the OpenShift Container Platform web console, you can configure specifications, such as eBPF sampling, by patching the flowcollector custom resource (CR): Procedure Run the following command to patch the flowcollector CR and update the spec.agent.ebpf.sampling value: USD oc patch flowcollector cluster --type=json -p "[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": <new value>}] -n netobserv" 5.5. Configuring quick filters You can modify the filters in the FlowCollector resource. Exact matches are possible using double-quotes around values. 
Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample FlowCollector resource for more context about modifying the YAML. Note The filter matching types "all of" or "any of" is a UI setting that the users can modify from the query options. It is not part of this resource configuration. Here is a list of all available filter keys: Table 5.1. Filter keys Universal* Source Destination Description namespace src_namespace dst_namespace Filter traffic related to a specific namespace. name src_name dst_name Filter traffic related to a given leaf resource name, such as a specific pod, service, or node (for host-network traffic). kind src_kind dst_kind Filter traffic related to a given resource kind. The resource kinds include the leaf resource (Pod, Service or Node), or the owner resource (Deployment and StatefulSet). owner_name src_owner_name dst_owner_name Filter traffic related to a given resource owner; that is, a workload or a set of pods. For example, it can be a Deployment name, a StatefulSet name, etc. resource src_resource dst_resource Filter traffic related to a specific resource that is denoted by its canonical name, that identifies it uniquely. The canonical notation is kind.namespace.name for namespaced kinds, or node.name for nodes. For example, Deployment.my-namespace.my-web-server . address src_address dst_address Filter traffic related to an IP address. IPv4 and IPv6 are supported. CIDR ranges are also supported. mac src_mac dst_mac Filter traffic related to a MAC address. port src_port dst_port Filter traffic related to a specific port. host_address src_host_address dst_host_address Filter traffic related to the host IP address where the pods are running. protocol N/A N/A Filter traffic related to a protocol, such as TCP or UDP. Universal keys filter for any of source or destination. For example, filtering name: 'my-pod' means all traffic from my-pod and all traffic to my-pod , regardless of the matching type used, whether Match all or Match any . 5.6. Resource management and performance considerations The amount of resources required by Network Observability depends on the size of your cluster and your requirements for the cluster to ingest and store observability data. To manage resources and set performance criteria for your cluster, consider configuring the following settings. Configuring these settings might meet your optimal setup and observability needs. The following settings can help you manage resources and performance from the outset: eBPF Sampling You can set the Sampling specification, spec.agent.ebpf.sampling , to manage resources. Smaller sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. Smaller values result in an increase in returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. Consider starting with the default values and refine empirically, in order to determine which setting your cluster can manage. eBPF features The more features that are enabled, the more CPU and memory are impacted. See "Observing the network traffic" for a complete list of these features. 
Without Loki You can reduce the amount of resources that Network Observability requires by not using Loki and instead relying on Prometheus. For example, when Network Observability is configured without Loki, the total savings of memory usage are in the 20-65% range and CPU utilization is lower by 10-30%, depending upon the sampling value. See "Network Observability without Loki" for more information. Restricting or excluding interfaces Reduce the overall observed traffic by setting the values for spec.agent.ebpf.interfaces and spec.agent.ebpf.excludeInterfaces . By default, the agent fetches all the interfaces in the system, except the ones listed in excludeInterfaces and lo (local interface). Note that the interface names might vary according to the Container Network Interface (CNI) used. Performance fine-tuning The following settings can be used to fine-tune performance after the Network Observability has been running for a while: Resource requirements and limits : Adapt the resource requirements and limits to the load and memory usage you expect on your cluster by using the spec.agent.ebpf.resources and spec.processor.resources specifications. The default limits of 800MB might be sufficient for most medium-sized clusters. Cache max flows timeout : Control how often flows are reported by the agents by using the eBPF agent's spec.agent.ebpf.cacheMaxFlows and spec.agent.ebpf.cacheActiveTimeout specifications. A larger value results in less traffic being generated by the agents, which correlates with a lower CPU load. However, a larger value leads to a slightly higher memory consumption, and might generate more latency in the flow collection. 5.6.1. Resource considerations The following table outlines examples of resource considerations for clusters with certain workload sizes. Important The examples outlined in the table demonstrate scenarios that are tailored to specific workloads. Consider each example only as a baseline from which adjustments can be made to accommodate your workload needs. Table 5.2. Resource recommendations Extra small (10 nodes) Small (25 nodes) Large (250 nodes) [2] Worker Node vCPU and memory 4 vCPUs| 16GiB mem [1] 16 vCPUs| 64GiB mem [1] 16 vCPUs| 64GiB Mem [1] LokiStack size 1x.extra-small 1x.small 1x.medium Network Observability controller memory limit 400Mi (default) 400Mi (default) 400Mi (default) eBPF sampling rate 50 (default) 50 (default) 50 (default) eBPF memory limit 800Mi (default) 800Mi (default) 1600Mi cacheMaxSize 50,000 100,000 (default) 100,000 (default) FLP memory limit 800Mi (default) 800Mi (default) 800Mi (default) FLP Kafka partitions - 48 48 Kafka consumer replicas - 6 18 Kafka brokers - 3 (default) 3 (default) Tested with AWS M6i instances. In addition to this worker and its controller, 3 infra nodes (size M6i.12xlarge ) and 1 workload node (size M6i.8xlarge ) were tested. 5.6.2. Total average memory and CPU usage The following table outlines averages of total resource usage for clusters with a sampling value of 1 and 50 for two different tests: Test 1 and Test 2 . The tests differ in the following ways: Test 1 takes into account high ingress traffic volume in addition to the total number of namespace, pods and services in an OpenShift Container Platform cluster, places load on the eBPF agent, and represents use cases with a high number of workloads for a given cluster size. For example, Test 1 consists of 76 Namespaces, 5153 Pods, and 2305 Services with a network traffic scale of ~350 MB/s. 
Test 2 takes into account high ingress traffic volume in addition to the total number of namespace, pods and services in an OpenShift Container Platform cluster and represents use cases with a high number of workloads for a given cluster size. For example, Test 2 consists of 553 Namespaces, 6998 Pods, and 2508 Services with a network traffic scale of ~950 MB/s. Since different types of cluster use cases are exemplified in the different tests, the numbers in this table do not scale linearly when compared side-by-side. Instead, they are intended to be used as a benchmark for evaluating your personal cluster usage. The examples outlined in the table demonstrate scenarios that are tailored to specific workloads. Consider each example only as a baseline from which adjustments can be made to accommodate your workload needs. Note Metrics exported to Prometheus can impact the resource usage. Cardinality values for the metrics can help determine how much resources are impacted. For more information, see "Network Flows format" in the Additional resources section. Table 5.3. Total average resource usage Sampling value Resources used Test 1 (25 nodes) Test 2 (250 nodes) Sampling = 50 Total NetObserv CPU Usage 1.35 5.39 Total NetObserv RSS (Memory) Usage 16 GB 63 GB Sampling = 1 Total NetObserv CPU Usage 1.82 11.99 Total NetObserv RSS (Memory) Usage 22 GB 87 GB Summary: This table shows average total resource usage of Network Observability, which includes Agents, FLP, Kafka, and Loki with all features enabled. For details about what features are enabled, see the features covered in "Observing the network traffic", which comprises all the features that are enabled for this testing. Additional resources Observing the network traffic from the traffic flows view Network Observability without Loki Network Flows format reference | [
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: \"ipfix-collector.ipfix.svc.cluster.local\" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address",
"oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_observability/configuring-network-observability-operators |
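Section 5.2 above notes that a Kafka topic dedicated to Network Observability must already exist before switching the FlowCollector to the Kafka deployment model. A minimal sketch of creating that topic with the AMQ Streams KafkaTopic resource follows; the topic name network-flows and the cluster name kafka-cluster are taken from the sample configuration, while the namespace, partition count, and replica count are assumptions that depend on where your Kafka cluster and its Topic Operator run:
cat <<EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: network-flows
  namespace: netobserv
  labels:
    strimzi.io/cluster: kafka-cluster
spec:
  partitions: 6
  replicas: 3
EOF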
Chapter 13. Configuring virtual GPUs for instances | Chapter 13. Configuring virtual GPUs for instances Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . To support GPU-based rendering on your instances, you can define and manage virtual GPU (vGPU) resources according to your available physical GPU devices and your hypervisor type. You can use this configuration to divide the rendering workloads between all your physical GPU devices more effectively, and to have more control over scheduling your vGPU-enabled instances. To enable vGPU in the Compute (nova) service, create flavors that your cloud users can use to create Red Hat Enterprise Linux (RHEL) instances with vGPU devices. Each instance can then support GPU workloads with virtual GPU devices that correspond to the physical GPU devices. The Compute service tracks the number of vGPU devices that are available for each GPU profile you define on each host. The Compute service schedules instances to these hosts based on the flavor, attaches the devices, and monitors usage on an ongoing basis. When an instance is deleted, the Compute service adds the vGPU devices back to the available pool. Important Red Hat enables the use of NVIDIA vGPU in RHOSP without the requirement for support exceptions. However, Red Hat does not provide technical support for the NVIDIA vGPU drivers. The NVIDIA vGPU drivers are shipped and supported by NVIDIA. You require an NVIDIA Certified Support Services subscription to obtain NVIDIA Enterprise Support for NVIDIA vGPU software. For issues that result from the use of NVIDIA vGPUs where you are unable to reproduce the issue on a supported component, the following support policies apply: When Red Hat does not suspect that the third-party component is involved in the issue, the normal Scope of Support and Red Hat SLA apply. When Red Hat suspects that the third-party component is involved in the issue, the customer will be directed to NVIDIA in line with the Red Hat third party support and certification policies . For more information, see the Knowledge Base article Obtaining Support from NVIDIA . 13.1. Supported configurations and limitations Supported GPU cards For a list of supported NVIDIA GPU cards, see Virtual GPU Software Supported Products on the NVIDIA website. Limitations when using vGPU devices You can enable only one vGPU type on each Compute node. Each instance can use only one vGPU resource. Live migration of vGPU instances between hosts is not supported. Evacuation of vGPU instances is not supported. If you need to reboot the Compute node that hosts the vGPU instances, the vGPUs are not automatically reassigned to the recreated instances. You must either cold migrate the instances before you reboot the Compute node, or manually allocate each vGPU to the correct instance after reboot. To manually allocate each vGPU, you must retrieve the mdev UUID from the instance XML for each vGPU instance that runs on the Compute node before you reboot. You can use the following command to discover the mdev UUID for each instance: Replace <instance_name> with the libvirt instance name, OS-EXT-SRV-ATTR:instance_name , returned in a /servers request to the Compute API. Suspend operations on a vGPU-enabled instance is not supported due to a libvirt limitation. 
Instead, you can snapshot or shelve the instance. By default, vGPU types on Compute hosts are not exposed to API users. To grant access, add the hosts to a host aggregate. For more information, see Creating and managing host aggregates . If you use NVIDIA accelerator hardware, you must comply with the NVIDIA licensing requirements. For example, NVIDIA vGPU GRID requires a licensing server. For more information about the NVIDIA licensing requirements, see NVIDIA License Server Release Notes on the NVIDIA website. 13.2. Configuring vGPU on the Compute nodes To enable your cloud users to create instances that use a virtual GPU (vGPU), you must configure the Compute nodes that have the physical GPUs: Designate Compute nodes for vGPU. Configure the Compute node for vGPU. Deploy the overcloud. Create a vGPU flavor for launching instances that have vGPU. Tip If the GPU hardware is limited, you can also configure a host aggregate to optimize scheduling on the vGPU Compute nodes. To schedule only instances that request vGPUs on the vGPU Compute nodes, create a host aggregate of the vGPU Compute nodes, and configure the Compute scheduler to place only vGPU instances on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates . Note To use an NVIDIA GRID vGPU, you must comply with the NVIDIA GRID licensing requirements and you must have the URL of your self-hosted license server. For more information, see the NVIDIA License Server Release Notes web page. 13.2.1. Prerequisites You have downloaded the NVIDIA GRID host driver RPM package that corresponds to your GPU device from the NVIDIA website. To determine which driver you need, see the NVIDIA Driver Downloads Portal . You must be a registered NVIDIA customer to download the drivers from the portal. You have built a custom overcloud image that has the NVIDIA GRID host driver installed. 13.2.2. Designating Compute nodes for vGPU To designate Compute nodes for vGPU workloads, you must create a new role file to configure the vGPU role, and configure the bare metal nodes with a GPU resource class to use to tag the GPU-enabled Compute nodes. Note The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data_gpu.yaml that includes the Controller , Compute , and ComputeGpu roles, along with any other roles that you need for the overcloud: Open roles_data_gpu.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputeGpu Role name name: Compute name: ComputeGpu description Basic Compute Node role GPU Compute Node role HostnameFormatDefault -compute- -computegpu- deprecated_nic_config_name compute.yaml compute-gpu.yaml Register the GPU-enabled Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. 
Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide. Tag each bare metal node that you want to designate for GPU workloads with a custom GPU resource class: Replace <node> with the ID of the baremetal node. Add the ComputeGpu role to your node definition file, overcloud-baremetal-deploy.yaml , and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes: 1 You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Director Installation and Usage guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used. For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes . For an example node definition file, see Example node definition file . Run the provisioning command to provision the new nodes for your role: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files: Replace <gpu_net_top> with the name of the file that contains the network topology of the ComputeGpu role, for example, compute.yaml to use the default network topology. 13.2.3. Configuring the Compute node for vGPU and deploying the overcloud You need to retrieve and assign the vGPU type that corresponds to the physical GPU device in your environment, and prepare the environment files to configure the Compute node for vGPU. Procedure Install Red Hat Enterprise Linux and the NVIDIA GRID driver on a temporary Compute node and launch the node. On the Compute node, locate the vGPU type of the physical GPU device that you want to enable. For libvirt, virtual GPUs are mediated devices, or mdev type devices. To discover the supported mdev devices, enter the following command: Create a gpu.yaml file to specify the vGPU type of your GPU device: Note Each physical GPU supports only one virtual GPU type. If you specify multiple vGPU types in this property, only the first type is used. Save the updates to your Compute environment file. Add your new role and environment files to the stack with your other environment files and deploy the overcloud: 13.3. Creating a custom GPU instance image To enable your cloud users to create instances that use a virtual GPU (vGPU), you can create a custom vGPU-enabled image for launching instances. Use the following procedure to create a custom vGPU-enabled instance image with the NVIDIA GRID guest driver and license file. Prerequisites You have configured and deployed the overcloud with GPU-enabled Compute nodes. 
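One way to confirm that prerequisite is to check, on one of the GPU Compute nodes, that the vGPU type enabled in gpu.yaml is exposed and still has free capacity; this sketch assumes the PCI address and the nvidia-18 type shown earlier, which will differ on your hardware:
# Run on a ComputeGpu node after the overcloud deployment completes
ls /sys/class/mdev_bus/0000\:06\:00.0/mdev_supported_types/
cat /sys/class/mdev_bus/0000\:06\:00.0/mdev_supported_types/nvidia-18/available_instances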
Procedure Log in to the undercloud as the stack user. Source the overcloudrc credential file: Create an instance with the hardware and software profile that your vGPU instances require: Replace <flavor> with the name or ID of the flavor that has the hardware profile that your vGPU instances require. For information about creating a vGPU flavor, see Creating a vGPU flavor for instances . Replace <image> with the name or ID of the image that has the software profile that your vGPU instances require. For information about downloading RHEL cloud images, see Managing images . Log in to the instance as a cloud-user. Create the gridd.conf NVIDIA GRID license file on the instance, following the NVIDIA guidance: Licensing an NVIDIA vGPU on Linux by Using a Configuration File . Install the GPU driver on the instance. For more information about installing an NVIDIA driver, see Installing the NVIDIA vGPU Software Graphics Driver on Linux . Note Use the hw_video_model image property to define the GPU driver type. You can choose none if you want to disable the emulated GPUs for your vGPU instances. For more information about supported drivers, see Image configuration parameters . Create an image snapshot of the instance: Optional: Delete the instance. 13.4. Creating a vGPU flavor for instances To enable your cloud users to create instances for GPU workloads, you can create a GPU flavor that can be used to launch vGPU instances, and assign the vGPU resource to that flavor. Prerequisites You have configured and deployed the overcloud with GPU-designated Compute nodes. Procedure Create an NVIDIA GPU flavor, for example: Assign a vGPU resource to the flavor that you created. You can assign only one vGPU for each instance. 13.5. Launching a vGPU instance You can create a GPU-enabled instance for GPU workloads. Procedure Create an instance using a GPU flavor and image, for example: Log in to the instance as a cloud-user. To verify that the GPU is accessible from the instance, enter the following command from the instance: 13.6. Enabling PCI passthrough for a GPU device You can use PCI passthrough to attach a physical PCI device, such as a graphics card, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host. Prerequisites The pciutils package is installed on the physical servers that have the PCI cards. The driver for the GPU device must be installed on the instance that the device is passed through to. Therefore, you need to have created a custom instance image that has the required GPU driver installed. For more information about how to create a custom instance image with the GPU driver installed, see Creating a custom GPU instance image . Procedure To determine the vendor ID and product ID for each passthrough device type, enter the following command on the physical server that has the PCI cards: For example, to determine the vendor and product ID for an NVIDIA GPU, enter the following command: To determine if each PCI device has Single Root I/O Virtualization (SR-IOV) capabilities, enter the following command on the physical server that has the PCI cards: To configure the Controller node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_controller.yaml . 
Add PciPassthroughFilter to the NovaSchedulerEnabledFilters parameter in pci_passthru_controller.yaml : To specify the PCI alias for the devices on the Controller node, add the following configuration to pci_passthru_controller.yaml : If the PCI device has SR-IOV capabilities: If the PCI device does not have SR-IOV capabilities: For more information on configuring the device_type field, see PCI passthrough device type field . Note If the nova-api service is running in a role other than the Controller, then replace ControllerExtraConfig with the user role, in the format <Role>ExtraConfig . To configure the Compute node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_compute.yaml . To specify the available PCIs for the devices on the Compute node, add the following to pci_passthru_compute.yaml : You must create a copy of the PCI alias on the Compute node for instance migration and resize operations. To specify the PCI alias for the devices on the Compute node, add the following to pci_passthru_compute.yaml : If the PCI device has SR-IOV capabilities: If the PCI device does not have SR-IOV capabilities: Note The Compute node aliases must be identical to the aliases on the Controller node. To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, add the KernelArgs parameter to pci_passthru_compute.yaml : Note When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs . Add your custom environment files to the stack with your other environment files and deploy the overcloud: Configure a flavor to request the PCI devices. The following example requests two devices, each with a vendor ID of 10de and a product ID of 13f2 : Verification Create an instance with a PCI passthrough device: Replace <custom_gpu> with the name of your custom instance image that has the required GPU driver installed. Log in to the instance as a cloud user. To verify that the GPU is accessible from the instance, enter the following command from the instance: To check the NVIDIA System Management Interface status, enter the following command from the instance: Example output: | [
"virsh dumpxml <instance_name> | grep mdev",
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_gpu.yaml Compute:ComputeGpu Compute Controller",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.GPU <node>",
"- name: Controller count: 3 - name: Compute count: 3 - name: ComputeGpu count: 1 defaults: resource_class: baremetal.GPU network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1",
"(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml",
"(undercloud)USD watch openstack baremetal node list",
"parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeGpuNetworkConfigTemplate: /home/stack/templates/nic-configs/<gpu_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2",
"ls /sys/class/mdev_bus/0000\\:06\\:00.0/mdev_supported_types/ nvidia-11 nvidia-12 nvidia-13 nvidia-14 nvidia-15 nvidia-16 nvidia-17 nvidia-18 nvidia-19 nvidia-20 nvidia-21 nvidia-210 nvidia-22 cat /sys/class/mdev_bus/0000\\:06\\:00.0/mdev_supported_types/nvidia-18/description num_heads=4, frl_config=60, framebuffer=2048M, max_resolution=4096x2160, max_instance=4",
"parameter_defaults: ComputeGpuExtraConfig: nova::compute::vgpu::enabled_vgpu_types: - nvidia-18",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_gpu.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/gpu.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml",
"source ~/overcloudrc",
"(overcloud)USD openstack server create --flavor <flavor> --image <image> temp_vgpu_instance",
"(overcloud)USD openstack server image create --name vgpu_image temp_vgpu_instance",
"(overcloud)USD openstack flavor create --vcpus 6 --ram 8192 --disk 100 m1.small-gpu +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 100 | | id | a27b14dd-c42d-4084-9b6a-225555876f68 | | name | m1.small-gpu | | os-flavor-access:is_public | True | | properties | | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 6 | +----------------------------+--------------------------------------+",
"(overcloud)USD openstack flavor set m1.small-gpu --property \"resources:VGPU=1\" (overcloud)USD openstack flavor show m1.small-gpu +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | access_project_ids | None | | disk | 100 | | id | a27b14dd-c42d-4084-9b6a-225555876f68 | | name | m1.small-gpu | | os-flavor-access:is_public | True | | properties | resources:VGPU='1' | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 6 | +----------------------------+--------------------------------------+",
"(overcloud)USD openstack server create --flavor m1.small-gpu --image vgpu_image --security-group web --nic net-id=internal0 --key-name lambda vgpu-instance",
"lspci -nn | grep <gpu_name>",
"lspci -nn | grep -i <gpu_name>",
"lspci -nn | grep -i nvidia 3b:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8] (rev a1) d8:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1db4] (rev a1)",
"lspci -v -s 3b:00.0 3b:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1) Capabilities: [bcc] Single Root I/O Virtualization (SR-IOV)",
"parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter",
"ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"",
"ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"",
"parameter_defaults: NovaPCIPassthrough: - vendor_id: \"10de\" product_id: \"1eb8\"",
"ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"",
"ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"",
"parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/pci_passthru_controller.yaml -e /home/stack/templates/pci_passthru_compute.yaml",
"openstack flavor set m1.large --property \"pci_passthrough:alias\"=\"t4:2\"",
"openstack server create --flavor m1.large --image <custom_gpu> --wait test-pci",
"lspci -nn | grep <gpu_name>",
"nvidia-smi",
"----------------------------------------------------------------------------- | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |------------------------------- ---------------------- ----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |=============================== ====================== ======================| | 0 Tesla T4 Off | 00000000:01:00.0 Off | 0 | | N/A 43C P0 20W / 70W | 0MiB / 15109MiB | 0% Default | ------------------------------- ---------------------- ---------------------- ----------------------------------------------------------------------------- | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | -----------------------------------------------------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-virtual-gpus-for-instances_vgpu |
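As a further scheduling check from a host with the OpenStack client, the placement service can confirm that VGPU inventory is being reported for the flavor created above; this sketch assumes the osc-placement client plugin is installed, and note that the Compute service typically publishes VGPU inventory on nested resource providers, one per physical GPU:
(overcloud)$ openstack resource provider list
(overcloud)$ openstack resource provider inventory list <resource_provider_uuid>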
32.8.3. Registering a System in RHN Classic | 32.8.3. Registering a System in RHN Classic The rhnreg_ks command is a utility for registering a system with the Red Hat Network. It is designed to be used in a non-interactive environment (a Kickstart style install, for example). All the information can be specified on the command line or standard input (stdin). This command should be used when you have created an activation key and you want to register a system using a key. For details about using rhnreg_ks to automatically register your system, see the Knowledgebase article at https://access.redhat.com/solutions/876433 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-kickstart-example-register-rhn-classic |
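A typical non-interactive invocation with an activation key might look like the following; the server URL, activation key, and profile name are placeholders that must be replaced with your own values:
rhnreg_ks --serverUrl=https://rhn-server.example.com/XMLRPC --activationkey=<activation_key> --profilename=<system_profile_name>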
Chapter 7. Backing up and restoring Red Hat Quay managed by the Red Hat Quay Operator | Chapter 7. Backing up and restoring Red Hat Quay managed by the Red Hat Quay Operator Use the content within this section to back up and restore Red Hat Quay when managed by the Red Hat Quay Operator on OpenShift Container Platform 7.1. Optional: Enabling read-only mode for Red Hat Quay on OpenShift Container Platform Enabling read-only mode for your Red Hat Quay on OpenShift Container Platform deployment allows you to manage the registry's operations. Administrators can enable read-only mode to restrict write access to the registry, which helps ensure data integrity, mitigate risks during maintenance windows, and provide a safeguard against unintended modifications to registry data. It also helps to ensure that your Red Hat Quay registry remains online and available to serve images to users. When backing up and restoring, you are required to scale down your Red Hat Quay on OpenShift Container Platform deployment. This results in service unavailability during the backup period which, in some cases, might be unacceptable. Enabling read-only mode ensures service availability during the backup and restore procedure for Red Hat Quay on OpenShift Container Platform deployments. Prerequisites If you are using Red Hat Enterprise Linux (RHEL) 7.x: You have enabled the Red Hat Software Collections List (RHSCL). You have installed Python 3.6. You have downloaded the virtualenv package. You have installed the git CLI. If you are using Red Hat Enterprise Linux (RHEL) 8: You have installed Python 3 on your machine. You have downloaded the python3-virtualenv package. You have installed the git CLI. You have cloned the https://github.com/quay/quay.git repository. You have installed the oc CLI. You have access to the cluster with cluster-admin privileges. 7.1.1. Creating service keys for Red Hat Quay on OpenShift Container Platform Red Hat Quay uses service keys to communicate with various components. These keys are used to sign completed requests, such as requesting to scan images, login, storage access, and so on. Procedure Enter the following command to obtain a list of Red Hat Quay pods: USD oc get pods -n <namespace> Example output example-registry-clair-app-7dc7ff5844-4skw5 0/1 Error 0 70d example-registry-clair-app-7dc7ff5844-nvn4f 1/1 Running 0 31d example-registry-clair-app-7dc7ff5844-x4smw 0/1 ContainerStatusUnknown 6 (70d ago) 70d example-registry-clair-app-7dc7ff5844-xjnvt 1/1 Running 0 60d example-registry-clair-postgres-547d75759-75c49 1/1 Running 0 70d example-registry-quay-app-76c8f55467-52wjz 1/1 Running 0 70d example-registry-quay-app-76c8f55467-hwz4c 1/1 Running 0 70d example-registry-quay-app-upgrade-57ghs 0/1 Completed 1 70d example-registry-quay-database-7c55899f89-hmnm6 1/1 Running 0 70d example-registry-quay-mirror-6cccbd76d-btsnb 1/1 Running 0 70d example-registry-quay-mirror-6cccbd76d-x8g42 1/1 Running 0 70d example-registry-quay-redis-85cbdf96bf-4vk5m 1/1 Running 0 70d Open a remote shell session to the Quay container by entering the following command: USD oc rsh example-registry-quay-app-76c8f55467-52wjz Enter the following command to create the necessary service keys: sh-4.4USD python3 tools/generatekeypair.py quay-readonly Example output Writing public key to quay-readonly.jwk Writing key ID to quay-readonly.kid Writing private key to quay-readonly.pem 7.1.2. Adding keys to the PostgreSQL database Use the following procedure to add your service keys to the PostgreSQL database. 
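Before adding the keys to the database, you can confirm from the same shell session inside the Quay pod that the three files were written, and note the key ID that the next steps insert into the servicekey table; a minimal sketch:
sh-4.4$ ls -l quay-readonly.jwk quay-readonly.kid quay-readonly.pem
sh-4.4$ cat quay-readonly.kid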
Prerequistes You have created the service keys. Procedure Enter the following command to enter your Red Hat Quay database environment: USD oc rsh example-registry-quay-app-76c8f55467-52wjz psql -U <database_username> -d <database_name> Display the approval types and associated notes of the servicekeyapproval by entering the following command: quay=# select * from servicekeyapproval; Example output id | approver_id | approval_type | approved_date | notes ----+-------------+----------------------------------+----------------------------+------- 1 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 | 2 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 | 3 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095 | 4 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235 | 5 | 1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 | ... Add the service key to your Red Hat Quay database by entering the following query: quay=# INSERT INTO servicekey (name, service, metadata, kid, jwk, created_date, expiration_date) VALUES ('quay-readonly', 'quay', '{}', '{<contents_of_.kid_file>}', '{<contents_of_.jwk_file>}', '{<created_date_of_read-only>}', '{<expiration_date_of_read-only>}'); Example output INSERT 0 1 , add the key approval with the following query: quay=# INSERT INTO servicekeyapproval ('approval_type', 'approved_date', 'notes') VALUES ("ServiceKeyApprovalType.SUPERUSER", "CURRENT_DATE", {include_notes_here_on_why_this_is_being_added}); Example output INSERT 0 1 Set the approval_id field on the created service key row to the id field from the created service key approval. You can use the following SELECT statements to get the necessary IDs: UPDATE servicekey SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER') WHERE name = 'quay-readonly'; UPDATE 1 7.1.3. Configuring read-only mode Red Hat Quay on OpenShift Container Platform After the service keys have been created and added to your PostgreSQL database, you must restart the Quay container on your OpenShift Container Platform deployment. Important Deploying Red Hat Quay on OpenShift Container Platform in read-only mode requires you to modify the secrets stored inside of your OpenShift Container Platform cluster. It is highly recommended that you create a backup of the secret prior to making changes to it. Prerequisites You have created the service keys and added them to your PostgreSQL database. Procedure Enter the following command to read the secret name of your Red Hat Quay on OpenShift Container Platform deployment: USD oc get deployment -o yaml <quay_main_app_deployment_name> Use the base64 command to encode the quay-readonly.kid and quay-readonly.pem files: USD base64 -w0 quay-readonly.kid Example output ZjUyNDFm... USD base64 -w0 quay-readonly.pem Example output LS0tLS1CRUdJTiBSU0E... Obtain the current configuration bundle and secret by entering the following command: USD oc get secret quay-config-secret-name -o json | jq '.data."config.yaml"' | cut -d '"' -f2 | base64 -d -w0 > config.yaml Edit the config.yaml file and add the following information: # ... REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem' # ... Save the file and base64 encode it by running the following command: USD base64 -w0 quay-config.yaml Scale down the Red Hat Quay Operator pods to 0 . 
This ensures that the Operator does not reconcile the secret after editing it. USD oc scale --replicas=0 deployment quay-operator -n openshift-operators Edit the secret to include the new content: USD oc edit secret quay-config-secret-name -n quay-namespace # ... data: "quay-readonly.kid": "ZjUyNDFm..." "quay-readonly.pem": "LS0tLS1CRUdJTiBSU0E..." "config.yaml": "QUNUSU9OX0xPR19..." # ... With your Red Hat Quay on OpenShift Container Platform deployment on read-only mode, you can safely manage your registry's operations and perform such actions as backup and restore. 7.1.3.1. Scaling up the Red Hat Quay on OpenShift Container Platform from a read-only deployment When you no longer want Red Hat Quay on OpenShift Container Platform to be in read-only mode, you can scale the deployment back up and remove the content added from the secret. Procedure Edit the config.yaml file and remove the following information: # ... REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem' # ... Scale the Red Hat Quay Operator back up by entering the following command: oc scale --replicas=1 deployment quay-operator -n openshift-operators 7.2. Backing up Red Hat Quay Database backups should be performed regularly using either the supplied tools on the PostgreSQL image or your own backup infrastructure. The Red Hat Quay Operator does not ensure that the PostgreSQL database is backed up. Note This procedure covers backing up your Red Hat Quay PostgreSQL database. It does not cover backing up the Clair PostgreSQL database. Strictly speaking, backing up the Clair PostgreSQL database is not needed because it can be recreated. If you opt to recreate it from scratch, you will wait for the information to be repopulated after all images inside of your Red Hat Quay deployment are scanned. During this downtime, security reports are unavailable. If you are considering backing up the Clair PostgreSQL database, you must consider that its size is dependent upon the number of images stored inside of Red Hat Quay. As a result, the database can be extremely large. This procedure describes how to create a backup of Red Hat Quay on OpenShift Container Platform using the Operator. Prerequisites A healthy Red Hat Quay deployment on OpenShift Container Platform using the Red Hat Quay Operator. The status condition Available is set to true . The components quay , postgres and objectstorage are set to managed: true If the component clair is set to managed: true the component clairpostgres is also set to managed: true (starting with Red Hat Quay v3.7 or later) Note If your deployment contains partially unmanaged database or storage components and you are using external services for PostgreSQL or S3-compatible object storage to run your Red Hat Quay deployment, you must refer to the service provider or vendor documentation to create a backup of the data. You can refer to the tools described in this guide as a starting point on how to backup your external PostgreSQL database or object storage. 7.2.1. Red Hat Quay configuration backup Use the following procedure to back up your Red Hat Quay configuration. 
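If you are unsure of the registry name or namespace that the following commands expect, you can list the QuayRegistry resources across all namespaces first; a minimal sketch:
$ oc get quayregistry -A
The NAME and NAMESPACE columns map to <quay_registry_name> and <quay_namespace> in the procedure below.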
Procedure To back the QuayRegistry custom resource by exporting it, enter the following command: USD oc get quayregistry <quay_registry_name> -n <quay_namespace> -o yaml > quay-registry.yaml Edit the resulting quayregistry.yaml and remove the status section and the following metadata fields: metadata.creationTimestamp metadata.finalizers metadata.generation metadata.resourceVersion metadata.uid Backup the managed keys secret by entering the following command: Note If you are running a version older than Red Hat Quay 3.7.0, this step can be skipped. Some secrets are automatically generated while deploying Red Hat Quay for the first time. These are stored in a secret called <quay_registry_name>-quay_registry_managed_secret_keys in the namespace of the QuayRegistry resource. USD oc get secret -n <quay_namespace> <quay_registry_name>_quay_registry_managed_secret_keys -o yaml > managed_secret_keys.yaml Edit the resulting managed_secret_keys.yaml file and remove the entry metadata.ownerReferences . Your managed_secret_keys.yaml file should look similar to the following: apiVersion: v1 kind: Secret type: Opaque metadata: name: <quayname>_quay_registry_managed_secret_keys> namespace: <quay_namespace> data: CONFIG_EDITOR_PW: <redacted> DATABASE_SECRET_KEY: <redacted> DB_ROOT_PW: <redacted> DB_URI: <redacted> SECRET_KEY: <redacted> SECURITY_SCANNER_V4_PSK: <redacted> All information under the data property should remain the same. Redirect the current Quay configuration file by entering the following command: USD oc get secret -n <quay-namespace> USD(oc get quayregistry <quay_registry_name> -n <quay_namespace> -o jsonpath='{.spec.configBundleSecret}') -o yaml > config-bundle.yaml Backup the /conf/stack/config.yaml file mounted inside of the Quay pods: USD oc exec -it quay_pod_name -- cat /conf/stack/config.yaml > quay_config.yaml 7.2.2. Scaling down your Red Hat Quay deployment Use the following procedure to scale down your Red Hat Quay deployment. Important This step is needed to create a consistent backup of the state of your Red Hat Quay deployment. Do not omit this step, including in setups where PostgreSQL databases and/or S3-compatible object storage are provided by external services (unmanaged by the Red Hat Quay Operator). Procedure Depending on the version of your Red Hat Quay deployment, scale down your deployment using one of the following options. For Operator version 3.7 and newer: Scale down the Red Hat Quay deployment by disabling auto scaling and overriding the replica count for Red Hat Quay, mirror workers, and Clair (if managed). Your QuayRegistry resource should look similar to the following: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ... 
1 Disable auto scaling of Quay, Clair and Mirroring workers 2 Set the replica count to 0 for components accessing the database and objectstorage For Operator version 3.6 and earlier : Scale down the Red Hat Quay deployment by scaling down the Red Hat Quay registry first and then the managed Red Hat Quay resources: USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-operator-namespace>|awk '/^quay-operator/ {print USD1}') -n <quay-operator-namespace> USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-app/ {print USD1}') -n <quay-namespace> USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-mirror/ {print USD1}') -n <quay-namespace> USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/clair-app/ {print USD1}') -n <quay-namespace> Wait for the registry-quay-app , registry-quay-mirror and registry-clair-app pods (depending on which components you set to be managed by the Red Hat Quay Operator) to disappear. You can check their status by running the following command: USD oc get pods -n <quay_namespace> Example output: USD oc get pod Example output quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-database-859d5445ff-cqthr 1/1 Running 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m 7.2.3. Backing up the Red Hat Quay managed database Use the following procedure to back up the Red Hat Quay managed database. Note If your Red Hat Quay deployment is configured with external, or unmanaged, PostgreSQL database(s), refer to your vendor's documentation on how to create a consistent backup of these databases. Procedure Identify the Quay PostgreSQL pod name: USD oc get pod -l quay-component=postgres -n <quay_namespace> -o jsonpath='{.items[0].metadata.name}' Example output: quayregistry-quay-database-59f54bb7-58xs7 Obtain the Quay database name: USD oc -n <quay_namespace> rsh USD(oc get pod -l app=quay -o NAME -n <quay_namespace> |head -n 1) cat /conf/stack/config.yaml|awk -F"/" '/^DB_URI/ {print USD4}' quayregistry-quay-database Download a backup of the database: USD oc -n <quay_namespace> exec quayregistry-quay-database-59f54bb7-58xs7 -- /usr/bin/pg_dump -C quayregistry-quay-database > backup.sql 7.2.3.1. Backing up the Red Hat Quay managed object storage Use the following procedure to back up the Red Hat Quay managed object storage. The instructions in this section apply to the following configurations: Standalone, multi-cloud object gateway configurations OpenShift Data Foundation storage, where the Red Hat Quay Operator provisioned an S3 object storage bucket through the ObjectStorageBucketClaim API Note If your Red Hat Quay deployment is configured with external (unmanaged) object storage, refer to your vendor's documentation on how to create a copy of the content of Quay's storage bucket.
Procedure Decode and export the AWS_ACCESS_KEY_ID by entering the following command: USD export AWS_ACCESS_KEY_ID=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' |base64 -d) Decode and export the AWS_SECRET_ACCESS_KEY by entering the following command: USD export AWS_SECRET_ACCESS_KEY=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' |base64 -d) Create a new directory: USD mkdir blobs Note You can also use rclone or s3cmd instead of the AWS command line utility. Copy all blobs to the directory by entering the following command: USD aws s3 sync --no-verify-ssl --endpoint https://USD(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}') s3://USD(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}') ./blobs 7.2.4. Scale the Red Hat Quay deployment back up Depending on the version of your Red Hat Quay deployment, scale up your deployment using one of the following options. For Operator version 3.7 and newer: Scale up the Red Hat Quay deployment by re-enabling auto scaling, if desired, and removing the replica overrides for Quay, mirror workers and Clair as applicable. Your QuayRegistry resource should look similar to the following: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay 2 managed: true - kind: clair managed: true - kind: mirror managed: true ... 1 Re-enables auto scaling of Quay, Clair and Mirroring workers (if desired) 2 Replica overrides are removed to scale the Quay components back up For Operator version 3.6 and earlier: Scale up the Red Hat Quay deployment by scaling up the Red Hat Quay registry: USD oc scale --replicas=1 deployment USD(oc get deployment -n <quay_operator_namespace> | awk '/^quay-operator/ {print USD1}') -n <quay_operator_namespace> Check the status of the Red Hat Quay deployment by entering the following command: USD oc wait quayregistry registry --for=condition=Available=true -n <quay_namespace> Example output: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: ... name: registry namespace: <quay-namespace> ... spec: ... status: - lastTransitionTime: '2022-06-20T05:31:17Z' lastUpdateTime: '2022-06-20T17:31:13Z' message: All components reporting as healthy reason: HealthChecksPassing status: 'True' type: Available 7.3. Restoring Red Hat Quay Use the following procedures to restore Red Hat Quay when the Red Hat Quay Operator manages the database. They should be performed after a backup of your Red Hat Quay registry has been created. See Backing up Red Hat Quay for more information. Prerequisites Red Hat Quay is deployed on OpenShift Container Platform using the Red Hat Quay Operator. A backup of the Red Hat Quay configuration managed by the Red Hat Quay Operator has been created following the instructions in the Backing up Red Hat Quay section. Your Red Hat Quay database has been backed up. The object storage bucket used by Red Hat Quay has been backed up.
The components quay , postgres and objectstorage are set to managed: true If the component clair is set to managed: true , the component clairpostgres is also set to managed: true (starting with Red Hat Quay v3.7 or later) There is no running Red Hat Quay deployment managed by the Red Hat Quay Operator in the target namespace on your OpenShift Container Platform cluster Note If your deployment contains partially unmanaged database or storage components and you are using external services for PostgreSQL or S3-compatible object storage to run your Red Hat Quay deployment, you must refer to the service provider or vendor documentation to restore their data from a backup prior to restoring Red Hat Quay 7.3.1. Restoring Red Hat Quay and its configuration from a backup Use the following procedure to restore Red Hat Quay and its configuration files from a backup. Note These instructions assume you have followed the process in the Backing up Red Hat Quay guide and created the backup files with the same names. Procedure Restore the backed up Red Hat Quay configuration by entering the following command: USD oc create -f ./config-bundle.yaml Important If you receive the error Error from server (AlreadyExists): error when creating "./config-bundle.yaml": secrets "config-bundle-secret" already exists , you must delete your existing resource with USD oc delete Secret config-bundle-secret -n <quay-namespace> and recreate it with USD oc create -f ./config-bundle.yaml . Restore the generated keys from the backup by entering the following command: USD oc create -f ./managed-secret-keys.yaml Restore the QuayRegistry custom resource: USD oc create -f ./quay-registry.yaml Check the status of the Red Hat Quay deployment and wait for it to be available: USD oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace> 7.3.2. Scaling down your Red Hat Quay deployment Use the following procedure to scale down your Red Hat Quay deployment. Procedure Depending on the version of your Red Hat Quay deployment, scale down your deployment using one of the following options. For Operator version 3.7 and newer: Scale down the Red Hat Quay deployment by disabling auto scaling and overriding the replica count for Quay, mirror workers and Clair (if managed). Your QuayRegistry resource should look similar to the following: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...
1 Disable auto scaling of Quay, Clair and Mirroring workers 2 Set the replica count to 0 for components accessing the database and objectstorage For Operator version 3.6 and earlier: Scale down the Red Hat Quay deployment by scaling down the Red Hat Quay registry first and then the managed Red Hat Quay resources: USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-operator-namespace>|awk '/^quay-operator/ {print USD1}') -n <quay-operator-namespace> USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-app/ {print USD1}') -n <quay-namespace> USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-mirror/ {print USD1}') -n <quay-namespace> USD oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/clair-app/ {print USD1}') -n <quay-namespace> Wait for the registry-quay-app , registry-quay-mirror and registry-clair-app pods (depending on which components you set to be managed by the Red Hat Quay Operator) to disappear. You can check their status by running the following command: USD oc get pods -n <quay-namespace> Example output: registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h 7.3.3. Restoring your Red Hat Quay database Use the following procedure to restore your Red Hat Quay database. Procedure Identify your Quay database pod by entering the following command: USD oc get pod -l quay-component=postgres -n <quay-namespace> -o jsonpath='{.items[0].metadata.name}' Example output: Upload the backup by copying it from the local environment into the pod: Open a remote terminal to the database by entering the following command: USD oc rsh -n <quay-namespace> registry-quay-database-66969cd859-n2ssm Enter psql by running the following command: bash-4.4USD psql You can list the databases by running the following command: Example output List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges ----------------------------+----------------------------+----------+------------+------------+----------------------- postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 | quayregistry-quay-database | quayregistry-quay-database | UTF8 | en_US.utf8 | en_US.utf8 | Drop the database by entering the following command: postgres=# DROP DATABASE "quayregistry-quay-database"; Example output DROP DATABASE Exit the postgres CLI to re-enter bash-4.4: \q Redirect your backup file into the PostgreSQL database: sh-4.4USD psql < /tmp/backup.sql Exit bash by entering the following command: sh-4.4USD exit 7.3.4. Restore your Red Hat Quay object storage data Use the following procedure to restore your Red Hat Quay object storage data.
Procedure Export the AWS_ACCESS_KEY_ID by entering the following command: USD export AWS_ACCESS_KEY_ID=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' |base64 -d) Export the AWS_SECRET_ACCESS_KEY by entering the following command: USD export AWS_SECRET_ACCESS_KEY=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' |base64 -d) Upload all blobs to the bucket by running the following command: USD aws s3 sync --no-verify-ssl --endpoint https://USD(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}') ./blobs s3://USD(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}') Note You can also use rclone or s3cmd instead of the AWS command line utility. 7.3.5. Scaling up your Red Hat Quay deployment Depending on the version of your Red Hat Quay deployment, scale up your deployment using one of the following options. For Operator version 3.7 and newer: Scale up the Red Hat Quay deployment by re-enabling auto scaling, if desired, and removing the replica overrides for Quay, mirror workers and Clair as applicable. Your QuayRegistry resource should look similar to the following: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay 2 managed: true - kind: clair managed: true - kind: mirror managed: true ... 1 Re-enables auto scaling of Red Hat Quay, Clair and mirroring workers (if desired) 2 Replica overrides are removed to scale the Red Hat Quay components back up For Operator version 3.6 and earlier: Scale up the Red Hat Quay deployment by scaling up the Red Hat Quay registry again: USD oc scale --replicas=1 deployment USD(oc get deployment -n <quay-operator-namespace> | awk '/^quay-operator/ {print USD1}') -n <quay-operator-namespace> Check the status of the Red Hat Quay deployment: USD oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace> Example output: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: ... name: registry namespace: <quay-namespace> ... spec: ... status: - lastTransitionTime: '2022-06-20T05:31:17Z' lastUpdateTime: '2022-06-20T17:31:13Z' message: All components reporting as healthy reason: HealthChecksPassing status: 'True' type: Available | [
"oc get pods -n <namespace>",
"example-registry-clair-app-7dc7ff5844-4skw5 0/1 Error 0 70d example-registry-clair-app-7dc7ff5844-nvn4f 1/1 Running 0 31d example-registry-clair-app-7dc7ff5844-x4smw 0/1 ContainerStatusUnknown 6 (70d ago) 70d example-registry-clair-app-7dc7ff5844-xjnvt 1/1 Running 0 60d example-registry-clair-postgres-547d75759-75c49 1/1 Running 0 70d example-registry-quay-app-76c8f55467-52wjz 1/1 Running 0 70d example-registry-quay-app-76c8f55467-hwz4c 1/1 Running 0 70d example-registry-quay-app-upgrade-57ghs 0/1 Completed 1 70d example-registry-quay-database-7c55899f89-hmnm6 1/1 Running 0 70d example-registry-quay-mirror-6cccbd76d-btsnb 1/1 Running 0 70d example-registry-quay-mirror-6cccbd76d-x8g42 1/1 Running 0 70d example-registry-quay-redis-85cbdf96bf-4vk5m 1/1 Running 0 70d",
"oc rsh example-registry-quay-app-76c8f55467-52wjz",
"sh-4.4USD python3 tools/generatekeypair.py quay-readonly",
"Writing public key to quay-readonly.jwk Writing key ID to quay-readonly.kid Writing private key to quay-readonly.pem",
"oc rsh example-registry-quay-app-76c8f55467-52wjz psql -U <database_username> -d <database_name>",
"quay=# select * from servicekeyapproval;",
"id | approver_id | approval_type | approved_date | notes ----+-------------+----------------------------------+----------------------------+------- 1 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 | 2 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 | 3 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095 | 4 | | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235 | 5 | 1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 |",
"quay=# INSERT INTO servicekey (name, service, metadata, kid, jwk, created_date, expiration_date) VALUES ('quay-readonly', 'quay', '{}', '{<contents_of_.kid_file>}', '{<contents_of_.jwk_file>}', '{<created_date_of_read-only>}', '{<expiration_date_of_read-only>}');",
"INSERT 0 1",
"quay=# INSERT INTO servicekeyapproval ('approval_type', 'approved_date', 'notes') VALUES (\"ServiceKeyApprovalType.SUPERUSER\", \"CURRENT_DATE\", {include_notes_here_on_why_this_is_being_added});",
"INSERT 0 1",
"UPDATE servicekey SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER') WHERE name = 'quay-readonly';",
"UPDATE 1",
"oc get deployment -o yaml <quay_main_app_deployment_name>",
"base64 -w0 quay-readonly.kid",
"ZjUyNDFm",
"base64 -w0 quay-readonly.pem",
"LS0tLS1CRUdJTiBSU0E",
"oc get secret quay-config-secret-name -o json | jq '.data.\"config.yaml\"' | cut -d '\"' -f2 | base64 -d -w0 > config.yaml",
"REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'",
"base64 -w0 quay-config.yaml",
"oc scale --replicas=0 deployment quay-operator -n openshift-operators",
"oc edit secret quay-config-secret-name -n quay-namespace",
"data: \"quay-readonly.kid\": \"ZjUyNDFm...\" \"quay-readonly.pem\": \"LS0tLS1CRUdJTiBSU0E...\" \"config.yaml\": \"QUNUSU9OX0xPR19...\"",
"REGISTRY_STATE: readonly INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid' INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'",
"scale --replicas=1 deployment quay-operator -n openshift-operators",
"oc get quayregistry <quay_registry_name> -n <quay_namespace> -o yaml > quay-registry.yaml",
"metadata.creationTimestamp metadata.finalizers metadata.generation metadata.resourceVersion metadata.uid",
"oc get secret -n <quay_namespace> <quay_registry_name>_quay_registry_managed_secret_keys -o yaml > managed_secret_keys.yaml",
"apiVersion: v1 kind: Secret type: Opaque metadata: name: <quayname>_quay_registry_managed_secret_keys> namespace: <quay_namespace> data: CONFIG_EDITOR_PW: <redacted> DATABASE_SECRET_KEY: <redacted> DB_ROOT_PW: <redacted> DB_URI: <redacted> SECRET_KEY: <redacted> SECURITY_SCANNER_V4_PSK: <redacted>",
"oc get secret -n <quay-namespace> USD(oc get quayregistry <quay_registry_name> -n <quay_namespace> -o jsonpath='{.spec.configBundleSecret}') -o yaml > config-bundle.yaml",
"oc exec -it quay_pod_name -- cat /conf/stack/config.yaml > quay_config.yaml",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-operator-namespace>|awk '/^quay-operator/ {print USD1}') -n <quay-operator-namespace>",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-app/ {print USD1}') -n <quay-namespace>",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-mirror/ {print USD1}') -n <quay-namespace>",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/clair-app/ {print USD1}') -n <quay-namespace>",
"oc get pods -n <quay_namespace>",
"oc get pod",
"quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-database-859d5445ff-cqthr 1/1 Running 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m",
"oc get pod -l quay-component=postgres -n <quay_namespace> -o jsonpath='{.items[0].metadata.name}'",
"quayregistry-quay-database-59f54bb7-58xs7",
"oc -n <quay_namespace> rsh USD(oc get pod -l app=quay -o NAME -n <quay_namespace> |head -n 1) cat /conf/stack/config.yaml|awk -F\"/\" '/^DB_URI/ {print USD4}' quayregistry-quay-database",
"oc -n <quay_namespace> exec quayregistry-quay-database-59f54bb7-58xs7 -- /usr/bin/pg_dump -C quayregistry-quay-database > backup.sql",
"export AWS_ACCESS_KEY_ID=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' |base64 -d)",
"export AWS_SECRET_ACCESS_KEY=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' |base64 -d)",
"mkdir blobs",
"aws s3 sync --no-verify-ssl --endpoint https://USD(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}') s3://USD(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}') ./blobs",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay 2 managed: true - kind: clair managed: true - kind: mirror managed: true ...",
"oc scale --replicas=1 deployment USD(oc get deployment -n <quay_operator_namespace> | awk '/^quay-operator/ {print USD1}') -n <quay_operator_namespace>",
"oc wait quayregistry registry --for=condition=Available=true -n <quay_namespace>",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: <quay-namespace> spec: status: - lastTransitionTime: '2022-06-20T05:31:17Z' lastUpdateTime: '2022-06-20T17:31:13Z' message: All components reporting as healthy reason: HealthChecksPassing status: 'True' type: Available",
"oc create -f ./config-bundle.yaml",
"oc create -f ./managed-secret-keys.yaml",
"oc create -f ./quay-registry.yaml",
"oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace>",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-operator-namespace>|awk '/^quay-operator/ {print USD1}') -n <quay-operator-namespace>",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-app/ {print USD1}') -n <quay-namespace>",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/quay-mirror/ {print USD1}') -n <quay-namespace>",
"oc scale --replicas=0 deployment USD(oc get deployment -n <quay-namespace>|awk '/clair-app/ {print USD1}') -n <quay-namespace>",
"oc get pods -n <quay-namespace>",
"registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h",
"oc get pod -l quay-component=postgres -n <quay-namespace> -o jsonpath='{.items[0].metadata.name}'",
"quayregistry-quay-database-59f54bb7-58xs7",
"oc cp ./backup.sql -n <quay-namespace> registry-quay-database-66969cd859-n2ssm:/tmp/backup.sql",
"oc rsh -n <quay-namespace> registry-quay-database-66969cd859-n2ssm",
"bash-4.4USD psql",
"postgres=# \\l",
"List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges ----------------------------+----------------------------+----------+------------+------------+----------------------- postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 | quayregistry-quay-database | quayregistry-quay-database | UTF8 | en_US.utf8 | en_US.utf8 |",
"postgres=# DROP DATABASE \"quayregistry-quay-database\";",
"DROP DATABASE",
"\\q",
"sh-4.4USD psql < /tmp/backup.sql",
"sh-4.4USD exit",
"export AWS_ACCESS_KEY_ID=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' |base64 -d)",
"export AWS_SECRET_ACCESS_KEY=USD(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' |base64 -d)",
"aws s3 sync --no-verify-ssl --endpoint https://USD(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}') ./blobs s3://USD(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}')",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: true 1 - kind: quay 2 managed: true - kind: clair managed: true - kind: mirror managed: true ...",
"oc scale --replicas=1 deployment USD(oc get deployment -n <quay-operator-namespace> | awk '/^quay-operator/ {print USD1}') -n <quay-operator-namespace>",
"oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace>",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: <quay-namespace> spec: status: - lastTransitionTime: '2022-06-20T05:31:17Z' lastUpdateTime: '2022-06-20T17:31:13Z' message: All components reporting as healthy reason: HealthChecksPassing status: 'True' type: Available"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_operator_features/backing-up-and-restoring-intro |
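The three configuration exports above can be collected with a single helper script. The following is a minimal sketch, not part of the product tooling; the QUAY_NS, QUAY_NAME, and BACKUP_DIR values are assumptions to adapt to your environment, and the managed keys secret name must match the one shown in the backup procedure.
#!/bin/bash
# Sketch: collect the Quay Operator backup artifacts into one directory.
QUAY_NS=quay-enterprise        # assumed namespace
QUAY_NAME=registry             # assumed QuayRegistry name
BACKUP_DIR=./quay-backup
mkdir -p "${BACKUP_DIR}"
# Export the QuayRegistry custom resource.
oc get quayregistry "${QUAY_NAME}" -n "${QUAY_NS}" -o yaml > "${BACKUP_DIR}/quay-registry.yaml"
# Export the config bundle secret referenced by the QuayRegistry resource.
CONFIG_SECRET=$(oc get quayregistry "${QUAY_NAME}" -n "${QUAY_NS}" -o jsonpath='{.spec.configBundleSecret}')
oc get secret "${CONFIG_SECRET}" -n "${QUAY_NS}" -o yaml > "${BACKUP_DIR}/config-bundle.yaml"
# Export the managed keys secret (Red Hat Quay 3.7.0 and later); adjust the name if yours differs.
oc get secret "${QUAY_NAME}_quay_registry_managed_secret_keys" -n "${QUAY_NS}" -o yaml > "${BACKUP_DIR}/managed_secret_keys.yaml"
Remember to remove the status section, the generated metadata fields, and metadata.ownerReferences from the exported files as described in the procedure above.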
Chapter 1. About Red Hat OpenShift GitOps | Chapter 1. About Red Hat OpenShift GitOps Red Hat OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using Red Hat OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. Red Hat OpenShift GitOps is based on the open source project Argo CD and provides a similar set of features to what the upstream offers, with additional automation, integration into Red Hat OpenShift Container Platform and the benefits of Red Hat's enterprise support, quality assurance and focus on enterprise security. Note Because Red Hat OpenShift GitOps releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift GitOps documentation is now available as a separate documentation set at Red Hat OpenShift GitOps . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/gitops/about-redhat-openshift-gitops
4.21. biosdevname | 4.21. biosdevname 4.21.1. RHBA-2011:1608 - biosdevname bug fix and enhancement update An updated biosdevname package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The biosdevname package contains an optional convention for naming network interfaces; it assigns names to network interfaces based on their physical location. The package is disabled by default, except for a limited set of Dell PowerEdge, C Series and Precision Workstation systems. The biosdevname package has been upgraded to upstream version 0.3.11, which provides a number of bug fixes and enhancements over the previous version. (BZ# 696203 ) Bug Fixes BZ# 700248 When NPAR (NIC Partitioning) is enabled, the partition number should be appended as a suffix to the interface name. Previously, biosdevname did not add partition numbers to interface names, for example, instead of naming an interface "em3_1", the interface was named "em3". Consequently, partitioned network interfaces were missing the suffix necessary to describe the partition. Now, biosdevname correctly recognizes the VPD (Vital Product Data) suffix and full interface names are created correctly. BZ# 700251 When biosdevname ran in a guest environment, it suggested names to new network interfaces as if it was in a host environment. Consequently, affected network interfaces were incorrectly renamed. Now, biosdevname no longer suggests names in the described scenario. BZ# 729591 When biosdevname was reading VPD information to retrieve NPAR-related data, the read operations failed or became unresponsive on certain RAID controllers. Additionally, biosdevname sometimes attempted to read beyond the VPD boundary in the sysfs VPD file, which also resulted in a hang. This bug has been fixed and biosdevname now performs the read operation correctly in the described scenarios. BZ# 739592 Previously, the "--smbios" and "--nopirq" command-line parameters were missing in the biosdevname binary. Consequently, consistent network device naming could not be enabled because biosdevname exited without suggesting a name. This update adds support for these parameters and enables the device naming. BZ# 740532 Previously, NICs (Network Interface Cards) on biosdevname-compatible machines were given traditional "eth*" names instead of "em*" or "p*p*" names. This bug has been fixed and biosdevname now provides correct names for the NICs. Enhancements BZ# 696252 With this update, "--smbios" and "--nopirq" command-line parameters have been added to biosdevname. BZ# 736442 The biosdevname man page has been updated to explain the functionality of the "--smbios" and "--nopirq" command-line parameters. Users of biosdevname are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/biosdevname
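To see the naming convention in practice, you can ask biosdevname directly for the name it would assign to an interface. This is an illustrative sketch; the interface eth0 and the em1 output are assumptions, and the result depends on whether your hardware exposes the required SMBIOS data.
biosdevname -i eth0
# Example (assumed) output on a supported Dell PowerEdge system:
# em1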
3.4. Configuring Profiles to Enable Renewal | 3.4. Configuring Profiles to Enable Renewal This section discusses how to set up profiles for certificate renewals. For more information on how to renew certificates, see Section 5.4, "Renewing Certificates" . A profile that allows renewal is often accompanied by the renewGracePeriodConstraint entry. For example: 3.4.1. Renewing Using the Same Key A profile that allows the same key to be submitted for renewal has the allowSameKeyRenewal parameter set to true in the uniqueKeyConstraint entry. For example: 3.4.2. Renewal Using a New Key To renew a certificate with a new key, use the same profile with a new key. Certificate System uses the subjectDN from the user signing certificate used to sign the request for the new certificate. | [
"policyset.cmcUserCertSet.10.constraint.class_id=renewGracePeriodConstraintImpl policyset.cmcUserCertSet.10.constraint.name=Renewal Grace Period Constraint policyset.cmcUserCertSet.10.constraint.params.renewal.graceBefore=30 policyset.cmcUserCertSet.10.constraint.params.renewal.graceAfter=30 policyset.cmcUserCertSet.10.default.class_id=noDefaultImpl policyset.cmcUserCertSet.10.default.name=No Default",
"policyset.cmcUserCertSet.9.constraint.class_id=uniqueKeyConstraintImpl policyset.cmcUserCertSet.9.constraint.name=Unique Key Constraint policyset.cmcUserCertSet.9.constraint.params.allowSameKeyRenewal=true policyset.cmcUserCertSet.9.default.class_id=noDefaultImpl policyset.cmcUserCertSet.9.default.name=No Default"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/renewing_certificates |
Chapter 3. Installing Data Grid Operator | Chapter 3. Installing Data Grid Operator Install Data Grid Operator into an OpenShift namespace to create and manage Data Grid clusters. 3.1. Installing Data Grid Operator on Red Hat OpenShift Create subscriptions to Data Grid Operator on OpenShift so you can install different Data Grid versions and receive automatic updates. Automatic updates apply to Data Grid Operator first and then to each Data Grid node. Data Grid Operator updates clusters one node at a time, gracefully shutting down each node and then bringing it back online with the updated version before going on to the next node. Prerequisites Access to OperatorHub running on OpenShift. Some OpenShift environments, such as OpenShift Container Platform, can require administrator credentials. Have an OpenShift project for Data Grid Operator if you plan to install it into a specific namespace. Procedure Log in to the OpenShift Web Console. Navigate to OperatorHub . Find and select Data Grid Operator. Select Install and continue to Create Operator Subscription . Specify options for your subscription. Installation Mode You can install Data Grid Operator into a Specific namespace or All namespaces. Update Channel Get updates for Data Grid Operator 8.5.x. Approval Strategies Automatically install updates from the 8.5.x channel or require approval before installation. Select Subscribe to install Data Grid Operator. Navigate to Installed Operators to verify the Data Grid Operator installation. 3.2. Installing Data Grid Operator with the native CLI plugin Install Data Grid Operator with the native Data Grid CLI plugin, kubectl-infinispan . Prerequisites Have kubectl-infinispan on your PATH . Procedure Run the oc infinispan install command to create Data Grid Operator subscriptions, for example: Verify the installation. Tip Use oc infinispan install --help for command options and descriptions. 3.3. Installing Data Grid Operator with an OpenShift client You can use the oc client to create Data Grid Operator subscriptions as an alternative to installing through the OperatorHub or with the native Data Grid CLI. Prerequisites Have an oc client. Procedure Set up projects. Create a project for Data Grid Operator. If you want Data Grid Operator to control a specific Data Grid cluster only, create a project for that cluster. 1 Creates a project into which you install Data Grid Operator. 2 Optionally creates a project for a specific Data Grid cluster if you do not want Data Grid Operator to watch all projects. Create an OperatorGroup resource. Control all Data Grid clusters Control a specific Data Grid cluster Create a subscription for Data Grid Operator. Note If you want to manually approve updates from the 8.5.x channel, change the value of the spec.installPlanApproval field to Manual . Verify the installation. | [
"infinispan install --channel=8.5.x --source=redhat-operators --source-namespace=openshift-marketplace",
"get pods -n openshift-operators | grep infinispan-operator NAME READY STATUS infinispan-operator-<id> 1/1 Running",
"new-project USD{INSTALL_NAMESPACE} 1 new-project USD{WATCH_NAMESPACE} 2",
"apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: datagrid namespace: USD{INSTALL_NAMESPACE} EOF",
"apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: datagrid namespace: USD{INSTALL_NAMESPACE} spec: targetNamespaces: - USD{WATCH_NAMESPACE} EOF",
"apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: datagrid-operator namespace: USD{INSTALL_NAMESPACE} spec: channel: 8.5.x installPlanApproval: Automatic name: datagrid source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"get pods -n USD{INSTALL_NAMESPACE} NAME READY STATUS infinispan-operator-<id> 1/1 Running"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/installation |
3. Software Versions | 3. Software Versions Table 1. Software Versions Software Description RHEL4 refers to RHEL4 and higher GFS refers to GFS 6.1 and higher | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/software_versions-gnbd |
Chapter 2. Getting started with the OpenShift CLI | Chapter 2. Getting started with the OpenShift CLI To use the OpenShift CLI ( oc ) tool, you must download and install it separately from your MicroShift installation. You can install oc by downloading the binary or by using Homebrew. 2.1. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with Red Hat build of MicroShift from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in Red Hat build of MicroShift 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Note Red Hat build of MicroShift version numbering matches OpenShift Container Platform version numbering. Use the oc binary that matches your MicroShift version and has the appropriate RHEL compatibility. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Note Red Hat build of MicroShift version numbering matches OpenShift Container Platform version numbering. Use the oc binary that matches your MicroShift version and has the appropriate RHEL compatibility. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Note Red Hat build of MicroShift version numbering matches OpenShift Container Platform version numbering. Use the oc binary that matches your MicroShift version and has the appropriate RHEL compatibility. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.2. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. 
Procedure Install the openshift-cli package by running the following command: USD brew install openshift-cli Verification Verify your installation by using an oc command: USD oc <command> 2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active Red Hat build of MicroShift subscription on your Red Hat account. Important You must install oc for RHEL 9 by downloading the binary. Installing oc by using an RPM package is not supported on Red Hat Enterprise Linux (RHEL) 9. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an Red Hat build of MicroShift subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat build of MicroShift 4.18. # subscription-manager repos --enable="rhocp-4.18-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients Verification Verify your installation by using an oc command: USD oc <command> | [
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"brew install openshift-cli",
"oc <command>",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\"",
"yum install openshift-clients",
"oc <command>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/cli_tools/microshift-oc-cli-install |
17.7. Establishing a Wireless Connection | 17.7. Establishing a Wireless Connection Wireless Ethernet devices are becoming increasingly popular. The configuration is similar to the Ethernet configuration except that it allows you to configure settings such as the SSID and key for the wireless device. To add a wireless Ethernet connection, follow these steps: Click the Devices tab. Click the New button on the toolbar. Select Wireless connection from the Device Type list and click Forward . If you have already added the wireless network interface card to the hardware list, select it from the Wireless card list. Otherwise, select Other Wireless Card to add the hardware device. Note The installation program usually detects supported wireless Ethernet devices and prompts you to configure them. If you configured them during the installation, they are displayed in the hardware list on the Hardware tab. If you selected Other Wireless Card , the Select Ethernet Adapter window appears. Select the manufacturer and model of the Ethernet card and the device. If this is the first Ethernet card for the system, select eth0 ; if this is the second Ethernet card for the system, select eth1 (and so on). The Network Administration Tool also allows the user to configure the resources for the wireless network interface card. Click Forward to continue. On the Configure Wireless Connection page as shown in Figure 17.12, "Wireless Settings" , configure the settings for the wireless device. Figure 17.12. Wireless Settings On the Configure Network Settings page, choose between DHCP and static IP address. You may specify a hostname for the device. If the device receives a dynamic IP address each time the network is started, do not specify a hostname. Click Forward to continue. Click Apply on the Create Wireless Device page. After configuring the wireless device, it appears in the device list as shown in Figure 17.13, "Wireless Device" . Figure 17.13. Wireless Device Be sure to select File => Save to save the changes. After adding the wireless device, you can edit its configuration by selecting the device from the device list and clicking Edit . For example, you can configure the device to activate at boot time. When the device is added, it is not activated immediately, as seen by its Inactive status. To activate the device, select it from the device list, and click the Activate button. If the system is configured to activate the device when the computer starts (the default), this step does not have to be performed again. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-network-config-wireless |
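Behind the graphical tool, the settings are written to an interface configuration file under /etc/sysconfig/network-scripts/ . The snippet below is an illustrative sketch only; the device name eth1, the ESSID value, and the exact set of variables the tool writes are assumptions and can differ on your system.
# /etc/sysconfig/network-scripts/ifcfg-eth1 (illustrative sketch)
DEVICE=eth1
TYPE=Wireless
BOOTPROTO=dhcp
ONBOOT=no
ESSID=examplenet
MODE=Managed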
Chapter 9. Administration | Chapter 9. Administration As a storage administrator, you can manage the Ceph Object Gateway using the radosgw-admin command line interface (CLI) or using the Red Hat Ceph Storage Dashboard. Note Not all of the Ceph Object Gateway features are available to the Red Hat Ceph Storage Dashboard. Storage Policies Indexless Buckets Configure bucket index resharding Compression User Management Role Management Quota Management Bucket Management Usage Ceph Object Gateway data layout Prerequisites A healthy running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. 9.1. Creating storage policies The Ceph Object Gateway stores the client bucket and object data by identifying placement targets, and storing buckets and objects in the pools associated with a placement target. If you don't configure placement targets and map them to pools in the instance's zone configuration, the Ceph Object Gateway will use default targets and pools, for example, default_placement . Storage policies give Ceph Object Gateway clients a way of accessing a storage strategy, that is, the ability to target a particular type of storage, such as SSDs, SAS drives, and SATA drives, as a way of ensuring, for example, durability, replication, and erasure coding. For details, see the Storage Strategies guide for Red Hat Ceph Storage 7. To create a storage policy, use the following procedure: Create a new pool .rgw.buckets.special with the desired storage strategy. For example, a pool customized with erasure-coding, a particular CRUSH ruleset, the number of replicas, and the pg_num and pgp_num count. Get the zone group configuration and store it in a file: Syntax Example Add a special-placement entry under placement_target in the zonegroup.json file: Example Set the zone group with the modified zonegroup.json file: Example Get the zone configuration and store it in a file, for example, zone.json : Example Edit the zone file and add the new placement policy key under placement_pool : Example Set the new zone configuration: Example Update the zone group map: Example The special-placement entry is listed as a placement_target . To specify the storage policy when making a request: Example 9.2. Creating indexless buckets You can configure a placement target where created buckets do not use the bucket index to store objects index; that is, indexless buckets. Placement targets that do not use data replication or listing might implement indexless buckets. Indexless buckets provide a mechanism in which the placement target does not track objects in specific buckets. This removes a resource contention that happens whenever an object write happens and reduces the number of round trips that Ceph Object Gateway needs to make to the Ceph storage cluster. This can have a positive effect on concurrent operations and small object write performance. Important The bucket index does not reflect the correct state of the bucket, and listing these buckets does not correctly return their list of objects. This affects multiple features. Specifically, these buckets are not synced in a multi-zone environment because the bucket index is not used to store change information. Red Hat recommends not to use S3 object versioning on indexless buckets, because the bucket index is necessary for this feature. Note Using indexless buckets removes the limit of the max number of objects in a single bucket. Note Objects in indexless buckets cannot be listed from NFS. 
Prerequisites A running and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Root-level access to a Ceph Object Gateway node. Procedure Add a new placement target to the zonegroup: Example Add a new placement target to the zone: Example Set the zonegroup's default placement to indexless-placement : Example In this example, the buckets created in the indexless-placement target will be indexless buckets. Update and commit the period if the cluster is in a multi-site configuration: Example Restart the Ceph Object Gateways on all nodes in the storage cluster for the change to take effect: Syntax Example 9.3. Configure bucket index resharding As a storage administrator, you can configure bucket index resharding in single-site and multi-site deployments to improve performance. You can reshard a bucket index either manually offline or dynamically online. 9.3.1. Bucket index resharding The Ceph Object Gateway stores bucket index data in the index pool, which defaults to the .rgw.buckets.index pool. When the client puts many objects in a single bucket without setting quotas for the maximum number of objects per bucket, the index pool can suffer significant performance degradation. Bucket index resharding prevents performance bottlenecks when you add a high number of objects per bucket. You can configure bucket index resharding for new buckets or change the bucket index on the existing ones. Set the shard count to the prime number nearest to the calculated shard count. Shard counts that are prime numbers tend to work better at evenly distributing bucket index entries across shards. The bucket index can be resharded manually or dynamically. During the process of resharding bucket index dynamically, there is a periodic check of all the Ceph Object Gateway buckets and it detects buckets that require resharding. If a bucket has grown larger than the value specified in the rgw_max_objs_per_shard parameter, the Ceph Object Gateway reshards the bucket dynamically in the background. The default value for rgw_max_objs_per_shard is 100k objects per shard. Resharding bucket index dynamically works as expected on the upgraded single-site configuration without any modification to the zone or the zone group. A single-site configuration can be any of the following: A default zone configuration with no realm. A non-default configuration with at least one realm. A multi-realm single-site configuration. Note Versioned buckets may exhibit imbalanced indexes, especially if a small subset of objects are written and re-written. This issue may lead to large omaps on the versioned bucket when a large number of object uploads happen (around a million objects). 9.3.2. Recovering bucket index Resharding a bucket that was created with bucket_index_max_shards = 0 removes the bucket's metadata. However, you can restore the bucket indexes by recovering the affected buckets. The /usr/bin/rgw-restore-bucket-index tool creates temporary files in the /tmp directory. These temporary files consume space based on the number of objects in the bucket. Buckets with more than 10M objects need more than 4GB of free space in the /tmp directory. If the storage space in /tmp is exhausted, the tool fails with the following message: The temporary objects are removed. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed at a minimum of two sites. The jq package installed.
Procedure Perform either of the following two steps to recover bucket indexes: Run the radosgw-admin object reindex --bucket BUCKET_NAME --object OBJECT_NAME command. Run the script - /usr/bin/rgw-restore-bucket-index -b BUCKET_NAME -p DATA_POOL_NAME . Example Note The tool does not work for versioned buckets. The tool's scope is limited to a single site only, not multi-site; that is, if you run the rgw-restore-bucket-index tool at site-1, it does not recover objects in site-2, and vice versa. In a multi-site configuration, the recovery tool and the object re-index command should be executed at both sites for a bucket. 9.3.3. Limitations of bucket index resharding Important Use the following limitations with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team. Maximum number of objects in one bucket before it needs resharding: Use a maximum of 102,400 objects per bucket index shard. To take full advantage of resharding and maximize parallelism, provide a sufficient number of OSDs in the Ceph Object Gateway bucket index pool. This parallelization scales with the number of Ceph Object Gateway instances, and replaces the in-order index shard enumeration with a number sequence. The default locking timeout is extended from 60 seconds to 90 seconds. Maximum number of objects when using sharding: Based on prior testing, the number of bucket index shards currently supported is 65,521. Red Hat quality assurance has NOT performed full scalability testing on bucket sharding. You can reshard a bucket three times before the other zones catch up: Resharding is not recommended until the older generations synchronize. Around four generations of the buckets from reshards are supported. Once the limit is reached, dynamic resharding does not reshard the bucket again until at least one of the old log generations is fully trimmed. Using the command radosgw-admin bucket reshard throws the following error: 9.3.4. Configuring bucket index resharding in simple deployments To enable and configure bucket index resharding on all new buckets, use the rgw_override_bucket_index_max_shards parameter. You can set the parameter to one of the following values: 0 to disable bucket index sharding, which is the default value. A value greater than 0 to enable bucket sharding and to set the maximum number of shards. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed at a minimum of two sites. Procedure Calculate the recommended number of shards: Note The maximum number of bucket index shards currently supported is 65,521. Set the rgw_override_bucket_index_max_shards option accordingly: Syntax Replace VALUE with the recommended number of shards calculated: Example To configure bucket index resharding for all instances of the Ceph Object Gateway, set the rgw_override_bucket_index_max_shards parameter with the global option. To configure bucket index resharding only for a particular instance of the Ceph Object Gateway, add the rgw_override_bucket_index_max_shards parameter under the instance. Restart the Ceph Object Gateways on all nodes in the cluster to take effect: Syntax Example Additional Resources See the Resharding bucket index dynamically See the Resharding bucket index manually 9.3.5.
Configuring bucket index resharding in multi-site deployments In multi-site deployments, each zone can have a different index_pool setting to manage failover. To configure a consistent shard count for zones in one zone group, set the bucket_index_max_shards parameter in the configuration for that zone group. The default value of the bucket_index_max_shards parameter is 11. You can set the parameter to one of the following values: 0 to disable bucket index sharding. A value greater than 0 to enable bucket sharding and to set the maximum number of shards. Note Mapping the index pool, for each zone, if applicable, to a CRUSH ruleset of SSD-based OSDs might also help with bucket index performance. See the Establishing performance domains section for more information. Important To prevent sync issues in multi-site deployments, a bucket should not have more than three generation gaps. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed at a minimum of two sites. Procedure Calculate the recommended number of shards: Note The maximum number of bucket index shards currently supported is 65,521. Extract the zone group configuration to the zonegroup.json file: Example In the zonegroup.json file, set the bucket_index_max_shards parameter for each named zone: Syntax Replace VALUE with the recommended number of shards calculated: Example Reset the zone group: Example Update the period: Example Check if resharding is complete: Syntax Example Verification Check the sync status of the storage cluster: Example 9.3.6. Resharding bucket index dynamically You can reshard the bucket index dynamically by adding the bucket to the resharding queue. It gets scheduled to be resharded. The reshard threads run in the background and execute the scheduled resharding, one at a time. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed at a minimum of two sites. Procedure Set the rgw_dynamic_resharding parameter to true . Example Optional: Customize Ceph configuration using the following command: Syntax Replace OPTION with the following options: rgw_reshard_num_logs : The number of shards for the resharding log. The default value is 16 . rgw_reshard_bucket_lock_duration : The duration of the lock on a bucket during resharding. The default value is 360 seconds. rgw_dynamic_resharding : Enables or disables dynamic resharding. The default value is true . rgw_max_objs_per_shard : The maximum number of objects per shard. The default value is 100000 objects per shard. rgw_reshard_thread_interval : The maximum time between rounds of reshard thread processing. The default value is 600 seconds. Example Add a bucket to the resharding queue: Syntax Example List the resharding queue: Example Check the bucket log generations and shards: Example Check bucket resharding status: Syntax Example Process entries on the resharding queue immediately: Cancel pending bucket resharding: Warning You can only cancel pending resharding operations. Do not cancel ongoing resharding operations. Syntax Example Verification Check bucket resharding status: Syntax Example Additional resources See the Cleaning stale instances of bucket entries after resharding section to remove the stale bucket entries. See the Resharding bucket index manually . See the Configuring bucket index resharding in simple deployments . 9.3.7. Resharding bucket index dynamically in multi-site configuration Red Hat Ceph Storage supports dynamic bucket index resharding in multi-site configuration.
The feature allows buckets to be resharded in a multi-site configuration without interrupting the replication of their objects. When rgw_dynamic_resharding is enabled, it runs on each zone independently, and the zones might choose different shard counts for the same bucket. The following steps are for an existing Red Hat Ceph Storage cluster only. You need to enable the resharding feature manually on the existing zones and the zone groups after upgrading the storage cluster. Note Zones and zone groups are supported and enabled by default. Note You can reshard a bucket three times before the other zones catch up. See the Limitations of bucket index resharding for more details. Note If a bucket is created and uploaded with more than the threshold number of objects for resharding dynamically, you need to continue to write I/Os to old buckets to begin the resharding process. Prerequisites The Red Hat Ceph Storage clusters at both sites are upgraded to the latest version. All the Ceph Object Gateway daemons enabled at both the sites are upgraded to the latest version. Root-level access to all the nodes. Procedure Check if resharding is enabled on the zonegroup: Example If resharding is not listed under zonegroup features enabled for the zonegroup, then continue with the procedure. Enable the resharding feature on all the zonegroups in the multi-site configuration where Ceph Object Gateway is installed: Syntax Example Update the period and commit: Example Enable the resharding feature on all the zones in the multi-site configuration where Ceph Object Gateway is installed: Syntax Example Update the period and commit: Example Verify the resharding feature is enabled on the zones and zonegroups. You can see that each zone lists its supported_features and each zone group lists its enabled_features Example Check the sync status: Example In this example, the resharding feature is enabled for the us zonegroup. Optional: You can disable the resharding feature for the zonegroups: Important To disable resharding on any singular zone, set the rgw_dynamic_resharding configuration option to false on that specific zone. Disable the feature on all the zonegroups in the multi-site configuration where Ceph Object Gateway is installed: Syntax Example Update the period and commit: Example Additional Resources For more configurable parameters for dynamic bucket index resharding, see the Resharding bucket index dynamically section in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . 9.3.8. Resharding bucket index manually If a bucket has grown larger than the initial configuration for which it was optimized, reshard the bucket index pool by using the radosgw-admin bucket reshard command. This command performs the following tasks: Creates a new set of bucket index objects for the specified bucket. Distributes object entries across these bucket index objects. Creates a new bucket instance. Links the new bucket instance with the bucket so that all new index operations go through the new bucket indexes. Prints the old and the new bucket ID to the command output. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed at a minimum of two sites. Procedure Back up the original bucket index: Syntax Example Reshard the bucket index: Syntax Example Verification Check bucket resharding status: Syntax Example Additional Resources See the Configuring bucket index resharding in multi-site deployments in the Red Hat Ceph Storage Object Gateway Guide for more details.
See the Resharding bucket index dynamically . See the Configuring bucket index resharding in simple deployments . 9.3.9. Cleaning stale instances of bucket entries after resharding The resharding process might not clean stale instances of bucket entries automatically and these instances can impact performance of the storage cluster. Clean them manually to prevent the stale instances from negatively impacting the performance of the storage cluster. Important Contact Red Hat Support prior to cleaning the stale instances. Important Use this procedure only in simple deployments, not in multi-site clusters. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway installed. Procedure List stale instances: Clean the stale instances of the bucket entries: Verification Check bucket resharding status: Syntax Example 9.3.10. Enabling compression The Ceph Object Gateway supports server-side compression of uploaded objects using any of Ceph's compression plugins. These include: zlib : Supported. snappy : Supported. zstd : Supported. Configuration To enable compression on a zone's placement target, provide the --compression= TYPE option to the radosgw-admin zone placement modify command. The compression TYPE refers to the name of the compression plugin to use when writing new object data. Each compressed object stores the compression type. Changing the setting does not hinder the ability to decompress existing compressed objects, nor does it force the Ceph Object Gateway to recompress existing objects. This compression setting applies to all new objects uploaded to buckets using this placement target. To disable compression on a zone's placement target, provide the --compression= TYPE option to the radosgw-admin zone placement modify command and specify an empty string or none . Example After enabling or disabling compression, restart the Ceph Object Gateway instance so the change will take effect. Note Ceph Object Gateway creates a default zone and a set of pools. For production deployments, see the Creating a Realm section first. Statistics While all existing commands and APIs continue to report object and bucket sizes based on their uncompressed data, the radosgw-admin bucket stats command includes compression statistics for all buckets. The usage types for the radosgw-admin bucket stats command are: rgw.main refers to regular entries or objects. rgw.multimeta refers to the metadata of incomplete multipart uploads. rgw.cloudtiered refers to objects that a lifecycle policy has transitioned to a cloud tier. When configured with retain_head_object=true , a head object is left behind that no longer contains data, but can still serve the object's metadata via HeadObject requests. These stub head objects use the rgw.cloudtiered category. See the Transitioning data to Amazon S3 cloud service section in the Red Hat Ceph Storage Object Gateway Guide for more information. Syntax The size is the accumulated size of the objects in the bucket, uncompressed and unencrypted. The size_kb is the accumulated size in kilobytes and is calculated as ceiling(size/1024) . In this example, it is ceiling(1075028/1024) = 1050 . The size_actual is the accumulated size of all the objects after each object is distributed in a set of 4096-byte blocks. If a bucket has two objects, one of size 4100 bytes and the other of 8500 bytes, the first object is rounded up to 8192 bytes, and the second one rounded 12288 bytes, and their total for the bucket is 20480 bytes. 
The size_kb_actual is the actual size in kilobytes and is calculated as size_actual/1024 . In this example, it is 1331200/1024 = 1300 . The size_utilized is the total size of the data in bytes after it has been compressed and/or encrypted. Encryption could increase the size of the object while compression could decrease it. The size_kb_utilized is the total size in kilobytes and is calculated as ceiling(size_utilized/1024) . In this example, it is ceiling(592035/1024) = 579 . 9.4. User management Ceph Object Storage user management refers to users that are client applications of the Ceph Object Storage service; not the Ceph Object Gateway as a client application of the Ceph Storage Cluster. You must create a user, access key, and secret to enable client applications to interact with the Ceph Object Gateway service. There are two user types: User: The term 'user' reflects a user of the S3 interface. Subuser: The term 'subuser' reflects a user of the Swift interface. A subuser is associated with a user . You can create, modify, view, suspend, and remove users and subusers. Important When managing users in a multi-site deployment, ALWAYS issue the radosgw-admin command on a Ceph Object Gateway node within the master zone of the master zone group to ensure that users synchronize throughout the multi-site cluster. DO NOT create, modify, or delete users on a multi-site cluster from a secondary zone or a secondary zone group. In addition to creating user and subuser IDs, you may add a display name and an email address for a user. You can specify a key and secret, or generate a key and secret automatically. When generating or specifying keys, note that user IDs correspond to an S3 key type and subuser IDs correspond to a swift key type. Swift keys also have access levels of read , write , readwrite and full . User management command line syntax generally follows the pattern user COMMAND USER_ID where USER_ID is either the --uid= option followed by the user's ID (S3) or the --subuser= option followed by the user name (Swift). Syntax Additional options may be required depending on the command you issue. 9.4.1. Multi-tenancy The Ceph Object Gateway supports multi-tenancy for both the S3 and Swift APIs, where each user and bucket lies under a "tenant." Multi-tenancy prevents namespace clashing when multiple tenants are using common bucket names, such as "test", "main", and so forth. Each user and bucket lies under a tenant. For backward compatibility, a "legacy" tenant with an empty name is added. Whenever referring to a bucket without explicitly specifying a tenant, the Swift API will assume the "legacy" tenant. Existing users are also stored under the legacy tenant, so they access buckets and objects in the same way as in earlier releases. Tenants as such do not have any operations on them. They appear and disappear as needed, when users are administered. In order to create, modify, and remove users with explicit tenants, either an additional option --tenant is supplied, or a syntax " TENANT$USER " is used in the parameters of the radosgw-admin command. To create a user testx$tester for S3, run the following command: Example To create a user testx$tester for Swift, run one of the following commands: Example Note The subuser with an explicit tenant had to be quoted in the shell. 9.4.2. Create a user Use the user create command to create an S3-interface user. You MUST specify a user ID and a display name. You may also specify an email address.
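For illustration, a minimal sketch of the command follows; the user ID, display name, and email address are placeholders:
# Create an S3 user; an access key and secret are generated automatically
# unless you pass --access-key and --secret.
radosgw-admin user create --uid=janedoe --display-name="Jane Doe" --email=jane@example.com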
If you DO NOT specify a key or secret, radosgw-admin will generate them for you automatically. However, you may specify a key and/or a secret if you prefer not to use generated key/secret pairs. Syntax Example Important Check the key output. Sometimes radosgw-admin generates a JSON escape ( \ ) character, and some clients do not know how to handle JSON escape characters. Remedies include removing the JSON escape character ( \ ), encapsulating the string in quotes, regenerating the key to ensure that it does not have a JSON escape character, or specifying the key and secret manually. 9.4.3. Create a subuser To create a subuser (Swift interface), you must specify the user ID ( --uid= USERNAME ), a subuser ID and the access level for the subuser. If you DO NOT specify a key or secret, radosgw-admin generates them for you automatically. However, you can specify a key, a secret, or both if you prefer not to use generated key and secret pairs. Note full is not readwrite , as it also includes the access control policy. Syntax Example 9.4.4. Get user information To get information about a user, specify user info and the user ID ( --uid= USERNAME ). Example To get information about a tenanted user, specify both the user ID and the name of the tenant. 9.4.5. Modify user information To modify information about a user, you must specify the user ID ( --uid= USERNAME ) and the attributes you want to modify. Typical modifications are to keys and secrets, email addresses, display names, and access levels. Example To modify subuser values, specify subuser modify and the subuser ID. Example 9.4.6. Enable and suspend users When you create a user, the user is enabled by default. However, you may suspend user privileges and re-enable them at a later time. To suspend a user, specify user suspend and the user ID. To re-enable a suspended user, specify user enable and the user ID: Note Disabling the user disables the subuser. 9.4.7. Remove a user When you remove a user, the user and subuser are removed from the system. However, you may remove only the subuser if you wish. To remove a user (and subuser), specify user rm and the user ID. Syntax Example To remove the subuser only, specify subuser rm and the subuser name. Example Options include: Purge Data: The --purge-data option purges all data associated with the UID. Purge Keys: The --purge-keys option purges all keys associated with the UID. 9.4.8. Remove a subuser When you remove a subuser, you are removing access to the Swift interface. The user remains in the system. To remove the subuser, specify subuser rm and the subuser ID. Syntax Example Options include: Purge Keys: The --purge-keys option purges all keys associated with the UID. 9.4.9. Rename a user To change the name of a user, use the radosgw-admin user rename command. The time that this command takes depends on the number of buckets and objects that the user has. If the number is large, Red Hat recommends using the command in the Screen utility provided by the screen package. Prerequisites A working Ceph cluster. root or sudo access to the host running the Ceph Object Gateway. Installed Ceph Object Gateway. Procedure Rename a user: Syntax Example If a user is inside a tenant, specify both the user name and the tenant: Syntax Example Verify that the user has been renamed successfully: Syntax Example If a user is inside a tenant, use the TENANT USD USER_NAME format: Syntax Example Additional Resources The screen(1) manual page 9.4.10. 
Create a key To create a key for a user, you must specify key create . For a user, specify the user ID and the s3 key type. To create a key for a subuser, you must specify the subuser ID and the swift keytype. Example 9.4.11. Add and remove access keys Users and subusers must have access keys to use the S3 and Swift interfaces. When you create a user or subuser and you do not specify an access key and secret, the key and secret get generated automatically. You may create a key and either specify or generate the access key and/or secret. You may also remove an access key and secret. Options include: --secret= SECRET_KEY specifies a secret key, for example, manually generated. --gen-access-key generates a random access key (for S3 users by default). --gen-secret generates a random secret key. --key-type= KEY_TYPE specifies a key type. The options are: swift and s3. To add a key, specify the user: Example You might also specify a key and a secret. To remove an access key, you need to specify the user and the key: Find the access key for the specific user: Example The access key is the "access_key" value in the output: Example Specify the user ID and the access key from the step to remove the access key: Syntax Example 9.4.12. Add and remove admin capabilities The Ceph Storage Cluster provides an administrative API that enables users to run administrative functions via the REST API. By default, users DO NOT have access to this API. To enable a user to exercise administrative functionality, provide the user with administrative capabilities. To add administrative capabilities to a user, run the following command: Syntax You can add read, write, or all capabilities to users, buckets, metadata, and usage (utilization). Syntax Example To remove administrative capabilities from a user, run the following command: Example 9.5. Role management As a storage administrator, you can create, delete, or update a role and the permissions associated with that role with the radosgw-admin commands. A role is similar to a user and has permission policies attached to it. It can be assumed by any identity. If a user assumes a role, a set of dynamically created temporary credentials are returned to the user. A role can be used to delegate access to users, applications and services that do not have permissions to access some S3 resources. 9.5.1. Creating a role Create a role for the user with the radosgw-admin role create command. You need to create a user with assume-role-policy-doc parameter in the command, which is the trust relationship policy document that grants an entity the permission to assume the role. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. An S3 user created with user access. Procedure Create the role: Syntax Example The value for --path is / by default. 9.5.2. Getting a role Get the information about a role with the get command. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. Procedure Getting the information about the role: Syntax Example Additional Resources See the Creating a role section in the Red Hat Ceph Storage Object Gateway Guide for details. 9.5.3. Listing a role You can list the roles in the specific path with the list command. Prerequisites A running Red Hat Ceph Storage cluster. 
Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. Procedure List the roles: Syntax Example 9.5.4. Updating assume role policy document of a role You can update the assume role policy document that grants an entity permission to assume the role with the modify command. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. Procedure Modify the assume role policy document of a role: Syntax Example 9.5.5. Getting permission policy attached to a role You can get the specific permission policy attached to a role with the get command. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. Procedure Get the permission policy: Syntax Example 9.5.6. Deleting a role You can delete the role only after removing the permission policy attached to it. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. A role created. An S3 bucket created. An S3 user created with user access. Procedure Delete the policy attached to the role: Syntax Example Delete the role: Syntax Example 9.5.7. Updating a policy attached to a role You can either add or update the inline policy attached to a role with the put command. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. Procedure Update the inline policy: Syntax Example In this example, you attach the Policy1 to the role S3Access1 which allows all S3 actions on an example_bucket . 9.5.8. Listing permission policy attached to a role You can list the names of the permission policies attached to a role with the list command. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. Procedure List the names of the permission policies: Syntax Example 9.5.9. Deleting policy attached to a role You can delete the permission policy attached to a role with the rm command. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. Procedure Delete the permission policy: Syntax Example 9.5.10. Updating the session duration of a role You can update the session duration of a role with the update command to control the length of time that a user can be signed into the account with the provided credentials. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. A role created. An S3 user created with user access. 
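For reference, the update command that the following procedure runs typically takes a form like the hedged sketch below; the role name and duration are illustrative, and the exact option names should be checked against your radosgw-admin version:
# Extend the maximum session duration of a role to 2 hours (7200 seconds).
radosgw-admin role update --role-name=S3Access1 --max-session-duration=7200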
Procedure Update the max-session-duration using the update command: Syntax Example Verification List the roles to verify the updates: Example Additional Resources See the REST APIs for manipulating a role section in the Red Hat Ceph Storage Developer Guide for details. 9.6. Quota management The Ceph Object Gateway enables you to set quotas on users and buckets owned by users. Quotas include the maximum number of objects in a bucket and the maximum storage size in megabytes. Bucket: The --bucket option allows you to specify a quota for buckets the user owns. Maximum Objects: The --max-objects setting allows you to specify the maximum number of objects. A negative value disables this setting. Maximum Size: The --max-size option allows you to specify a quota for the maximum number of bytes. A negative value disables this setting. Quota Scope: The --quota-scope option sets the scope for the quota. The options are bucket and user . Bucket quotas apply to buckets a user owns. User quotas apply to a user. Important Buckets with a large number of objects can cause serious performance issues. The recommended maximum number of objects in a one bucket is 100,000. To increase this number, configure bucket index sharding. See the Configure bucket index resharding for details. 9.6.1. Set user quotas Before you enable a quota, you must first set the quota parameters. Syntax Example A negative value for num objects and / or max size means that the specific quota attribute check is disabled. 9.6.2. Enable and disable user quotas Once you set a user quota, you can enable it. Syntax You may disable an enabled user quota. Syntax 9.6.3. Set bucket quotas Bucket quotas apply to the buckets owned by the specified uid . They are independent of the user. Syntax A negative value for NUMBER_OF_OBJECTS , MAXIMUM_SIZE_IN_BYTES , or both means that the specific quota attribute check is disabled. 9.6.4. Enable and disable bucket quotas Once you set a bucket quota, you may enable it. Syntax You may disable an enabled bucket quota. Syntax 9.6.5. Get quota settings You may access each user's quota settings via the user information API. To read user quota setting information with the CLI interface, run the following command: Syntax To get quota settings for a tenanted user, specify the user ID and the name of the tenant: Syntax 9.6.6. Update quota stats Quota stats get updated asynchronously. You can update quota statistics for all users and all buckets manually to retrieve the latest quota stats. Syntax 9.6.7. Get user quota usage stats To see how much of the quota a user has consumed, run the following command: Syntax Note You should run the radosgw-admin user stats command with the --sync-stats option to receive the latest data. 9.6.8. Quota cache Quota statistics are cached for each Ceph Gateway instance. If there are multiple instances, then the cache can keep quotas from being perfectly enforced, as each instance will have a different view of the quotas. The options that control this are rgw bucket quota ttl , rgw user quota bucket sync interval , and rgw user quota sync interval . The higher these values are, the more efficient quota operations are, but the more out-of-sync multiple instances will be. The lower these values are, the closer to perfect enforcement multiple instances will achieve. If all three are 0, then quota caching is effectively disabled, and multiple instances will have perfect quota enforcement. 9.6.9. Reading and writing global quotas You can read and write quota settings in a zonegroup map. 
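For example, the global quota settings stored in the zonegroup map can be inspected and adjusted with commands along the following lines; this is a hedged sketch and the scope and values are illustrative:
# Show the global quota section of the zonegroup map.
radosgw-admin global quota get
# Set and enable a global bucket-scope quota of 1,024 objects and 1 GiB.
radosgw-admin global quota set --quota-scope=bucket --max-objects=1024 --max-size=1073741824
radosgw-admin global quota enable --quota-scope=bucket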
To get a zonegroup map: The global quota settings can be manipulated with the global quota counterparts of the quota set , quota enable , and quota disable commands, for example: Note In a multi-site configuration, where there is a realm and period present, changes to the global quotas must be committed using period update --commit . If there is no period present, the Ceph Object Gateways must be restarted for the changes to take effect. 9.7. Bucket management As a storage administrator, when using the Ceph Object Gateway you can manage buckets by moving them between users and renaming them. You can create bucket notifications to trigger on specific events. Also, you can find orphan or leaky objects within the Ceph Object Gateway that can occur over the lifetime of a storage cluster. Note When millions of objects are uploaded to a Ceph Object Gateway bucket with a high ingest rate, incorrect num_objects are reported with the radosgw-admin bucket stats command. With the radosgw-admin bucket list command you can correct the value of num_objects parameter. Note In a multi-site cluster, deletion of a bucket from the secondary site does not sync the metadata changes with the primary site. Hence, Red Hat recommends to delete a bucket only from the primary site and not from the secondary site. 9.7.1. Renaming buckets You can rename buckets. If you want to allow underscores in bucket names, then set the rgw_relaxed_s3_bucket_names option to true . Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. An existing bucket. Procedure List the buckets: Example Rename the bucket: Syntax Example If the bucket is inside a tenant, specify the tenant as well: Syntax Example Verify the bucket was renamed: Example 9.7.2. Removing buckets Remove buckets from a Red Hat Ceph Storage cluster with the Ceph Object Gateway configuration. When the bucket does not have objects, you can run the radosgw-admin bucket rm command. If there are objects in the buckets, then you can use the --purge-objects option. For multi-site configuration, Red Hat recommends to delete the buckets from the primary site. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. An existing bucket. Procedure List the buckets. Example Remove the bucket. Syntax Example If the bucket has objects, then run the following command: Syntax Example The --purge-objects option purges the objects and --bypass-gc option triggers deletion of objects without the garbage collector to make the process more efficient. Verify the bucket was removed. Example 9.7.3. Moving buckets The radosgw-admin bucket utility provides the ability to move buckets between users. To do so, link the bucket to a new user and change the ownership of the bucket to the new user. You can move buckets: between two non-tenanted users between two tenanted users between a non-tenanted user to a tenanted user Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. An S3 bucket. Various tenanted and non-tenanted users. 9.7.3.1. Moving buckets between non-tenanted users The radosgw-admin bucket chown command provides the ability to change the ownership of buckets and all objects they contain from one user to another. To do so, unlink a bucket from the current user, link it to a new user, and change the ownership of the bucket to the new user. 
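The procedure below relies on the bucket link and bucket chown subcommands; as a hedged sketch with illustrative bucket and user names:
# Link the bucket to the new owner, then change ownership of the bucket and its objects.
radosgw-admin bucket link --uid=user2 --bucket=data
radosgw-admin bucket chown --uid=user2 --bucket=data
# Verify the new owner in the bucket statistics.
radosgw-admin bucket stats --bucket=data | grep owner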
Procedure Link the bucket to a new user: Syntax Example Verify that the bucket has been linked to user2 successfully: Example Change the ownership of the bucket to the new user: Syntax Example Verify that the ownership of the data bucket has been successfully changed by checking the owner line in the output of the following command: Example 9.7.3.2. Moving buckets between tenanted users You can move buckets from one tenanted user to another. Procedure Link the bucket to a new user: Syntax Example Verify that the bucket has been linked to user2 successfully: Change the ownership of the bucket to the new user: Syntax Example Verify that the ownership of the data bucket has been successfully changed by checking the owner line in the output of the following command: 9.7.3.3. Moving buckets from non-tenanted users to tenanted users You can move buckets from a non-tenanted user to a tenanted user. Procedure Optional: If you do not already have multiple tenants, you can create them by enabling rgw_keystone_implicit_tenants and accessing the Ceph Object Gateway from an external tenant: Enable the rgw_keystone_implicit_tenants option: Example Access the Ceph Object Gateway from an external tenant using either the s3cmd or swift command: Example Or use s3cmd : Example The first access from an external tenant creates an equivalent Ceph Object Gateway user. Move a bucket to a tenanted user: Syntax Example Verify that the data bucket has been linked to tenanted-user successfully: Example Change the ownership of the bucket to the new user: Syntax Example Verify that the ownership of the data bucket has been successfully changed by checking the owner line in the output of the following command: Example 9.7.3.4. Finding orphan and leaky objects A healthy storage cluster does not have any orphan or leaky objects, but in some cases orphan or leaky objects can occur. An orphan object exists in a storage cluster and has an object ID associated with the RADOS object. However, the bucket index contains no reference linking that RADOS object to an S3 object. For example, if the Ceph Object Gateway goes down in the middle of an operation, this can cause some objects to become orphans. Also, an undiscovered bug can cause orphan objects to occur. You can see how the Ceph Object Gateway objects map to the RADOS objects. The radosgw-admin command provides a tool to search for and produce a list of these potential orphan or leaky objects. The radoslist subcommand displays the objects stored within a bucket, or within all buckets in the storage cluster. The rgw-orphan-list script displays orphan objects within a pool. Note The radoslist subcommand is replacing the deprecated orphans find and orphans finish subcommands. Important Do not use this command where Indexless buckets are in use as all the objects appear as orphaned . Another way to identify orphaned objects is to run the rados -p <pool> ls | grep BUCKET_ID command. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. Procedure Generate a list of objects that hold data within a bucket. Syntax Example Note If the BUCKET_NAME is omitted, then all objects in all buckets are displayed. Check the version of rgw-orphan-list . Example The version should be 2023-01-11 or newer. Create a directory where you need to generate the list of orphans. Example Navigate to the directory created earlier. Example From the pool list, select the pool in which you want to find orphans.
This script might run for a long time depending on the objects in the cluster. Example Example Enter the pool name to search for orphans. Important A data pool must be specified when using the rgw-orphan-list command, and not a metadata pool. View the details of the rgw-orphan-list tool usage. Syntax Example Run the ls -l command and verify that the files ending in error have zero length, which indicates that the script ran without any issues. Example Review the orphan objects listed. Example Remove orphan objects: Syntax Example Warning Verify you are removing the correct objects. Running the rados rm command removes data from the storage cluster. 9.7.3.5. Managing bucket index entries You can manage the bucket index entries of the Ceph Object Gateway in a Red Hat Ceph Storage cluster using the radosgw-admin bucket check sub-command. Each bucket index entry related to a piece of a multipart upload object is matched against its corresponding .meta index entry. There should be one .meta entry for all the pieces of a given multipart upload. If the command fails to find a corresponding .meta entry for a piece, it lists out the "orphaned" piece entries in a section of the output. The stats for the bucket are stored in the bucket index headers. This phase loads those headers and also iterates through all the plain object entries in the bucket index and recalculates the stats. It then displays the actual and calculated stats in sections labeled "existing_header" and "calculated_header" respectively, so they can be compared. If you use the --fix option with the bucket check sub-command, it removes the "orphaned" entries from the bucket index and also overwrites the existing stats in the header with those that it calculated. It causes all entries, including the multiple entries used in versioning, to be listed in the output. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. A newly created bucket. Procedure Check the bucket index of a specific bucket: Syntax Example Fix the inconsistencies in the bucket index, including removal of orphaned objects: Syntax Example 9.7.3.6. Bucket notifications Bucket notifications provide a way to send information out of the Ceph Object Gateway when certain events happen in the bucket. Bucket notifications can be sent to HTTP, AMQP0.9.1, and Kafka endpoints. A notification entry must be created to send bucket notifications for events on a specific bucket and to a specific topic. A bucket notification can be created on a subset of event types or by default for all event types. The bucket notification can filter out events based on key prefix or suffix, regular expression matching the keys, and on the metadata attributes attached to the object, or the object tags. Bucket notifications have a REST API to provide configuration and control interfaces for the bucket notification mechanism. Note The bucket notifications API is enabled by default. If the rgw_enable_apis configuration parameter is explicitly set, ensure that s3 and notifications are included. To verify this, run the ceph --admin-daemon /var/run/ceph/ceph-client.rgw. NAME .asok config get rgw_enable_apis command. Replace NAME with the Ceph Object Gateway instance name.
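For example, the check mentioned in the note above might look as follows on a gateway host; the instance name in the admin socket path is illustrative:
# Confirm that the s3 and notifications APIs are enabled on this gateway instance.
ceph --admin-daemon /var/run/ceph/ceph-client.rgw.host01.rgw0.asok config get rgw_enable_apis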
Topic management using CLI You can manage list, get, and remove topics for the Ceph Object Gateway buckets: List topics : Run the following command to list the configuration of all topics: Example Get topics : Run the following command to get the configuration of a specific topic: Example Remove topics : Run the following command to remove the configuration of a specific topic: Example Note The topic is removed even if the Ceph Object Gateway bucket is configured to that topic. 9.7.3.7. Creating bucket notifications Create bucket notifications at the bucket level. The notification configuration has the Red Hat Ceph Storage Object Gateway S3 events, ObjectCreated , ObjectRemoved , and ObjectLifecycle:Expiration . These need to be published with the destination to send the bucket notifications. Bucket notifications are S3 operations. Prerequisites A running Red Hat Ceph Storage cluster. A running HTTP server, RabbitMQ server, or a Kafka server. Root-level access. Installation of the Red Hat Ceph Storage Object Gateway. User access key and secret key. Endpoint parameters. Important Red Hat supports ObjectCreate events, such as put , post , multipartUpload , and copy . Red Hat also supports ObjectRemove events, such as object_delete and s3_multi_object_delete . Procedure Create an S3 bucket. Create an SNS topic for http , amqp , or kafka protocol. Create an S3 bucket notification for s3:objectCreate , s3:objectRemove , and s3:ObjectLifecycle:Expiration events: Example Create S3 objects in the bucket. Verify the object creation events at the http , rabbitmq , or kafka receiver. Delete the objects. Verify the object deletion events at the http , rabbitmq , or kafka receiver. 9.7.4. S3 bucket replication API The S3 bucket replication API is implemented, and allows users to create replication rules between different buckets. Note though that while the AWS replication feature allows bucket replication within the same zone, Ceph Object Gateway does not allow it at the moment. However, the Ceph Object Gateway API also added a Zone array that allows users to select to what zones the specific bucket will be synced. 9.7.4.1. Creating S3 bucket replication Create a replication configuration for a bucket or replace an existing one. A replication configuration must include at least one rule. Each rule identifies a subset of objects to replicate by filtering the objects in the source bucket. Note If you have created a bucket-level policy using S3 replication, the pipe configuration by default sets the user mode as user destination parameter. For information on destination parameters see, Destination Params: User Mode . Prerequisites A running Red Hat Ceph Storage cluster with multi-site Ceph object gateway configured. For more information on creating multi-site sync policies, see Creating a sync policy group . Zone group level policy created. For more information on creating zone group policies, see Bucket granular sync policies . Procedure Create a replication configuration file that contains the details of replication: Syntax Example Create the S3 API put bucket replication: Syntax Example Verification Verify the sync policy, by using the sync policy get command. Syntax Note When applying replication policy, the rules are converted to sync-policy rules, known as pipes , and are categorized as enabled and disabled . Enabled : These pipes are enabled and are active and the group status is set to 'rgw_sync_policy_group:STATUS'. For example, s3-bucket-replication:enabled . 
Disabled : The pipes under this set are not active and the group status is set to 'rgw_sync_policy_group:STATUS'. For example, s3-bucket-replication:disabled . Because multiple rules can be configured as part of a replication policy, the policy uses two separate groups (one in the 'enabled' state and another in the 'allowed' state) for accurate mapping of each rule. Example Additional Resources See the Using multi-site sync policies section in the Red Hat Ceph Storage Object Gateway Guide for details. 9.7.4.2. Getting S3 bucket replication You can retrieve the replication configuration of the bucket. Prerequisites A running Red Hat Ceph Storage cluster with multi-site Ceph object gateway configured. For more information on creating multi-site sync policies, see Creating a sync policy group . Zone group level policy created. For more information on creating zone group policies, see Bucket granular sync policies . An S3 bucket replication created. For more information, see S3 bucket replication API . Procedure Get the S3 API bucket replication configuration: Syntax Example 9.7.4.3. Deleting S3 bucket replication Delete a replication configuration from a bucket. The bucket owner can grant permission to others to remove the replication configuration. Prerequisites A running Red Hat Ceph Storage cluster with multi-site Ceph object gateway configured. For more information on creating multi-site sync policies, see Creating a sync policy group . Zone group level policy created. For more information on creating zone group policies, see Bucket granular sync policies . An S3 bucket replication created. For more information, see S3 bucket replication API . Procedure Delete the S3 API bucket replication configuration: Syntax Example Verification Verify that the existing replication rules are deleted: Syntax Example 9.7.4.4. Disabling S3 bucket replication for user As an administrator, you can set a user policy that restricts other users from performing any S3 replication API operations on buckets that reside under those users. Prerequisites A running Red Hat Ceph Storage cluster with multi-site Ceph object gateway configured. For more information on creating multi-site sync policies, see Creating a sync policy group . Zone group level policy created. For more information on creating zone group policies, see Bucket granular sync policies . Procedure Create a user policy configuration file to deny access to the S3 bucket replication API: Example As an admin user, set the user policy on the user to disable access to the S3 replication API: Syntax Example Verification As an admin user, verify the user policy that is set: Syntax Example As a user on whom the user policy is set by the admin user, try performing the following S3 bucket replication API operations to verify that the action is denied as expected. Creating S3 bucket replication Getting S3 bucket replication Deleting S3 bucket replication Additional Resources See the S3 bucket replication API section in the Red Hat Ceph Storage Object Gateway Guide for details. 9.8. Bucket lifecycle As a storage administrator, you can use a bucket lifecycle configuration to manage your objects so they are stored effectively throughout their lifetime. For example, you can transition objects to less expensive storage classes, archive, or even delete them based on your use case. RADOS Gateway supports S3 API object expiration by using rules defined for a set of bucket objects. Each rule has a prefix, which selects the objects, and a number of days after which objects become unavailable.
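As a concrete illustration of such a rule, the following hedged sketch writes a minimal expiration policy and applies it with the AWS CLI; the endpoint, bucket name, prefix, and retention period are all placeholders:
# Expire objects under the logs/ prefix after 30 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF
aws --endpoint-url=http://rgw.example.com:8080 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json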
Note The radosgw-admin lc reshard command is deprecated in Red Hat Ceph Storage 3.3 and not supported in Red Hat Ceph Storage 4 and later releases. 9.8.1. Creating a lifecycle management policy You can manage a bucket lifecycle policy configuration using standard S3 operations rather than using the radosgw-admin command. RADOS Gateway supports only a subset of the Amazon S3 API policy language applied to buckets. The lifecycle configuration contains one or more rules defined for a set of bucket objects. Prerequisites A running Red Hat Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. An S3 user created with user access. Access to a Ceph Object Gateway client with the AWS CLI package installed. Procedure Create a JSON file for lifecycle configuration: Example Add the specific lifecycle configuration rules in the file: Example The lifecycle configuration example expires objects in the images directory after 1 day. Set the lifecycle configuration on the bucket: Syntax Example In this example, the lifecycle.json file exists in the current directory. Verification Retrieve the lifecycle configuration for the bucket: Syntax Example Optional: From the Ceph Object Gateway node, log into the Cephadm shell and retrieve the bucket lifecycle configuration: Syntax Example Additional Resources See the S3 bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details. For more information on using the AWS CLI to manage lifecycle configurations, see the Setting lifecycle configuration on a bucket section of the Amazon Simple Storage Service documentation. 9.8.2. Deleting a lifecycle management policy You can delete the lifecycle management policy for a specified bucket by using the s3api delete-bucket-lifecycle command. Prerequisites A running Red Hat Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. An S3 user created with user access. Access to a Ceph Object Gateway client with the AWS CLI package installed. Procedure Delete a lifecycle configuration: Syntax Example Verification Retrieve lifecycle configuration for the bucket: Syntax Example Optional: From the Ceph Object Gateway node, retrieve the bucket lifecycle configuration: Syntax Example Note The command does not return any information if a bucket lifecycle policy is not present. Additional Resources See the S3 bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details. 9.8.3. Updating a lifecycle management policy You can update a lifecycle management policy by using the s3cmd put-bucket-lifecycle-configuration command. Note The put-bucket-lifecycle-configuration overwrites an existing bucket lifecycle configuration. If you want to retain any of the current lifecycle policy settings, you must include them in the lifecycle configuration file. Prerequisites A running Red Hat Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. An S3 bucket created. An S3 user created with user access. Access to a Ceph Object Gateway client with the AWS CLI package installed. 
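For reference, re-applying an updated configuration with the AWS CLI, as the procedure below does, typically looks like this hedged sketch; the endpoint and bucket name are illustrative:
# Overwrite the bucket's lifecycle configuration with the contents of lifecycle.json,
# then confirm the result.
aws --endpoint-url=http://rgw.example.com:8080 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json
aws --endpoint-url=http://rgw.example.com:8080 s3api get-bucket-lifecycle-configuration --bucket testbucket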
Procedure Create a JSON file for the lifecycle configuration: Example Add the specific lifecycle configuration rules to the file: Example Update the lifecycle configuration on the bucket: Syntax Example Verification Retrieve the lifecycle configuration for the bucket: Syntax Example Optional: From the Ceph Object Gateway node, log into the Cephadm shell and retrieve the bucket lifecycle configuration: Syntax Example Additional Resources See the Red Hat Ceph Storage Developer Guide for details on Amazon S3 bucket lifecycles . 9.8.4. Monitoring bucket lifecycles You can monitor lifecycle processing and manually process the lifecycle of buckets with the radosgw-admin lc list and radosgw-admin lc process commands. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Object Gateway node. Creation of an S3 bucket with a lifecycle configuration policy applied. Procedure Log into the Cephadm shell: Example List bucket lifecycle progress: Example The bucket lifecycle processing status can be one of the following: UNINITIAL - The process has not run yet. PROCESSING - The process is currently running. COMPLETE - The process has completed. Optional: You can manually process bucket lifecycle policies: Process the lifecycle policy for a single bucket: Syntax Example Process all bucket lifecycle policies immediately: Example Verification List the bucket lifecycle policies: Additional Resources See the S3 bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details. 9.8.5. Configuring lifecycle expiration window You can set the time that the lifecycle management process runs each day by setting the rgw_lifecycle_work_time parameter. By default, lifecycle processing occurs once per day, at midnight. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. Root-level access to a Ceph Object Gateway node. Procedure Log into the Cephadm shell: Example Set the lifecycle expiration time: Syntax Replace %d:%d-%d:%d with start_hour:start_minute-end_hour:end_minute . Example Verification Retrieve the lifecycle expiration work time: Example Additional Resources See the S3 bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details. 9.8.6. S3 bucket lifecycle transition within a storage cluster You can use a bucket lifecycle configuration to manage objects so objects are stored effectively throughout the object's lifetime. The object lifecycle transition rule allows you to manage, and effectively store the objects throughout the object's lifetime. You can transition objects to less expensive storage classes, archive, or even delete them. You can create storage classes for: Fast media, such as SSD or NVMe for I/O sensitive workloads. Slow magnetic media, such as SAS or SATA for archiving. You can create a schedule for data movement between a hot storage class and a cold storage class. You can schedule this movement after a specified time so that the object expires and is deleted permanently for example you can transition objects to a storage class 30 days after you have created or even archived the objects to a storage class one year after creating them. You can do this through a transition rule. This rule applies to an object transitioning from one storage class to another. The lifecycle configuration contains one or more rules using the <Rule> element. Additional Resources See the Red Hat Ceph Storage Developer Guide for details on bucket lifecycle . 9.8.7. 
Transitioning an object from one storage class to another The object lifecycle transition rule allows you to transition an object from one storage class to another class. You can migrate data between replicated pools, erasure-coded pools, replicated to erasure-coded pools, or erasure-coded to replicated pools with the Ceph Object Gateway lifecycle transition policy. Note In a multi-site configuration, when the lifecycle transition rule is applied on the first site, to transition objects from one data pool to another in the same storage cluster, then the same rule is valid for the second site, if the second site has the respective data pool created and enabled with rgw application. Prerequisites Installation of the Ceph Object Gateway software. Root-level access to the Ceph Object Gateway node. An S3 user created with user access. Procedure Create a new data pool: Syntax Example Add a new storage class: Syntax Example Provide the zone placement information for the new storage class: Syntax Example Note Consider setting the compression_type when creating cold or archival data storage pools with write once. Enable the rgw application on the data pool: Syntax Example Restart all the rgw daemons. Create a bucket: Example Add the object: Example Create a second data pool: Syntax Example Add a new storage class: Syntax Example Provide the zone placement information for the new storage class: Syntax Example Enable rgw application on the data pool: Syntax Example Restart all the rgw daemons. To view the zone group configuration, run the following command: Syntax To view the zone configuration, run the following command: Syntax Create a bucket: Example List the objects prior to transition: Example Create a JSON file for lifecycle configuration: Example Add the specific lifecycle configuration rule in the file: Example The lifecycle configuration example shows an object that will transition from the default STANDARD storage class to the hot.test storage class after 5 days, again transitions after 20 days to the cold.test storage class, and finally expires after 365 days in the cold.test storage class. Set the lifecycle configuration on the bucket: Example Retrieve the lifecycle configuration on the bucket: Example Verify that the object is transitioned to the given storage class: Example Additional Resources See the Red Hat Ceph Storage Developer Guide for details on bucket lifecycle . 9.8.8. Enabling object lock for S3 Using the S3 object lock mechanism, you can use object lock concepts like retention period, legal hold, and bucket configuration to implement Write-Once-Read-Many (WORM) functionality as part of the custom workflow overriding data deletion permissions. Important The object version(s), not the object name, is the defining and required value for object lock to perform correctly to support the GOVERNANCE or COMPLIANCE mode. You need to know the version of the object when it is written so that you can retrieve it at a later time. Prerequisites A running Red Hat Ceph Storage cluster with Ceph Object Gateway installed. Root-level access to the Ceph Object Gateway node. S3 user with version-bucket creation access. Procedure Create a bucket with object lock enabled: Syntax Example Set a retention period for the bucket: Syntax Example Note You can choose either the GOVERNANCE or COMPLIANCE mode for the RETENTION_MODE in S3 object lock, to apply different levels of protection to any object version that is protected by object lock. 
In GOVERNANCE mode, users cannot overwrite or delete an object version or alter its lock settings unless they have special permissions. In COMPLIANCE mode, a protected object version cannot be overwritten or deleted by any user, including the root user in your AWS account. When an object is locked in COMPLIANCE mode, its RETENTION_MODE cannot be changed, and its retention period cannot be shortened. COMPLIANCE mode helps ensure that an object version cannot be overwritten or deleted for the duration of the period. Put the object into the bucket with a retention time set: Syntax Example Upload a new object using the same key: Syntax Example Command line options Set an object lock legal hold on an object version: Example Note Using the object lock legal hold operation, you can place a legal hold on an object version, thereby preventing an object version from being overwritten or deleted. A legal hold doesn't have an associated retention period and hence, remains in effect until removed. List the objects from the bucket to retrieve only the latest version of the object: Example List the object versions from the bucket: Example Access objects using version-ids: Example 9.9. Usage The Ceph Object Gateway logs usage for each user. You can track user usage within date ranges too. Options include: Start Date: The --start-date option allows you to filter usage stats from a particular start date ( format: yyyy-mm-dd[HH:MM:SS] ). End Date: The --end-date option allows you to filter usage up to a particular date ( format: yyyy-mm-dd[HH:MM:SS] ). Log Entries: The --show-log-entries option allows you to specify whether or not to include log entries with the usage stats (options: true | false ). Note You can specify time with minutes and seconds, but it is stored with 1 hour resolution. 9.9.1. Show usage To show usage statistics, specify the usage show . To show usage for a particular user, you must specify a user ID. You may also specify a start date, end date, and whether or not to show log entries. Example You may also show a summary of usage information for all users by omitting a user ID. Example 9.9.2. Trim usage With heavy use, usage logs can begin to take up storage space. You can trim usage logs for all users and for specific users. You may also specify date ranges for trim operations. Example 9.10. Ceph Object Gateway data layout Although RADOS only knows about pools and objects with their Extended Attributes ( xattrs ) and object map (OMAP), conceptually Ceph Object Gateway organizes its data into three different kinds: metadata bucket index data Metadata There are three sections of metadata: user : Holds user information. bucket : Holds a mapping between bucket name and bucket instance ID. bucket.instance : Holds bucket instance information. You can use the following commands to view metadata entries: Syntax Example Every metadata entry is kept on a single RADOS object. Note A Ceph Object Gateway object might consist of several RADOS objects, the first of which is the head that contains the metadata, such as manifest, Access Control List (ACL), content type, ETag, and user-defined metadata. The metadata is stored in xattrs . The head might also contain up to 512 KB of object data, for efficiency and atomicity. The manifest describes how each object is laid out in RADOS objects. Bucket index It is a different kind of metadata, and kept separately. The bucket index holds a key-value map in RADOS objects. 
By default, it is a single RADOS object per bucket, but it is possible to shard the map over multiple RADOS objects. The map itself is kept in OMAP associated with each RADOS object. The key of each OMAP is the name of the objects, and the value holds some basic metadata of that object, the metadata that appears when listing the bucket. Each OMAP holds a header, and we keep some bucket accounting metadata in that header such as number of objects, total size, and the like. Important When using the radosgw-admin tool, ensure that the tool and the Ceph Cluster are of the same version. The use of mismatched versions is not supported. Note OMAP is a key-value store, associated with an object, in a way similar to how extended attributes associate with a POSIX file. An object's OMAP is not physically located in the object's storage, but its precise implementation is invisible and immaterial to the Ceph Object Gateway. Data Objects data is kept in one or more RADOS objects for each Ceph Object Gateway object. 9.10.1. Object lookup path When accessing objects, REST APIs come to Ceph Object Gateway with three parameters: Account information, which has the access key in S3 or account name in Swift Bucket or container name Object name or key At present, Ceph Object Gateway only uses account information to find out the user ID and for access control. It uses only the bucket name and object key to address the object in a pool. Account information The user ID in Ceph Object Gateway is a string, typically the actual user name from the user credentials and not a hashed or mapped identifier. When accessing a user's data, the user record is loaded from an object USER_ID in the default.rgw.meta pool with users.uid namespace. .Bucket names They are represented in the default.rgw.meta pool with root namespace. Bucket record is loaded in order to obtain a marker, which serves as a bucket ID. Object names The object is located in the default.rgw.buckets.data pool. Object name is MARKER_KEY , for example default.7593.4_image.png , where the marker is default.7593.4 and the key is image.png . These concatenated names are not parsed and are passed down to RADOS only. Therefore, the choice of the separator is not important and causes no ambiguity. For the same reason, slashes are permitted in object names, such as keys. 9.10.1.1. Multiple data pools It is possible to create multiple data pools so that different users' buckets are created in different RADOS pools by default, thus providing the necessary scaling. The layout and naming of these pools is controlled by a policy setting. 9.10.2. Bucket and object listing Buckets that belong to a given user are listed in an OMAP of an object named USER_ID .buckets , for example, foo.buckets , in the default.rgw.meta pool with users.uid namespace. These objects are accessed when listing buckets, when updating bucket contents, and updating and retrieving bucket statistics such as quota. These listings are kept consistent with buckets in the .rgw pool. Note See the user-visible, encoded class cls_user_bucket_entry and its nested class cls_user_bucket for the values of these OMAP entries. Objects that belong to a given bucket are listed in a bucket index. The default naming for index objects is .dir.MARKER in the default.rgw.buckets.index pool. Additional Resources See the Configure bucket index resharding section in the Red Hat Ceph Storage Object Gateway Guide for more details. 9.10.3. 
Object Gateway data layout parameters This is a list of data layout parameters for Ceph Object Gateway. Known pools: .rgw.root Unspecified region, zone, and global information records, one per object. ZONE .rgw.control notify. N ZONE .rgw.meta Multiple namespaces with different kinds of metadata namespace: root BUCKET .bucket.meta. BUCKET : MARKER # see put_bucket_instance_info() The tenant is used to disambiguate buckets, but not bucket instances. Example namespace: users.uid Contains per-user information (RGWUserInfo) in USER objects and per-user lists of buckets in omaps of USER .buckets objects. The USER might contain the tenant if non-empty. Example namespace: users.email Unimportant namespace: users.keys 47UA98JSTJZ9YAN3OS3O This allows Ceph Object Gateway to look up users by their access keys during authentication. namespace: users.swift test:tester ZONE .rgw.buckets.index Objects are named .dir. MARKER , each contains a bucket index. If the index is sharded, each shard appends the shard index after the marker. ZONE .rgw.buckets.data default.7593.4__shadow_.488urDFerTYXavx4yAd-Op8mxehnvTI_1 MARKER_KEY An example of a marker would be default.16004.1 or default.7593.4 . The current format is ZONE . INSTANCE_ID . BUCKET_ID , but once generated, a marker is not parsed again, so its format might change freely in the future. Additional Resources See the Ceph Object Gateway data layout in the Red Hat Ceph Storage Object Gateway Guide for more details. 9.11. Rate limits for ingesting data As a storage administrator, you can set rate limits on users and buckets based on the operations and bandwidth when saving an object in a Red Hat Ceph Storage cluster with a Ceph Object Gateway configuration. 9.11.1. Purpose of rate limits in a storage cluster You can set rate limits on users and buckets in a Ceph Object Gateway configuration. The rate limit includes the maximum number of read operations, write operations per minute, and how many bytes per minute can be written or read per user or per bucket. Requests that use GET or HEAD method in the REST are "read requests", else they are "write requests". The Ceph Object Gateway tracks the user and bucket requests separately and does not share with other gateways, which means that the desired limits configured should be divided by the number of active Object Gateways. For example, if user A should be limited by ten ops per minute and there are two Ceph Object Gateways in the cluster, the limit over user A should be five, that is, ten ops per minute for two Ceph Object Gateways. If the requests are not balanced between Ceph Object Gateways, the rate limit may be underutilized. For example, if the ops limit is five and there are two Ceph Object Gateways, but the load balancer sends load only to one of those Ceph Object Gateways, the effective limit would be five ops, because this limit is enforced per Ceph Object Gateway. If there is a limit reached for the bucket, but not for the user, or vice versa the request would be canceled as well. The bandwidth counting happens after the request is accepted. As a result, this request proceeds even if the bucket or the user has reached its bandwidth limit in the middle of the request. The Ceph Object Gateway keeps a "debt" of used bytes more than the configured value and prevents this user or bucket from sending more requests until their "debt" is paid. The "debt" maximum size is twice the max-read/write-bytes per minute. 
If user A has a 1 byte read limit per minute and this user tries to GET a 1 GB object, the user can do it. After user A completes this 1 GB operation, the Ceph Object Gateway blocks the user request for up to two minutes until user A is able to send the GET request again. Different options for limiting rates: Bucket: The --bucket option allows you to specify a rate limit for a bucket. User: The --uid option allows you to specify a rate limit for a user. Maximum read ops: The --max-read-ops setting allows you to specify the maximum number of read ops per minute per Ceph Object Gateway. A value of 0 disables this setting, which means unlimited access. Maximum read bytes: The --max-read-bytes setting allows you to specify the maximum number of read bytes per minute per Ceph Object Gateway. A value of 0 disables this setting, which means unlimited access. Maximum write ops: The --max-write-ops setting allows you to specify the maximum number of write ops per minute per Ceph Object Gateway. A value of 0 disables this setting, which means unlimited access. Maximum write bytes: The --max-write-bytes setting allows you to specify the maximum number of write bytes per minute per Ceph Object Gateway. A value of 0 disables this setting, which means unlimited access. Rate limit scope: The --ratelimit-scope option sets the scope for the rate limit. The options are bucket , user , and anonymous . Bucket rate limit applies to buckets, user rate limit applies to a user, and anonymous applies to an unauthenticated user. Anonymous scope is only available for global rate limit. 9.11.2. Enabling user rate limit You can set rate limits on users in a Ceph Object Gateway configuration. The rate limit on users includes the maximum number of read operations, write operations per minute, and how many bytes per minute can be written or read per user. You can enable the rate limit on users after setting the value of rate limits by using the radosgw-admin ratelimit set command with the ratelimit-scope set as user . Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed. Procedure Set the rate limit for the user: Syntax Example A value of 0 for NUMBER_OF_OPERATIONS or NUMBER_OF_BYTES means that the specific rate limit attribute check is disabled. Get the user rate limit: Syntax Example Enable user rate limit: Syntax Example Optional: Disable user rate limit: Syntax Example 9.11.3. Enabling bucket rate limit You can set rate limits on buckets in a Ceph Object Gateway configuration. The rate limit on buckets includes the maximum number of read operations, write operations per minute, and how many bytes per minute can be written or read per bucket. You can enable the rate limit on buckets after setting the value of rate limits by using the radosgw-admin ratelimit set command with the ratelimit-scope set as bucket . Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed. Procedure Set the rate limit for the bucket: Syntax Example A value of 0 for NUMBER_OF_OPERATIONS or NUMBER_OF_BYTES means that the specific rate limit attribute check is disabled. Get the bucket rate limit: Syntax Example Enable bucket rate limit: Syntax Example Optional: Disable bucket rate limit: Syntax Example 9.11.4. Configuring global rate limits You can read or write global rate limit settings in the period configuration.
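The Syntax and Example steps above correspond to the radosgw-admin ratelimit subcommands. The following is a minimal sketch, assuming a hypothetical user named testing and a hypothetical bucket named mybucket; the numeric values are purely illustrative:

# Set, inspect, enable, and disable a per-user rate limit.
radosgw-admin ratelimit set --ratelimit-scope=user --uid=testing --max-read-ops=1024 --max-write-ops=1024 --max-read-bytes=1073741824 --max-write-bytes=1073741824
radosgw-admin ratelimit get --ratelimit-scope=user --uid=testing
radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testing
radosgw-admin ratelimit disable --ratelimit-scope=user --uid=testing
# The same subcommands apply per bucket with --ratelimit-scope=bucket.
radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=mybucket --max-read-ops=1024 --max-write-ops=1024
radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=mybucket

Remember that these limits are enforced per Ceph Object Gateway instance, so divide the intended overall limit by the number of active gateways, as described above.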
You can override the user or bucket rate limit configuration by manipulating the global rate limit settings with the global ratelimit parameter, which is the counterpart of ratelimit set , ratelimit enable , and ratelimit disable commands. Note In a multi-site configuration, where there is a realm and period present, changes to the global rate limit must be committed using period update --commit command. If there is no period present, the Ceph Object Gateways must be restarted for the changes to take effect. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph Object Gateway installed. Procedure View the global rate limit settings: Syntax Example Configure and enable rate limit scope for the buckets: Set the global rate limits for bucket: Syntax Example Enable bucket rate limit: Syntax Example Configure and enable rate limit scope for authenticated users: Set the global rate limits for users: Syntax Example Enable user rate limit: Syntax Example Configure and enable rate limit scope for unauthenticated users: Set the global rate limits for unauthenticated users: Syntax Example Enable user rate limit: Syntax Example 9.12. Optimize the Ceph Object Gateway's garbage collection When new data objects are written into the storage cluster, the Ceph Object Gateway immediately allocates the storage for these new objects. After you delete or overwrite data objects in the storage cluster, the Ceph Object Gateway deletes those objects from the bucket index. Some time afterward, the Ceph Object Gateway then purges the space that was used to store the objects in the storage cluster. The process of purging the deleted object data from the storage cluster is known as Garbage Collection, or GC. Garbage collection operations typically run in the background. You can configure these operations to either run continuously, or to run only during intervals of low activity and light workloads. By default, the Ceph Object Gateway conducts GC operations continuously. Because GC operations are a normal part of Ceph Object Gateway operations, deleted objects that are eligible for garbage collection exist most of the time. 9.12.1. Viewing the garbage collection queue Before you purge deleted and overwritten objects from the storage cluster, use radosgw-admin to view the objects awaiting garbage collection. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Object Gateway. Procedure To view the queue of objects awaiting garbage collection: Example Note To list all entries in the queue, including unexpired entries, use the --include-all option. 9.12.2. Adjusting Garbage Collection Settings The Ceph Object Gateway allocates storage for new and overwritten objects immediately. Additionally, the parts of a multi-part upload also consume some storage. The Ceph Object Gateway purges the storage space used for deleted objects after deleting the objects from the bucket index. Similarly, the Ceph Object Gateway will delete data associated with a multi-part upload after the multi-part upload completes or when the upload has gone inactive or failed to complete for a configurable amount of time. The process of purging the deleted object data from the Red Hat Ceph Storage cluster is known as garbage collection (GC). Viewing the objects awaiting garbage collection can be done with the following command: Garbage collection is a background activity that runs continuously or during times of low loads, depending upon how the storage administrator configures the Ceph Object Gateway. 
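The following is a minimal sketch of the corresponding commands for the global scopes described above and for viewing the garbage collection queue; the numeric values are illustrative:

# View the global rate limit settings stored in the period configuration.
radosgw-admin global ratelimit get
# Set and enable global limits for the bucket, user, and anonymous scopes.
radosgw-admin global ratelimit set --ratelimit-scope=bucket --max-read-ops=1024
radosgw-admin global ratelimit enable --ratelimit-scope=bucket
radosgw-admin global ratelimit set --ratelimit-scope=user --max-read-ops=1024
radosgw-admin global ratelimit enable --ratelimit-scope=user
radosgw-admin global ratelimit set --ratelimit-scope=anonymous --max-read-ops=1024
radosgw-admin global ratelimit enable --ratelimit-scope=anonymous
# In a multi-site configuration with a realm and period, commit the change.
radosgw-admin period update --commit
# View the queue of objects awaiting garbage collection; add --include-all for unexpired entries.
radosgw-admin gc list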
By default, the Ceph Object Gateway conducts garbage collection operations continuously. Since garbage collection operations are a normal function of the Ceph Object Gateway, especially with object delete operations, objects eligible for garbage collection exist most of the time. Some workloads can temporarily or permanently outpace the rate of garbage collection activity. This is especially true of delete-heavy workloads, where many objects get stored for a short period of time and then deleted. For these types of workloads, storage administrators can increase the priority of garbage collection operations relative to other operations with the following configuration parameters: The rgw_gc_obj_min_wait configuration option waits a minimum length of time, in seconds, before purging a deleted object's data. The default value is two hours, or 7200 seconds. The object is not purged immediately, because a client might be reading the object. Under heavy workloads, this setting can consume too much storage or have a large number of deleted objects to purge. Red Hat recommends not setting this value below 30 minutes, or 1800 seconds. The rgw_gc_processor_period configuration option is the garbage collection cycle run time. That is, the amount of time between the start of consecutive runs of garbage collection threads. If garbage collection runs longer than this period, the Ceph Object Gateway will not wait before running a garbage collection cycle again. The rgw_gc_max_concurrent_io configuration option specifies the maximum number of concurrent IO operations that the gateway garbage collection thread will use when purging deleted data. Under delete heavy workloads, consider increasing this setting to a larger number of concurrent IO operations. The rgw_gc_max_trim_chunk configuration option specifies the maximum number of keys to remove from the garbage collector log in a single operation. Under delete heavy operations, consider increasing the maximum number of keys so that more objects are purged during each garbage collection operation. Starting with Red Hat Ceph Storage 4.1, offloading the index object's OMAP from the garbage collection log helps lessen the performance impact of garbage collection activities on the storage cluster. Some new configuration parameters have been added to Ceph Object Gateway to tune the garbage collection queue, as follows: The rgw_gc_max_deferred_entries_size configuration option sets the maximum size of deferred entries in the garbage collection queue. The rgw_gc_max_queue_size configuration option sets the maximum queue size used for garbage collection. This value should not be greater than osd_max_object_size minus rgw_gc_max_deferred_entries_size minus 1 KB. The rgw_gc_max_deferred configuration option sets the maximum number of deferred entries stored in the garbage collection queue. Note These garbage collection configuration parameters are for Red Hat Ceph Storage 7 and higher. Note In testing, with an evenly balanced delete-write workload, such as 50% delete and 50% write operations, the storage cluster fills completely in 11 hours. This is because Ceph Object Gateway garbage collection fails to keep pace with the delete operations. The cluster status switches to the HEALTH_ERR state if this happens. Aggressive settings for parallel garbage collection tunables significantly delayed the onset of storage cluster fill in testing and can be helpful for many workloads. 
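These tunables are ordinary client.rgw configuration options and can be read and changed with the ceph config command. The following is a minimal sketch with illustrative values; restart the Ceph Object Gateway afterward so the changes take effect:

# Inspect the current garbage collection settings.
ceph config get client.rgw rgw_gc_obj_min_wait
ceph config get client.rgw rgw_gc_processor_period
# Wait at least 30 minutes before purging deleted object data, as recommended above.
ceph config set client.rgw rgw_gc_obj_min_wait 1800
# Increase concurrency and trim size for delete-heavy workloads.
ceph config set client.rgw rgw_gc_max_concurrent_io 20
ceph config set client.rgw rgw_gc_max_trim_chunk 64

Monitor the storage cluster after changing these values to confirm that more aggressive garbage collection does not degrade client performance.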
Typical real-world storage cluster workloads are not likely to cause a storage cluster fill primarily due to garbage collection. 9.12.3. Adjusting garbage collection for delete-heavy workloads Some workloads may temporarily or permanently outpace the rate of garbage collection activity. This is especially true of delete-heavy workloads, where many objects get stored for a short period of time and are then deleted. For these types of workloads, consider increasing the priority of garbage collection operations relative to other operations. Contact Red Hat Support with any additional questions about Ceph Object Gateway Garbage Collection. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure Set the value of rgw_gc_max_concurrent_io to 20 , and the value of rgw_gc_max_trim_chunk to 64 : Example Restart the Ceph Object Gateway to allow the changed settings to take effect. Monitor the storage cluster during GC activity to verify that the increased values do not adversely affect performance. Important Never modify the value for the rgw_gc_max_objs option in a running cluster. You should only change this value before deploying the RGW nodes. Additional Resources Ceph RGW - GC Tuning Options RGW General Settings Configuration Reference 9.13. Optimize the Ceph Object Gateway's data object storage Bucket lifecycle configuration optimizes data object storage to increase its efficiency and to provide effective storage throughout the lifetime of the data. The S3 API in the Ceph Object Gateway currently supports a subset of the AWS bucket lifecycle configuration actions: Expiration NoncurrentVersionExpiration AbortIncompleteMultipartUpload Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all of the nodes in the storage cluster. 9.13.1. Parallel thread processing for bucket life cycles The Ceph Object Gateway now allows for parallel thread processing of bucket life cycles across multiple Ceph Object Gateway instances. Increasing the number of threads that run in parallel enables the Ceph Object Gateway to process large workloads more efficiently. In addition, the Ceph Object Gateway now uses a numbered sequence for index shard enumeration instead of using in-order numbering. 9.13.2. Optimizing the bucket lifecycle Two options in the Ceph configuration file affect the efficiency of bucket lifecycle processing: rgw_lc_max_worker specifies the number of lifecycle worker threads to run in parallel. This enables the simultaneous processing of both bucket and index shards. The default value for this option is 3. rgw_lc_max_wp_worker specifies the number of threads in each lifecycle worker thread's work pool. This option helps to accelerate processing for each bucket. The default value for this option is 3. For a workload with a large number of buckets - for example, a workload with thousands of buckets - consider increasing the value of the rgw_lc_max_worker option. For a workload with a smaller number of buckets but with a higher number of objects in each bucket - such as in the hundreds of thousands - consider increasing the value of the rgw_lc_max_wp_worker option. Note Before increasing the value of either of these options, please validate current storage cluster performance and Ceph Object Gateway utilization. Red Hat does not recommend that you assign a value of 10 or above for either of these options. Prerequisites A running Red Hat Ceph Storage cluster. 
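For reference, the two lifecycle options described above map to ordinary client.rgw settings, and the procedure below uses them. A minimal sketch with illustrative values in the recommended 3 to 9 range:

# Increase the number of parallel lifecycle worker threads.
ceph config set client.rgw rgw_lc_max_worker 7
# Increase the number of threads in each lifecycle worker's work pool.
ceph config set client.rgw rgw_lc_max_wp_worker 7
# Restart the Ceph Object Gateway service so the new values take effect.
ceph orch restart rgw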
Root-level access to all of the nodes in the storage cluster. Procedure To increase the number of threads to run in parallel, set the value of rgw_lc_max_worker to a value between 3 and 9 : Example To increase the number of threads in each thread's work pool, set the value of rgw_lc_max_wp_worker to a value between 3 and 9 : Example Restart the Ceph Object Gateway to allow the changed settings to take effect. Monitor the storage cluster to verify that the increased values do not adversely affect performance. Additional Resources For more information about the bucket lifecycle and parallel thread processing, see Bucket lifecycle parallel processing For more information about Ceph Object Gateway lifecycle, contact Red Hat Support . 9.14. Transitioning data to Amazon S3 cloud service You can transition data to a remote cloud service as part of the lifecycle configuration using storage classes to reduce cost and improve manageability. The transition is unidirectional and data cannot be transitioned back from the remote zone. This feature is to enable data transition to multiple cloud providers such as Amazon (S3). Use cloud-s3 as tier-type to configure the remote cloud S3 object store service to which the data needs to be transitioned. These do not need a data pool and are defined in terms of the zonegroup placement targets. Prerequisites A Red Hat Ceph Storage cluster with Ceph Object Gateway installed. User credentials for the remote cloud service, Amazon S3. Target path created on Amazon S3. s3cmd installed on the bootstrapped node. Amazon AWS configured locally to download data. Procedure Create a user with access key and secret key: Syntax Example On the bootstrapped node, add a storage class with the tier type as cloud-s3 : Note Once a storage class is created with the --tier-type=cloud-s3 option , it cannot be later modified to any other storage class type. Syntax Example Update storage_class : Note If the cluster is part of a multi-site setup, run period update --commit so that the zonegroup changes are propagated to all the zones in the multi-site. Note Make sure access_key and secret do not start with a digit. Mandatory parameters are: access_key is the remote cloud S3 access key used for a specific connection. secret is the secret key for the remote cloud S3 service. endpoint is the URL of the remote cloud S3 service endpoint. region (for AWS) is the remote cloud S3 service region name. Optional parameters are: target_path defines how the target path is created. The target path specifies a prefix to which the source bucket-name/object-name is appended. If not specified, the target_path created is rgwx- ZONE_GROUP_NAME - STORAGE_CLASS_NAME -cloud-bucket . target_storage_class defines the target storage class to which the object transitions. If not specified, the object is transitioned to STANDARD storage class. retain_head_object , if true, retains the metadata of the object transitioned to cloud. If false (default), the object is deleted post transition. This option is ignored for current versioned objects. multipart_sync_threshold specifies that objects this size or larger are transitioned to the cloud using multipart upload. multipart_min_part_size specifies the minimum part size to use when transitioning objects using multipart upload. 
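The following is a minimal sketch of creating and configuring such a storage class, assuming a hypothetical storage class named CLOUDTIER, a remote endpoint of http://s3.example.com, and placeholder credentials; substitute your own values for the parameters described above:

# Add a cloud-s3 storage class to the default placement target of the default zonegroup.
radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=CLOUDTIER --tier-type=cloud-s3
# Point the tier at the remote S3 endpoint and set the optional parameters.
radosgw-admin zonegroup placement modify --rgw-zonegroup=default --placement-id=default-placement --storage-class=CLOUDTIER --tier-config=endpoint=http://s3.example.com,access_key=EXAMPLEKEY,secret=EXAMPLESECRET,target_path="rgwx-default-cloudtier",retain_head_object=true
# In a multi-site setup, commit the change so all zones receive it.
radosgw-admin period update --commit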
Syntax Example Restart the Ceph Object Gateway: Syntax Example Exit the shell and as a root user, configure Amazon S3 on your bootstrapped node: Example Create the S3 bucket: Syntax Example Create your file, input all the data, and move it to the S3 service: Syntax Example Create the lifecycle configuration transition policy: Syntax Example Set the lifecycle configuration transition policy: Syntax Example Log in to cephadm shell : Example Restart the Ceph Object Gateway: Syntax Example Verification On the source cluster, verify if the data has moved to S3 with the radosgw-admin lc list command: Example Verify object transition at cloud endpoint: Example List the objects in the bucket: Example List the contents of the S3 bucket: Example Check the information of the file: Example Download data locally from Amazon S3: Configure AWS: Example List the contents of the AWS bucket: Example Download data from S3: Example 9.15. Transitioning data to Azure cloud service You can transition data to a remote cloud service as part of the lifecycle configuration using storage classes to reduce cost and improve manageability. The transition is unidirectional and data cannot be transitioned back from the remote zone. This feature is to enable data transition to multiple cloud providers such as Azure. One of the key differences with the AWS configuration is that you need to configure the multi-cloud gateway (MCG) and use MCG to translate from the S3 protocol to Azure Blob. Use cloud-s3 as tier-type to configure the remote cloud S3 object store service to which the data needs to be transitioned. These do not need a data pool and are defined in terms of the zonegroup placement targets. Prerequisites A Red Hat Ceph Storage cluster with Ceph Object Gateway installed. User credentials for the remote cloud service, Azure. Azure configured locally to download data. s3cmd installed on the bootstrapped node. Azure container for the MCG namespace created. In this example, it is mcgnamespace . Procedure Create a user with access key and secret key: Syntax Example As a root user, configure AWS CLI with the user credentials and create a bucket with default placement: Syntax Example Verify that the bucket is using default-placement with the placement rule: Example Log into the OpenShift Container Platform (OCP) cluster with OpenShift Data Foundation (ODF) deployed: Example Configure the multi-cloud gateway (MCG) namespace Azure bucket running on an OCP cluster in Azure: Syntax Example Create an MCG bucket class pointing to the namespacestore : Example Create an object bucket claim (OBC) for the transition to cloud: Syntax Example Note Use the credentials provided by OBC to configure zonegroup placement on the Ceph Object Gateway. On the bootstrapped node, create a storage class with the tier type as cloud-s3 on the default placement within the default zonegroup on the previously configured MCG in Azure: Note Once a storage class is created with the --tier-type=cloud-s3 option , it cannot be later modified to any other storage class type. Syntax Example Configure the cloud S3 cloud storage class: Syntax Important Setting the retain_head_object parameter to true retains the metadata or the head of the object to list the objects that are transitioned. Example Restart the Ceph Object Gateway: Syntax Example Create the lifecycle configuration transition policy for the bucket created previously.
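Before moving on, the following is a minimal sketch of what such a transition policy can look like, assuming the bucket named transition and the AZURE storage class used in this example, with the gateway reachable at http://host01:80; adjust the rule to your environment. Save the rule as lifecycle.json:

{
  "Rules": [
    {
      "ID": "Transition-to-AZURE",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "AZURE" } ]
    }
  ]
}

Apply it with the AWS CLI against the Ceph Object Gateway endpoint:

aws s3api put-bucket-lifecycle-configuration --bucket transition --lifecycle-configuration file://lifecycle.json --endpoint-url=http://host01:80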
In this example, the bucket is transition : Syntax Note All the objects in the bucket older than 30 days are transferred to the cloud storage class called AZURE . Example Apply the bucket lifecycle configuration using the AWS CLI: Syntax Example Optional: Get the lifecycle configuration: Syntax Example Optional: Get the lifecycle configuration with the radosgw-admin lc list command: Example Note The UNINITIAL status implies that the lifecycle configuration has not yet been processed. It moves to the COMPLETED state after the transition process is complete. Log in to cephadm shell : Example Restart the Ceph Object Gateway daemon: Syntax Example Migrate data from the source cluster to Azure: Example Verify transition of data: Example Verify if the data has moved to Azure with the rados ls command: Example If the data is not transitioned, you can run the lc process command: Example This forces the lifecycle process to start and evaluate all the configured bucket lifecycle policies. It then starts the transition of data wherever needed. Verification Run the radosgw-admin lc list command to verify the completion of the transition: Example List the objects in the bucket: Example List the objects on the cluster: Example The objects are 0 bytes in size. You can list the objects, but cannot copy them since they are transitioned to Azure. Check the head of the object using the S3 API: Example You can see that the storage class has changed from STANDARD to CLOUDTIER . | [
"radosgw-admin zonegroup --rgw-zonegroup= ZONE_GROUP_NAME get > FILE_NAME .json",
"radosgw-admin zonegroup --rgw-zonegroup=default get > zonegroup.json",
"{ \"name\": \"default\", \"api_name\": \"\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"master_zone\": \"\", \"zones\": [{ \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 5 }], \"placement_targets\": [{ \"name\": \"default-placement\", \"tags\": [] }, { \"name\": \"special-placement\", \"tags\": [] }], \"default_placement\": \"default-placement\" }",
"radosgw-admin zonegroup set < zonegroup.json",
"radosgw-admin zone get > zone.json",
"{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [{ \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\", \"data_extra_pool\": \".rgw.buckets.extra\" } }, { \"key\": \"special-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets.special\", \"data_extra_pool\": \".rgw.buckets.extra\" } }] }",
"radosgw-admin zone set < zone.json",
"radosgw-admin period update --commit",
"curl -i http://10.0.0.1/swift/v1/TestContainer/file.txt -X PUT -H \"X-Storage-Policy: special-placement\" -H \"X-Auth-Token: AUTH_rgwtxxxxxx\"",
"radosgw-admin zonegroup placement add --rgw-zonegroup=\"default\" --placement-id=\"indexless-placement\"",
"radosgw-admin zone placement add --rgw-zone=\"default\" --placement-id=\"indexless-placement\" --data-pool=\"default.rgw.buckets.data\" --index-pool=\"default.rgw.buckets.index\" --data_extra_pool=\"default.rgw.buckets.non-ec\" --placement-index-type=\"indexless\"",
"radosgw-admin zonegroup placement default --placement-id \"indexless-placement\"",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"ln: failed to access '/tmp/rgwrbi-object-list.4053207': No such file or directory",
"/usr/bin/rgw-restore-bucket-index -b bucket-large-1 -p local-zone.rgw.buckets.data marker is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 bucket_id is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 number of bucket index shards is 5 data pool is local-zone.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. The list of objects that we will attempt to restore can be found in \"/tmp/rgwrbi-object-list.49946\". Please review the object names in that file (either below or in another window/terminal) before proceeding. Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: view Viewing Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: proceed! Proceeding NOTICE: Bucket stats are currently incorrect. They can be restored with the following command after 2 minutes: radosgw-admin bucket list --bucket=bucket-large-1 --allow-unordered --max-entries=1073741824 Would you like to take the time to recalculate bucket stats now? [yes/no] yes Done real 2m16.530s user 0m1.082s sys 0m0.870s",
"time rgw-restore-bucket-index --proceed serp-bu-ver-1 default.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. marker is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 bucket_id is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 Error: this bucket appears to be versioned, and this tool cannot work with versioned buckets.",
"Bucket _BUCKET_NAME_ already has too many log generations (4) from previous reshards that peer zones haven't finished syncing. Resharding is not recommended until the old generations sync, but you can force a reshard with `--yes-i-really-mean-it`.",
"number of objects expected in a bucket / 100,000",
"ceph config set client.rgw rgw_override_bucket_index_max_shards VALUE",
"ceph config set client.rgw rgw_override_bucket_index_max_shards 12",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"number of objects expected in a bucket / 100,000",
"radosgw-admin zonegroup get > zonegroup.json",
"bucket_index_max_shards = VALUE",
"bucket_index_max_shards = 12",
"radosgw-admin zonegroup set < zonegroup.json",
"radosgw-admin period update --commit",
"radosgw-admin reshard status --bucket BUCKET_NAME",
"radosgw-admin reshard status --bucket data",
"radosgw-admin sync status",
"radosgw-admin period get",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_reshard_num_logs 23",
"radosgw-admin reshard add --bucket BUCKET --num-shards NUMBER",
"radosgw-admin reshard add --bucket data --num-shards 10",
"radosgw-admin reshard list",
"radosgw-admin bucket layout --bucket data { \"layout\": { \"resharding\": \"None\", \"current_index\": { \"gen\": 1, \"layout\": { \"type\": \"Normal\", \"normal\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } }, \"logs\": [ { \"gen\": 0, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 0, \"layout\": { \"num_shards\": 11, \"hash_type\": \"Mod\" } } } }, { \"gen\": 1, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 1, \"layout\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } } } ] } }",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"radosgw-admin reshard process",
"radosgw-admin reshard cancel --bucket BUCKET",
"radosgw-admin reshard cancel --bucket data",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"radosgw-admin sync status",
"radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --enable-feature=resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup=us --enable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --enable-feature=resharding",
"radosgw-admin zone modify --rgw-zone=us-east --enable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin period get \"zones\": [ { \"id\": \"505b48db-6de0-45d5-8208-8c98f7b1278d\", \"name\": \"us_east\", \"endpoints\": [ \"http://10.0.208.11:8080\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"resharding\" ] \"default_placement\": \"default-placement\", \"realm_id\": \"26cf6f23-c3a0-4d57-aae4-9b0010ee55cc\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ]",
"radosgw-admin sync status realm 26cf6f23-c3a0-4d57-aae4-9b0010ee55cc (usa) zonegroup 33a17718-6c77-493e-99fe-048d3110a06e (us) zone 505b48db-6de0-45d5-8208-8c98f7b1278d (us_east) zonegroup features enabled: resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --disable-feature=resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup=us --disable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin bi list --bucket= BUCKET > BUCKET .list.backup",
"radosgw-admin bi list --bucket=data > data.list.backup",
"radosgw-admin bucket reshard --bucket= BUCKET --num-shards= NUMBER",
"radosgw-admin bucket reshard --bucket=data --num-shards=100",
"radosgw-admin reshard status --bucket bucket",
"radosgw-admin reshard status --bucket data",
"radosgw-admin reshard stale-instances list",
"radosgw-admin reshard stale-instances rm",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"[root@host01 ~] radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib { \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"data_pool\": \"default.rgw.buckets.data\", \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0, \"compression\": \"zlib\" } } ], }",
"radosgw-admin bucket stats --bucket= BUCKET_NAME { \"usage\": { \"rgw.main\": { \"size\": 1075028, \"size_actual\": 1331200, \"size_utilized\": 592035, \"size_kb\": 1050, \"size_kb_actual\": 1300, \"size_kb_utilized\": 579, \"num_objects\": 104 } }, }",
"radosgw-admin user <create|modify|info|rm|suspend|enable|check|stats> <--uid= USER_ID |--subuser= SUB_USER_NAME > [other-options]",
"radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --access_key TESTER --secret test123 user create",
"radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --subuser tester:swift --key-type swift --access full subuser create radosgw-admin key create --subuser 'testxUSDtester:swift' --key-type swift --secret test123",
"radosgw-admin user create --uid= USER_ID [--key-type= KEY_TYPE ] [--gen-access-key|--access-key= ACCESS_KEY ] [--gen-secret | --secret= SECRET_KEY ] [--email= EMAIL ] --display-name= DISPLAY_NAME",
"radosgw-admin user create --uid=janedoe --access-key=11BS02LGFB6AL6H1ADMW --secret=vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY [email protected] --display-name=Jane Doe",
"{ \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}",
"radosgw-admin subuser create --uid= USER_ID --subuser= SUB_USER_ID --access=[ read | write | readwrite | full ]",
"radosgw-admin subuser create --uid=janedoe --subuser=janedoe:swift --access=full { \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"janedoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}",
"radosgw-admin user info --uid=janedoe",
"radosgw-admin user info --uid=janedoe --tenant=test",
"radosgw-admin user modify --uid=janedoe --display-name=\"Jane E. Doe\"",
"radosgw-admin subuser modify --subuser=janedoe:swift --access=full",
"radosgw-admin user suspend --uid=johndoe",
"radosgw-admin user enable --uid=johndoe",
"radosgw-admin user rm --uid= USER_ID [--purge-keys] [--purge-data]",
"radosgw-admin user rm --uid=johndoe --purge-data",
"radosgw-admin subuser rm --subuser=johndoe:swift --purge-keys",
"radosgw-admin subuser rm --subuser= SUB_USER_ID",
"radosgw-admin subuser rm --subuser=johndoe:swift",
"radosgw-admin user rename --uid= CURRENT_USER_NAME --new-uid= NEW_USER_NAME",
"radosgw-admin user rename --uid=user1 --new-uid=user2 { \"user_id\": \"user2\", \"display_name\": \"user 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"user2\", \"access_key\": \"59EKHI6AI9F8WOW8JQZJ\", \"secret_key\": \"XH0uY3rKCUcuL73X0ftjXbZqUbk0cavD11rD8MsA\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin user rename --uid USER_NAME --new-uid NEW_USER_NAME --tenant TENANT",
"radosgw-admin user rename --uid=testUSDuser1 --new-uid=testUSDuser2 --tenant test 1000 objects processed in tvtester1. Next marker 80_tVtester1_99 2000 objects processed in tvtester1. Next marker 64_tVtester1_44 3000 objects processed in tvtester1. Next marker 48_tVtester1_28 4000 objects processed in tvtester1. Next marker 2_tVtester1_74 5000 objects processed in tvtester1. Next marker 14_tVtester1_53 6000 objects processed in tvtester1. Next marker 87_tVtester1_61 7000 objects processed in tvtester1. Next marker 6_tVtester1_57 8000 objects processed in tvtester1. Next marker 52_tVtester1_91 9000 objects processed in tvtester1. Next marker 34_tVtester1_74 9900 objects processed in tvtester1. Next marker 9_tVtester1_95 1000 objects processed in tvtester2. Next marker 82_tVtester2_93 2000 objects processed in tvtester2. Next marker 64_tVtester2_9 3000 objects processed in tvtester2. Next marker 48_tVtester2_22 4000 objects processed in tvtester2. Next marker 32_tVtester2_42 5000 objects processed in tvtester2. Next marker 16_tVtester2_36 6000 objects processed in tvtester2. Next marker 89_tVtester2_46 7000 objects processed in tvtester2. Next marker 70_tVtester2_78 8000 objects processed in tvtester2. Next marker 51_tVtester2_41 9000 objects processed in tvtester2. Next marker 33_tVtester2_32 9900 objects processed in tvtester2. Next marker 9_tVtester2_83 { \"user_id\": \"testUSDuser2\", \"display_name\": \"User 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testUSDuser2\", \"access_key\": \"user2\", \"secret_key\": \"123456789\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin user info --uid= NEW_USER_NAME",
"radosgw-admin user info --uid=user2",
"radosgw-admin user info --uid= TENANT USD USER_NAME",
"radosgw-admin user info --uid=testUSDuser2",
"radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret { \"user_id\": \"johndoe\", \"rados_uid\": 0, \"display_name\": \"John Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"subusers\": [ { \"id\": \"johndoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"QFAMEDSJP5DEKJO0DDXY\", \"secret_key\": \"iaSFLDVvDdQt6lkNzHyW4fPLZugBAI1g17LO0+87\"}], \"swift_keys\": [ { \"user\": \"johndoe:swift\", \"secret_key\": \"E9T2rUZNu2gxUjcwUBO8n\\/Ev4KX6\\/GprEuH4qhu1\"}]}",
"radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret",
"radosgw-admin user info --uid=johndoe",
"radosgw-admin user info --uid=johndoe { \"user_id\": \"johndoe\", \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"0555b35654ad1656d804\", \"secret_key\": \"h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==\" } ], }",
"radosgw-admin key rm --uid= USER_ID --access-key ACCESS_KEY",
"radosgw-admin key rm --uid=johndoe --access-key 0555b35654ad1656d804",
"radosgw-admin caps add --uid= USER_ID --caps= CAPS",
"--caps=\"[users|buckets|metadata|usage|zone]=[*|read|write|read, write]\"",
"radosgw-admin caps add --uid=johndoe --caps=\"users=*\"",
"radosgw-admin caps remove --uid=johndoe --caps={caps}",
"radosgw-admin role create --role-name= ROLE_NAME [--path==\" PATH_TO_FILE \"] [--assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT ]",
"radosgw-admin role create --role-name=S3Access1 --path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role get --role-name= ROLE_NAME",
"radosgw-admin role get --role-name=S3Access1 { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role list",
"radosgw-admin role list [ { \"RoleId\": \"85fb46dd-a88a-4233-96f5-4fb54f4353f7\", \"RoleName\": \"kvm-sts\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts\", \"CreateDate\": \"2022-09-13T11:55:09.39Z\", \"MaxSessionDuration\": 7200, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }, { \"RoleId\": \"9116218d-4e85-4413-b28d-cdfafba24794\", \"RoleName\": \"kvm-sts-1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts-1\", \"CreateDate\": \"2022-09-16T00:05:57.483Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]",
"radosgw-admin role-trust-policy modify --role-name= ROLE_NAME --assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT",
"radosgw-admin role-trust-policy modify --role-name=S3Access1 --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role-policy get --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role-policy get --role-name=S3Access1 --policy-name=Policy1 { \"Permission policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":[\\\"s3:*\\\"],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"}]}\" }",
"radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role policy delete --role-name=S3Access1 --policy-name=Policy1",
"radosgw-admin role delete --role-name= ROLE_NAME",
"radosgw-admin role delete --role-name=S3Access1",
"radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOCUMENT",
"radosgw-admin role-policy put --role-name=S3Access1 --policy-name=Policy1 --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}",
"radosgw-admin role-policy list --role-name= ROLE_NAME",
"radosgw-admin role-policy list --role-name=S3Access1 [ \"Policy1\" ]",
"radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role policy delete --role-name=S3Access1 --policy-name=Policy1",
"radosgw-admin role update --role-name= ROLE_NAME --max-session-duration=7200",
"radosgw-admin role update --role-name=test-sts-role --max-session-duration=7200",
"radosgw-admin role list [ { \"RoleId\": \"d4caf33f-caba-42f3-8bd4-48c84b4ea4d3\", \"RoleName\": \"test-sts-role\", \"Path\": \"/\", \"Arn\": \"arn:aws:iam:::role/test-role\", \"CreateDate\": \"2022-09-07T20:01:15.563Z\", \"MaxSessionDuration\": 7200, <<<<<< \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]",
"radosgw-admin quota set --quota-scope=user --uid= USER_ID [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]",
"radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024",
"radosgw-admin quota enable --quota-scope=user --uid= USER_ID",
"radosgw-admin quota disable --quota-scope=user --uid= USER_ID",
"radosgw-admin quota set --uid= USER_ID --quota-scope=bucket --bucket= BUCKET_NAME [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]",
"radosgw-admin quota enable --quota-scope=bucket --uid= USER_ID",
"radosgw-admin quota disable --quota-scope=bucket --uid= USER_ID",
"radosgw-admin user info --uid= USER_ID",
"radosgw-admin user info --uid= USER_ID --tenant= TENANT",
"radosgw-admin user stats --uid= USER_ID --sync-stats",
"radosgw-admin user stats --uid= USER_ID",
"radosgw-admin global quota get",
"radosgw-admin global quota set --quota-scope bucket --max-objects 1024 radosgw-admin global quota enable --quota-scope bucket",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket link --bucket= ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= USER_ID",
"radosgw-admin bucket link --bucket=s3bucket1 --bucket-new-name=s3newb --uid=testuser",
"radosgw-admin bucket link --bucket= tenant / ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= TENANT USD USER_ID",
"radosgw-admin bucket link --bucket=test/s3bucket1 --bucket-new-name=s3newb --uid=testUSDtestuser",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"s3newb\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket rm --bucket= BUCKET_NAME",
"radosgw-admin bucket rm --bucket=s3bucket1",
"radosgw-admin bucket rm --bucket= BUCKET --purge-objects --bypass-gc",
"radosgw-admin bucket rm --bucket=s3bucket1 --purge-objects --bypass-gc",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket link --uid= USER --bucket= BUCKET",
"radosgw-admin bucket link --uid=user2 --bucket=data",
"radosgw-admin bucket list --uid=user2 [ \"data\" ]",
"radosgw-admin bucket chown --uid= user --bucket= bucket",
"radosgw-admin bucket chown --uid=user2 --bucket=data",
"radosgw-admin bucket list --bucket=data",
"radosgw-admin bucket link --bucket= CURRENT_TENANT / BUCKET --uid= NEW_TENANT USD USER",
"radosgw-admin bucket link --bucket=test/data --uid=test2USDuser2",
"radosgw-admin bucket list --uid=testUSDuser2 [ \"data\" ]",
"radosgw-admin bucket chown --bucket= NEW_TENANT / BUCKET --uid= NEW_TENANT USD USER",
"radosgw-admin bucket chown --bucket='test2/data' --uid='testUSDtuser2'",
"radosgw-admin bucket list --bucket=test2/data",
"ceph config set client.rgw rgw_keystone_implicit_tenants true",
"swift list",
"s3cmd ls",
"radosgw-admin bucket link --bucket=/ BUCKET --uid=' TENANT USD USER '",
"radosgw-admin bucket link --bucket=/data --uid='testUSDtenanted-user'",
"radosgw-admin bucket list --uid='testUSDtenanted-user' [ \"data\" ]",
"radosgw-admin bucket chown --bucket=' tenant / bucket name ' --uid=' tenant USD user '",
"radosgw-admin bucket chown --bucket='test/data' --uid='testUSDtenanted-user'",
"radosgw-admin bucket list --bucket=test/data",
"radosgw-admin bucket radoslist --bucket BUCKET_NAME",
"radosgw-admin bucket radoslist --bucket mybucket",
"head /usr/bin/rgw-orphan-list",
"mkdir orphans",
"cd orphans",
"rgw-orphan-list",
"Available pools: .rgw.root default.rgw.control default.rgw.meta default.rgw.log default.rgw.buckets.index default.rgw.buckets.data rbd default.rgw.buckets.non-ec ma.rgw.control ma.rgw.meta ma.rgw.log ma.rgw.buckets.index ma.rgw.buckets.data ma.rgw.buckets.non-ec Which pool do you want to search for orphans?",
"rgw-orphan-list -h rgw-orphan-list POOL_NAME / DIRECTORY",
"rgw-orphan-list default.rgw.buckets.data /orphans 2023-09-12 08:41:14 ceph-host01 Computing delta 2023-09-12 08:41:14 ceph-host01 Computing results 10 potential orphans found out of a possible 2412 (0%). <<<<<<< orphans detected The results can be found in './orphan-list-20230912124113.out'. Intermediate files are './rados-20230912124113.intermediate' and './radosgw-admin-20230912124113.intermediate'. *** *** WARNING: This is EXPERIMENTAL code and the results should be used *** only with CAUTION! *** Done at 2023-09-12 08:41:14.",
"ls -l -rw-r--r--. 1 root root 770 Sep 12 03:59 orphan-list-20230912075939.out -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.error -rw-r--r--. 1 root root 248508 Sep 12 03:59 rados-20230912075939.intermediate -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.issues -rw-r--r--. 1 root root 0 Sep 12 03:59 radosgw-admin-20230912075939.error -rw-r--r--. 1 root root 247738 Sep 12 03:59 radosgw-admin-20230912075939.intermediate",
"cat ./orphan-list-20230912124113.out a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.0 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.1 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.2 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.3 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.4 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.5 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.6 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.7 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.8 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.9",
"rados -p POOL_NAME rm OBJECT_NAME",
"rados -p default.rgw.buckets.data rm myobject",
"radosgw-admin bucket check --bucket= BUCKET_NAME",
"radosgw-admin bucket check --bucket=mybucket",
"radosgw-admin bucket check --fix --bucket= BUCKET_NAME",
"radosgw-admin bucket check --fix --bucket=mybucket",
"radosgw-admin topic list",
"radosgw-admin topic get --topic=topic1",
"radosgw-admin topic rm --topic=topic1",
"client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*', 's3:ObjectLifecycle:Expiration:*'] }]})",
"{ \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"String\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Enabled\"|\"Disabled\" }, \"Destination\": { \"Bucket\": \"BUCKET_NAME\" } } ] }",
"cat replication.json { \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Disabled\" }, \"Destination\": { \"Bucket\": \"testbucket\" } } ] }",
"aws --endpoint-url=RADOSGW_ENDPOINT_URL s3api put-bucket-replication --bucket BUCKET_NAME --replication-configuration file://REPLICATION_CONFIIRATION_FILE.json",
"aws --endpoint-url=http://host01:80 s3api put-bucket-replication --bucket testbucket --replication-configuration file://replication.json",
"radosgw-admin sync policy get --bucket BUCKET_NAME",
"radosgw-admin sync policy get --bucket testbucket { \"groups\": [ { \"id\": \"s3-bucket-replication:disabled\", \"data_flow\": {}, \"pipes\": [], \"status\": \"allowed\" }, { \"id\": \"s3-bucket-replication:enabled\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"testbucket\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": {}, \"dest\": {}, \"priority\": 1, \"mode\": \"user\", \"user\": \"s3cmd\" } } ], \"status\": \"enabled\" } ] }",
"aws s3api get-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL",
"aws s3api get-bucket-replication --bucket testbucket --endpoint-url=http://host01:80 { \"ReplicationConfiguration\": { \"Role\": \"\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"Destination\": { Bucket\": \"testbucket\" } } ] } }",
"aws s3api delete-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL",
"aws s3api delete-bucket-replication --bucket testbucket --endpoint-url=http://host01:80",
"radosgw-admin sync policy get --bucket=BUCKET_NAME",
"radosgw-admin sync policy get --bucket=testbucket",
"cat user_policy.json { \"Version\":\"2012-10-17\", \"Statement\": { \"Effect\":\"Deny\", \"Action\": [ \"s3:PutReplicationConfiguration\", \"s3:GetReplicationConfiguration\", \"s3:DeleteReplicationConfiguration\" ], \"Resource\": \"arn:aws:s3:::*\", } }",
"aws --endpoint-url=ENDPOINT_URL iam put-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --policy-document POLICY_DOCUMENT_PATH",
"aws --endpoint-url=http://host01:80 iam put-user-policy --user-name newuser1 --policy-name userpolicy --policy-document file://user_policy.json",
"aws --endpoint-url=ENDPOINT_URL iam get-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --region us",
"aws --endpoint-url=http://host01:80 iam get-user-policy --user-name newuser1 --policy-name userpolicy --region us",
"[user@client ~]USD vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / LIFECYCLE_CONFIGURATION_FILE .json",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-lifecycle --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-lifecycle --bucket testbucket",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket",
"[user@client ~]USD vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" }, { \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\" } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / LIFECYCLE_CONFIGURATION_FILE .json",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json",
"aws --endpointurl= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"[user@client ~]USD aws -endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\", \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\" }, { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"docs/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} }, \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"DocsExpiration\", \"rule\": { \"id\": \"DocsExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"30\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"docs/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } }, { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }",
"cephadm shell",
"radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" }, { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" } ]",
"radosgw-admin lc process --bucket= BUCKET_NAME",
"radosgw-admin lc process --bucket=testbucket1",
"radosgw-admin lc process",
"radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 17 Mar 2022 21:48:50 GMT\", \"status\" : \"COMPLETE\" } { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 17 Mar 2022 20:38:50 GMT\", \"status\" : \"COMPLETE\" } ]",
"cephadm shell",
"ceph config set client.rgw rgw_lifecycle_work_time %D:%D-%D:%D",
"ceph config set client.rgw rgw_lifecycle_work_time 06:00-08:00",
"ceph config get client.rgw rgw_lifecycle_work_time 06:00-08:00",
"ceph osd pool create POOL_NAME",
"ceph osd pool create test.hot.data",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class hot.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"hot.test\" ] } }",
"radosgw-admin zone placement add --rgw-zone default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL",
"radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class hot.test --data-pool test.hot.data { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"test_zone.rgw.buckets.index\", \"storage_classes\": { \"STANDARD\": { \"data_pool\": \"test.hot.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\", } }, \"data_extra_pool\": \"\", \"index_type\": 0 }",
"ceph osd pool application enable POOL_NAME rgw",
"ceph osd pool application enable test.hot.data rgw enabled application 'rgw' on pool 'test.hot.data'",
"aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080",
"aws --endpoint=http://10.0.0.80:8080 s3api put-object --bucket testbucket10 --key compliance-upload --body /root/test2.txt",
"ceph osd pool create POOL_NAME",
"ceph osd pool create test.cold.data",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class cold.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"cold.test\" ] } }",
"radosgw-admin zone placement add --rgw-zone default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL",
"radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class cold.test --data-pool test.cold.data",
"ceph osd pool application enable POOL_NAME rgw",
"ceph osd pool application enable test.cold.data rgw enabled application 'rgw' on pool 'test.cold.data'",
"radosgw-admin zonegroup get { \"id\": \"3019de59-ddde-4c5c-b532-7cdd29de09a1\", \"name\": \"default\", \"api_name\": \"default\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"zones\": [ { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"hot.test\", \"cold.test\", \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"\", \"sync_policy\": { \"groups\": [] } }",
"radosgw-admin zone get { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"domain_root\": \"default.rgw.meta:root\", \"control_pool\": \"default.rgw.control\", \"gc_pool\": \"default.rgw.log:gc\", \"lc_pool\": \"default.rgw.log:lc\", \"log_pool\": \"default.rgw.log\", \"intent_log_pool\": \"default.rgw.log:intent\", \"usage_log_pool\": \"default.rgw.log:usage\", \"roles_pool\": \"default.rgw.meta:roles\", \"reshard_pool\": \"default.rgw.log:reshard\", \"user_keys_pool\": \"default.rgw.meta:users.keys\", \"user_email_pool\": \"default.rgw.meta:users.email\", \"user_swift_pool\": \"default.rgw.meta:users.swift\", \"user_uid_pool\": \"default.rgw.meta:users.uid\", \"otp_pool\": \"default.rgw.otp\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"storage_classes\": { \"cold.test\": { \"data_pool\": \"test.cold.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\" }, \"STANDARD\": { \"data_pool\": \"default.rgw.buckets.data\" } }, \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0 } } ], \"realm_id\": \"\", \"notif_pool\": \"default.rgw.log:notif\" }",
"aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080",
"radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"STANDARD\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }",
"vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 5, \"StorageClass\": \"hot.test\" }, { \"Days\": 20, \"StorageClass\": \"cold.test\" } ], \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\" } ] }",
"aws s3api put-bucket-lifecycle-configuration --bucket testbucket10 --lifecycle-configuration file://lifecycle.json",
"aws s3api get-bucket-lifecycle-configuration --bucket testbucke10 { \"Rules\": [ { \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 20, \"StorageClass\": \"cold.test\" }, { \"Days\": 5, \"StorageClass\": \"hot.test\" } ] } ] }",
"radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"cold.test\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }",
"aws --endpoint=http:// RGW_PORT :8080 s3api create-bucket --bucket BUCKET_NAME --object-lock-enabled-for-bucket",
"aws --endpoint=http://rgw.ceph.com:8080 s3api create-bucket --bucket worm-bucket --object-lock-enabled-for-bucket",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object-lock-configuration --bucket BUCKET_NAME --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \" RETENTION_MODE \", \"Days\": NUMBER_OF_DAYS }}}'",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-lock-configuration --bucket worm-bucket --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \"COMPLIANCE\", \"Days\": 10 }}}'",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body TEST_FILE",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object --bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body test.dd { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body PATH",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object --bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body /etc/fstab { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-legal-hold --bucket worm-bucket --key compliance-upload --legal-hold Status=ON",
"aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket",
"aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket { \"Versions\": [ { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"Size\": 288, \"StorageClass\": \"STANDARD\", \"Key\": \"hosts\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"IsLatest\": true, \"LastModified\": \"2022-06-17T08:51:17.392000+00:00\", \"Owner\": { \"DisplayName\": \"Test User in Tenant test\", \"ID\": \"testUSDtest.user\" } } } ] }",
"aws --endpoint=http://rgw.ceph.com:8080 s3api get-object --bucket worm-bucket --key compliance-upload --version-id 'IGOU.vdIs3SPduZglrB-RBaK.sfXpcd' download.1 { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2022-06-17T08:51:17+00:00\", \"ContentLength\": 288, \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"ObjectLockMode\": \"COMPLIANCE\", \"ObjectLockRetainUntilDate\": \"2023-06-17T08:51:17+00:00\" }",
"radosgw-admin usage show --uid=johndoe --start-date=2022-06-01 --end-date=2022-07-01",
"radosgw-admin usage show --show-log-entries=false",
"radosgw-admin usage trim --start-date=2022-06-01 --end-date=2022-07-31 radosgw-admin usage trim --uid=johndoe radosgw-admin usage trim --uid=johndoe --end-date=2021-04-31",
"radosgw-admin metadata get bucket: BUCKET_NAME radosgw-admin metadata get bucket.instance: BUCKET : BUCKET_ID radosgw-admin metadata get user: USER radosgw-admin metadata set user: USER",
"radosgw-admin metadata list radosgw-admin metadata list bucket radosgw-admin metadata list bucket.instance radosgw-admin metadata list user",
".bucket.meta.prodtx:test%25star:default.84099.6 .bucket.meta.testcont:default.4126.1 .bucket.meta.prodtx:testcont:default.84099.4 prodtx/testcont prodtx/test%25star testcont",
"prodtxUSDprodt test2.buckets prodtxUSDprodt.buckets test2",
"radosgw-admin ratelimit set --ratelimit-scope=user --uid= USER_ID [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin ratelimit set --ratelimit-scope=user --uid=testing --max-read-ops=1024 --max-write-bytes=10240",
"radosgw-admin ratelimit get --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit get --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }",
"radosgw-admin ratelimit enable --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }",
"radosgw-admin ratelimit disable --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit disable --ratelimit-scope=user --uid=testing",
"radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket= BUCKET_NAME [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=mybucket --max-read-ops=1024 --max-write-bytes=10240",
"radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }",
"radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }",
"radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket=mybucket",
"radosgw-admin global ratelimit get",
"radosgw-admin global ratelimit get { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"user_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"anonymous_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false } }",
"radosgw-admin global ratelimit set --ratelimit-scope=bucket [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope bucket --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=bucket",
"radosgw-admin global ratelimit enable --ratelimit-scope bucket",
"radosgw-admin global ratelimit set --ratelimit-scope=user [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope=user --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=user",
"radosgw-admin global ratelimit enable --ratelimit-scope=user",
"radosgw-admin global ratelimit set --ratelimit-scope=anonymous [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope=anonymous --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=anonymous",
"radosgw-admin global ratelimit enable --ratelimit-scope=anonymous",
"radosgw-admin gc list",
"radosgw-admin gc list",
"ceph config set client.rgw rgw_gc_max_concurrent_io 20 ceph config set client.rgw rgw_gc_max_trim_chunk 64",
"ceph config set client.rgw rgw_lc_max_worker 7",
"ceph config set client.rgw rgw_lc_max_wp_worker 7",
"radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]",
"radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }",
"radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3",
"radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=CLOUDTIER --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]",
"radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= AWS_ENDPOINT_URL , access_key= AWS_ACCESS_KEY ,secret= AWS_SECRET_KEY , target_path=\" TARGET_BUCKET_ON_AWS \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME",
"radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class CLOUDTIER --tier-config=endpoint=http://10.0.210.010:8080, access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"http://10.0.210.010:8080\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'",
"s3cmd --configure Enter new values or accept defaults in brackets with Enter. Refer to user manual for detailed description of all options. Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region [US]: Use \"s3.amazonaws.com\" for S3 Endpoint and not modify it to the target Amazon S3. S3 Endpoint [s3.amazonaws.com]: 10.0.210.78:80 Use \"%(bucket)s.s3.amazonaws.com\" to the target Amazon S3. \"%(bucket)s\" and \"%(location)s\" vars can be used if the target S3 system supports dns based buckets. DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 10.0.210.78:80 Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3 Encryption password: Path to GPG program [/usr/bin/gpg]: When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer Use HTTPS protocol [Yes]: No On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly HTTP Proxy server name: New settings: Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region: US S3 Endpoint: 10.0.210.78:80 DNS-style bucket+hostname:port template for accessing a bucket: 10.0.210.78:80 Encryption password: Path to GPG program: /usr/bin/gpg Use HTTPS protocol: False HTTP Proxy server name: HTTP Proxy server port: 0 Test access with supplied credentials? [Y/n] Y Please wait, attempting to list all buckets Success. Your access key and secret key worked fine :-) Now verifying that encryption works Not configured. Never mind. Save settings? [y/N] y Configuration saved to '/root/.s3cfg'",
"s3cmd mb s3:// NAME_OF_THE_BUCKET_FOR_S3",
"s3cmd mb s3://awstestbucket Bucket 's3://awstestbucket/' created",
"s3cmd put FILE_NAME s3:// NAME_OF_THE_BUCKET_ON_S3",
"s3cmd put test.txt s3://awstestbucket upload: 'test.txt' -> 's3://awstestbucket/test.txt' [1 of 1] 21 of 21 100% in 1s 16.75 B/s done",
"<LifecycleConfiguration> <Rule> <ID> RULE_NAME </ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days> DAYS </Days> <StorageClass> STORAGE_CLASS_NAME </StorageClass> </Transition> </Rule> </LifecycleConfiguration>",
"cat lc_cloud.xml <LifecycleConfiguration> <Rule> <ID>Archive all objects</ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days>2</Days> <StorageClass>CLOUDTIER</StorageClass> </Transition> </Rule> </LifecycleConfiguration>",
"s3cmd setlifecycle FILE_NAME s3:// NAME_OF_THE_BUCKET_FOR_S3",
"s3cmd setlifecycle lc_config.xml s3://awstestbucket s3://awstestbucket/: Lifecycle Policy updated",
"cephadm shell",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'",
"radosgw-admin lc list [ { \"bucket\": \":awstestbucket:552a3adb-39e0-40f6-8c84-00590ed70097.54639.1\", \"started\": \"Mon, 26 Sep 2022 18:32:07 GMT\", \"status\": \"COMPLETE\" } ]",
"[root@client ~]USD radosgw-admin bucket list [ \"awstestbucket\" ]",
"[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2022-08-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }",
"s3cmd ls s3://awstestbucket 2022-08-25 09:57 0 s3://awstestbucket/test.txt",
"s3cmd info s3://awstestbucket/test.txt s3://awstestbucket/test.txt (object): File size: 0 Last mod: Mon, 03 Aug 2022 09:57:49 GMT MIME type: text/plain Storage: CLOUDTIER MD5 sum: 991d2528bb41bb839d1a9ed74b710794 SSE: none Policy: none CORS: none ACL: test-user: FULL_CONTROL x-amz-meta-s3cmd-attrs: atime:1664790668/ctime:1664790668/gid:0/gname:root/md5:991d2528bb41bb839d1a9ed74b710794/mode:33188/mtime:1664790668/uid:0/uname:root",
"[client@client01 ~]USD aws configure AWS Access Key ID [****************6VVP]: AWS Secret Access Key [****************pXqy]: Default region name [us-east-1]: Default output format [json]:",
"[client@client01 ~]USD aws s3 ls s3://dfqe-bucket-01/awstest PRE awstestbucket/",
"[client@client01 ~]USD aws s3 cp s3://dfqe-bucket-01/awstestbucket/test.txt . download: s3://dfqe-bucket-01/awstestbucket/test.txt to ./test.txt",
"radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]",
"radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }",
"aws s3 --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default mb s3:// BUCKET_NAME",
"[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default mb s3://transition",
"radosgw-admin bucket stats --bucket transition { \"bucket\": \"transition\", \"num_shards\": 11, \"tenant\": \"\", \"zonegroup\": \"b29b0e50-1301-4330-99fc-5cdcfc349acf\", \"placement_rule\": \"default-placement\", \"explicit_placement\": { \"data_pool\": \"\", \"data_extra_pool\": \"\", \"index_pool\": \"\" },",
"[root@host01 ~]USD oc project openshift-storage [root@host01 ~]USD oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.6 True False 4d1h Cluster version is 4.11.6 [root@host01 ~]USD oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 4d Ready 2023-06-27T15:23:01Z 4.11.0",
"noobaa namespacestore create azure-blob az --account-key=' ACCOUNT_KEY ' --account-name=' ACCOUNT_NAME' --target-blob-container='_AZURE_CONTAINER_NAME '",
"[root@host01 ~]USD noobaa namespacestore create azure-blob az --account-key='iq3+6hRtt9bQ46QfHKQ0nSm2aP+tyMzdn8dBSRW4XWrFhY+1nwfqEj4hk2q66nmD85E/o5OrrUqo+AStkKwm9w==' --account-name='transitionrgw' --target-blob-container='mcgnamespace'",
"[root@host01 ~]USD noobaa bucketclass create namespace-bucketclass single aznamespace-bucket-class --resource az -n openshift-storage",
"noobaa obc create OBC_NAME --bucketclass aznamespace-bucket-class -n openshift-storage",
"[root@host01 ~]USD noobaa obc create rgwobc --bucketclass aznamespace-bucket-class -n openshift-storage",
"radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3",
"radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=AZURE --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\" ], \"tier_targets\": [ { \"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]",
"radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= ENDPOINT_URL , access_key= ACCESS_KEY ,secret= SECRET_KEY , target_path=\" TARGET_BUCKET_ON \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME",
"radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class AZURE --tier-config=endpoint=\"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart client.rgw.objectgwhttps.host02.udyllp Scheduled to restart client.rgw.objectgwhttps.host02.udyllp on host 'host02",
"cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \" STORAGE_CLASS \" } ], \"ID\": \" TRANSITION_ID \" } ] }",
"[root@host01 ~]USD cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \"AZURE\" } ], \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\" } ] }",
"aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default put-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default put-bucket-lifecycle-configuration --lifecycle-configuration file://transition.json --bucket transition",
"aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default get-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default get-bucket-lifecycle-configuration --bucket transition { \"Rules\": [ { \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \"AZURE\" } ] } ] }",
"radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\": \"UNINITIAL\" } ]",
"cephadm shell",
"ceph orch daemon CEPH_OBJECT_GATEWAY_DAEMON_NAME",
"ceph orch daemon restart rgw.objectgwhttps.host02.udyllp ceph orch daemon restart rgw.objectgw.host02.afwvyq ceph orch daemon restart rgw.objectgw.host05.ucpsrr",
"for i in 1 2 3 4 5 do aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default cp /etc/hosts s3://transition/transitionUSDi done",
"aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 10:24:01 3847 transition1 2023-06-30 10:24:04 3847 transition2 2023-06-30 10:24:07 3847 transition3 2023-06-30 10:24:09 3847 transition4 2023-06-30 10:24:13 3847 transition5",
"rados ls -p default.rgw.buckets.data | grep transition d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition1 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition4 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition2 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition3 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition5",
"radosgw-admin lc process",
"radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.170017.5\", \"started\": \"Mon, 30 Jun 2023-06-30 16:52:56 GMT\", \"status\": \"COMPLETE\" } ]",
"[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2023-06-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }",
"[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 17:52:56 0 transition1 2023-06-30 17:51:59 0 transition2 2023-06-30 17:51:59 0 transition3 2023-06-30 17:51:58 0 transition4 2023-06-30 17:51:59 0 transition5",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default head-object --key transition1 --bucket transition { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2023-06-31T16:52:56+00:00\", \"ContentLength\": 0, \"ETag\": \"\\\"46ecb42fd0def0e42f85922d62d06766\\\"\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"StorageClass\": \"CLOUDTIER\" }"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/administration |
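Read together, the lifecycle commands above form one short workflow: define a rule file, attach it to a bucket over the S3 API, then confirm and drive it from the gateway side. The following sketch strings those same commands into a single pass; it reuses the host01 endpoint, testbucket name, and lifecycle.json rule file from the examples above, all of which are placeholders to replace with your own environment.

    # assumes lifecycle.json already contains the expiration rule shown earlier
    aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json
    # confirm the rule from the client side
    aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket
    # confirm it from the gateway side, then run the lifecycle pass and check its status
    radosgw-admin lc get --bucket=testbucket
    radosgw-admin lc process --bucket=testbucket
    radosgw-admin lc list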
function::ip_ntop | function::ip_ntop Name function::ip_ntop - returns a string representation from an integer IP number Synopsis Arguments addr the ip represented as an integer | [
"function ip_ntop:string(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ip-ntop |
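A quick way to exercise ip_ntop is a throwaway stap one-liner. This is an illustrative sketch only: the literal address value, and the assumption that it is laid out in network byte order as the tapset expects, are added here rather than taken from the reference; in real scripts you would normally pass an address variable supplied by a probe context instead of a constant.

    # prints the dotted-quad form of the packed address; on a little-endian x86 host
    # the constant below should come out as 127.0.0.1
    stap -e 'probe begin { printf("%s\n", ip_ntop(0x0100007f)); exit() }'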
Chapter 1. Preinstalled dynamic plugins | Chapter 1. Preinstalled dynamic plugins Red Hat Developer Hub is preinstalled with a selection of dynamic plugins. The following preinstalled dynamic plugins are enabled by default: @backstage-community/plugin-analytics-provider-segment @backstage-community/plugin-scaffolder-backend-module-quay @backstage-community/plugin-scaffolder-backend-module-regex @backstage/plugin-techdocs-backend @backstage/plugin-techdocs The dynamic plugins that require custom configuration are disabled by default. Upon application startup, for each plugin that is disabled by default, the install-dynamic-plugins init container within the Developer Hub pod log displays a message similar to the following: ======= Skipping disabled dynamic plugin ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic To enable this plugin, add a package with the same name to the Helm chart and change the value in the disabled field to 'false'. For example: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic disabled: false Note The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file, however, you can use a pluginConfig entry to override the default configuration. 1.1. Red Hat supported plugins Red Hat supports the following 18 plugins: Name Plugin Version Path and required variables Analytics Provider Segment @backstage-community/plugin-analytics-provider-segment 1.10.4 ./dynamic-plugins/dist/backstage-community-plugin-analytics-provider-segment SEGMENT_WRITE_KEY SEGMENT_TEST_MODE Argo CD @roadiehq/backstage-plugin-argo-cd 2.8.6 ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd Argo CD @roadiehq/backstage-plugin-argo-cd-backend 3.2.3 ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic ARGOCD_USERNAME ARGOCD_PASSWORD ARGOCD_INSTANCE1_URL ARGOCD_AUTH_TOKEN ARGOCD_INSTANCE2_URL ARGOCD_AUTH_TOKEN2 GitHub @backstage/plugin-catalog-backend-module-github 0.7.9 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic GITHUB_ORG GitHub Org @backstage/plugin-catalog-backend-module-github-org 0.3.6 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-org-dynamic GITHUB_URL GITHUB_ORG Keycloak @backstage-community/plugin-catalog-backend-module-keycloak 3.2.4 ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-keycloak-dynamic KEYCLOAK_BASE_URL KEYCLOAK_LOGIN_REALM KEYCLOAK_REALM KEYCLOAK_CLIENT_ID KEYCLOAK_CLIENT_SECRET Kubernetes @backstage/plugin-kubernetes-backend 0.18.7 ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic K8S_CLUSTER_NAME K8S_CLUSTER_URL K8S_CLUSTER_TOKEN OCM @backstage-community/plugin-ocm 5.2.6 ./dynamic-plugins/dist/backstage-community-plugin-ocm OCM @backstage-community/plugin-ocm-backend 5.2.5 ./dynamic-plugins/dist/backstage-community-plugin-ocm-backend-dynamic OCM_HUB_NAME OCM_HUB_URL OCM_SA_TOKEN Quay @backstage-community/plugin-quay 1.14.6 ./dynamic-plugins/dist/backstage-community-plugin-quay Quay @backstage-community/plugin-scaffolder-backend-module-quay 2.2.3 ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-quay-dynamic RBAC @backstage-community/plugin-rbac 1.33.6 ./dynamic-plugins/dist/backstage-community-plugin-rbac Regex @backstage-community/plugin-scaffolder-backend-module-regex 2.2.5 ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-regex-dynamic 
Signals @backstage/plugin-signals-backend 0.2.4 ./dynamic-plugins/dist/backstage-plugin-signals-backend-dynamic TechDocs @backstage/plugin-techdocs 1.11.2 ./dynamic-plugins/dist/backstage-plugin-techdocs TechDocs @backstage/plugin-techdocs-backend 1.11.5 ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic Tekton @backstage-community/plugin-tekton 3.16.5 ./dynamic-plugins/dist/backstage-community-plugin-tekton Topology @backstage-community/plugin-topology 1.29.10 ./dynamic-plugins/dist/backstage-community-plugin-topology Note For more information about configuring KeyCloak, see Configuring dynamic plugins . For more information about configuring TechDocs, see Configuring TechDocs . 1.2. Technology Preview plugins Important Red Hat Developer Hub includes a select number of Technology Preview plugins, available for customers to configure and enable. These plugins are provided with support scoped per Technical Preview terms, might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . 1.2.1. Red Hat Technology Preview plugins Red Hat provides Technology Preview support for the following 54 plugins: Name Plugin Version Path and required variables 3scale @backstage-community/plugin-3scale-backend 3.0.4 ./dynamic-plugins/dist/backstage-community-plugin-3scale-backend-dynamic THREESCALE_BASE_URL THREESCALE_ACCESS_TOKEN Ansible Automation Platform (AAP) @janus-idp/backstage-plugin-aap-backend 2.2.1 ./dynamic-plugins/dist/janus-idp-backstage-plugin-aap-backend-dynamic AAP_BASE_URL AAP_AUTH_TOKEN ACR @backstage-community/plugin-acr 1.8.8 ./dynamic-plugins/dist/backstage-community-plugin-acr Argo CD @roadiehq/scaffolder-backend-argocd 1.2.1 ./dynamic-plugins/dist/roadiehq-scaffolder-backend-argocd-dynamic ARGOCD_USERNAME ARGOCD_PASSWORD ARGOCD_INSTANCE1_URL ARGOCD_AUTH_TOKEN ARGOCD_INSTANCE2_URL ARGOCD_AUTH_TOKEN2 Argo CD (Red Hat) @backstage-community/plugin-redhat-argocd 1.10.5 ./dynamic-plugins/dist/backstage-community-plugin-redhat-argocd Azure @backstage/plugin-scaffolder-backend-module-azure 0.2.5 ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-azure-dynamic Azure Devops @backstage-community/plugin-azure-devops 0.6.3 ./dynamic-plugins/dist/backstage-community-plugin-azure-devops Azure Devops @backstage-community/plugin-azure-devops-backend 0.8.0 ./dynamic-plugins/dist/backstage-community-plugin-azure-devops-backend-dynamic AZURE_TOKEN AZURE_ORG Azure Repositories @parfuemerie-douglas/scaffolder-backend-module-azure-repositories 0.3.0 ./dynamic-plugins/dist/parfuemerie-douglas-scaffolder-backend-module-azure-repositories-dynamic Bitbucket Cloud @backstage/plugin-catalog-backend-module-bitbucket-cloud 0.4.4 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-bitbucket-cloud-dynamic BITBUCKET_WORKSPACE Bitbucket Cloud @backstage/plugin-scaffolder-backend-module-bitbucket-cloud 0.2.5 ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-bitbucket-cloud-dynamic Bitbucket Server @backstage/plugin-catalog-backend-module-bitbucket-server 0.2.4 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-bitbucket-server-dynamic BITBUCKET_HOST Bitbucket Server @backstage/plugin-scaffolder-backend-module-bitbucket-server 0.2.5 
./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-bitbucket-server-dynamic Bulk Import @red-hat-developer-hub/backstage-plugin-bulk-import 1.10.7 ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import Bulk Import @red-hat-developer-hub/backstage-plugin-bulk-import-backend 5.2.1 ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic Datadog @roadiehq/backstage-plugin-datadog 2.4.2 ./dynamic-plugins/dist/roadiehq-backstage-plugin-datadog Dynatrace @backstage-community/plugin-dynatrace 10.0.8 ./dynamic-plugins/dist/backstage-community-plugin-dynatrace Gerrit @backstage/plugin-scaffolder-backend-module-gerrit 0.2.5 ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-gerrit-dynamic GitHub @backstage/plugin-scaffolder-backend-module-github 0.5.5 ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-github-dynamic GitHub Actions @backstage-community/plugin-github-actions 0.6.27 ./dynamic-plugins/dist/backstage-community-plugin-github-actions GitHub Insights @roadiehq/backstage-plugin-github-insights 2.5.1 ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-insights GitHub Issues @backstage-community/plugin-github-issues 0.4.8 ./dynamic-plugins/dist/backstage-community-plugin-github-issues GitHub Pull Requests @roadiehq/backstage-plugin-github-pull-requests 2.6.0 ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-pull-requests GitLab @immobiliarelabs/backstage-plugin-gitlab 6.6.1 ./dynamic-plugins/dist/immobiliarelabs-backstage-plugin-gitlab GitLab @backstage/plugin-catalog-backend-module-gitlab 0.4.4 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-gitlab-dynamic GitLab @immobiliarelabs/backstage-plugin-gitlab-backend 6.7.0 ./dynamic-plugins/dist/immobiliarelabs-backstage-plugin-gitlab-backend-dynamic GITLAB_HOST GITLAB_TOKEN GitLab @backstage/plugin-scaffolder-backend-module-gitlab 0.6.2 ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-gitlab-dynamic GitLab Org @backstage/plugin-catalog-backend-module-gitlab-org 0.2.5 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-gitlab-org-dynamic Http Request @roadiehq/scaffolder-backend-module-http-request 5.0.1 ./dynamic-plugins/dist/roadiehq-scaffolder-backend-module-http-request-dynamic Jenkins @backstage-community/plugin-jenkins 0.12.2 ./dynamic-plugins/dist/backstage-community-plugin-jenkins Jenkins @backstage-community/plugin-jenkins-backend 0.6.3 ./dynamic-plugins/dist/backstage-community-plugin-jenkins-backend-dynamic JENKINS_URL JENKINS_USERNAME JENKINS_TOKEN JFrog Artifactory @backstage-community/plugin-jfrog-artifactory 1.10.6 ./dynamic-plugins/dist/backstage-community-plugin-jfrog-artifactory Jira @roadiehq/backstage-plugin-jira 2.8.2 ./dynamic-plugins/dist/roadiehq-backstage-plugin-jira Kubernetes @backstage/plugin-kubernetes 0.11.16 ./dynamic-plugins/dist/backstage-plugin-kubernetes Ldap @backstage/plugin-catalog-backend-module-ldap 0.9.1 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-ldap-dynamic Lighthouse @backstage-community/plugin-lighthouse 0.4.24 ./dynamic-plugins/dist/backstage-community-plugin-lighthouse MS Graph @backstage/plugin-catalog-backend-module-msgraph 0.6.6 ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-msgraph-dynamic Nexus Repository Manager @backstage-community/plugin-nexus-repository-manager 1.10.9 ./dynamic-plugins/dist/backstage-community-plugin-nexus-repository-manager Notifications @backstage/plugin-notifications 0.3.2 
./dynamic-plugins/dist/backstage-plugin-notifications Notifications @backstage/plugin-notifications-backend 0.4.3 ./dynamic-plugins/dist/backstage-plugin-notifications-backend-dynamic Notifications Module Email @backstage/plugin-notifications-backend-module-email 0.3.5 ./dynamic-plugins/dist/backstage-plugin-notifications-backend-module-email-dynamic EMAIL_HOSTNAME EMAIL_USERNAME EMAIL_PASSWORD EMAIL_SENDER PagerDuty @pagerduty/backstage-plugin 0.15.2 ./dynamic-plugins/dist/pagerduty-backstage-plugin PagerDuty @pagerduty/backstage-plugin-backend 0.9.2 ./dynamic-plugins/dist/pagerduty-backstage-plugin-backend-dynamic PAGERDUTY_API_BASE PAGERDUTY_CLIENT_ID PAGERDUTY_CLIENT_SECRET PAGERDUTY_SUBDOMAIN Pingidentity @backstage-community/plugin-catalog-backend-module-pingidentity 0.1.6 ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-pingidentity-dynamic Scaffolder Relation Processor @backstage-community/plugin-catalog-backend-module-scaffolder-relation-processor 2.0.2 ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-scaffolder-relation-processor-dynamic Security Insights @roadiehq/backstage-plugin-security-insights 2.4.2 ./dynamic-plugins/dist/roadiehq-backstage-plugin-security-insights ServiceNow @backstage-community/plugin-scaffolder-backend-module-servicenow 2.2.5 ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-servicenow-dynamic SERVICENOW_BASE_URL SERVICENOW_USERNAME SERVICENOW_PASSWORD Signals @backstage/plugin-signals 0.0.15 ./dynamic-plugins/dist/backstage-plugin-signals SonarQube @backstage-community/plugin-sonarqube 0.8.9 ./dynamic-plugins/dist/backstage-community-plugin-sonarqube SonarQube @backstage-community/plugin-sonarqube-backend 0.3.1 ./dynamic-plugins/dist/backstage-community-plugin-sonarqube-backend-dynamic SONARQUBE_URL SONARQUBE_TOKEN SonarQube @backstage-community/plugin-scaffolder-backend-module-sonarqube 2.2.4 ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-sonarqube-dynamic Tech Radar @backstage-community/plugin-tech-radar 1.0.1 ./dynamic-plugins/dist/backstage-community-plugin-tech-radar Tech Radar @backstage-community/plugin-tech-radar-backend 1.0.0 ./dynamic-plugins/dist/backstage-community-plugin-tech-radar-backend-dynamic TECH_RADAR_DATA_URL Utils @roadiehq/scaffolder-backend-module-utils 3.0.1 ./dynamic-plugins/dist/roadiehq-scaffolder-backend-module-utils-dynamic Note A new Technology Preview plugin for Red Hat Ansible Automation Platform (RHAAP) is available, which replaces this older one. See Other installable plugins in the Configuring plugins in Red Hat Developer Hub guide for further details. See Dynamic plugins support matrix . | [
"======= Skipping disabled dynamic plugin ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic disabled: false"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/dynamic_plugins_reference/con-preinstalled-dynamic-plugins |
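The Note above says a pluginConfig entry can override a plugin's default configuration from dynamic-plugins.default.yaml. The Helm values sketch below illustrates the shape of such an override; the package path is the TechDocs backend path from the table above, but the keys under pluginConfig are assumed example settings rather than the plugin's full supported schema, so check the plugin's own documentation before relying on them.

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
            disabled: false
            pluginConfig:            # merged over the defaults for this plugin
              techdocs:
                builder: local       # assumed example keys, not an exhaustive schema
                publisher:
                  type: local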
Autoscaling for Instances | Autoscaling for Instances Red Hat OpenStack Platform 16.2 Configure Autoscaling in Red Hat OpenStack Platform OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/autoscaling_for_instances/index |
Chapter 1. Red Hat build of Keycloak 22.0 | Chapter 1. Red Hat build of Keycloak 22.0 1.1. Overview Red Hat is proud to introduce a new era of identity and access management named Red Hat build of Keycloak. The release of Red Hat build of Keycloak 22.0 replaces any plans for releasing Red Hat Single Sign-On 8.0 or a higher release. Red Hat build of Keycloak is based on the Keycloak project, which enables you to secure your web applications by providing Web SSO capabilities based on popular standards such as OpenID Connect, OAuth 2.0, and SAML 2.0. The Red Hat build of Keycloak server acts as an OpenID Connect or SAML-based identity provider (IdP), allowing your enterprise user directory or third-party IdP to secure your applications by using standards-based security tokens. While preserving the power and functionality of Red Hat Single Sign-on, Red Hat build of Keycloak is faster, more flexible, and efficient. Red Hat build of Keycloak is an application built with Quarkus, which provides developers with flexibility and modularity. Quarkus provides a framework that is optimized for a container-first approach and provides many features for developing cloud-native applications. 1.2. Updates for 22.0.13 This release contains several fixed issues and fixes for the following CVEs: CVE-2024-8698 Improper verification of SAML responses leading to privilege escalation in Red Hat build of Keycloak CVE-2024-8883 Vulnerable redirect URI validation results in Open Redirect 1.3. Updates for 22.0.12 This release contains several fixed issues and fixes for the following CVEs: CVE-2024-4629 An attacker could potentially bypass brute force protection by launching multiple login attempts in parallel. CVE-2024-7341 Session fixation in elytron SAML adapters for better protection against a possible Cookie hijacking. CVE-2024-5967 Leak of configured LDAP bind credentials through the Admin Console. There is the possibility to change the hostURL to the attacker's machine with the appropriate permission. 1.4. Updates for 22.0.11 This release contains several fixed issues including a fix for CVE-2024-4540 . This fix is for a security issue affecting some OIDC confidential clients using PAR (Pushed authorization request). In case you use OIDC confidential clients together with PAR and you use client authentication based on client_id and client_secret sent as parameters in the HTTP request body (method client_secret_post specified in the OIDC specification), it is highly encouraged to rotate the client secrets of your clients after upgrading to this version. 1.5. Updates for 22.0.10 The release includes several fixed issues. For details, see Fixed Issues . 1.6. Updates for 22.0.9 This release includes support for installation on Windows systems. For Windows installations, you use the kc.bat command instead of the kc.sh command. This release also include Fixed Issues . 1.7. Updates for 22.0.8 This release includes a fix for CVE-2023-6927 . The release also includes Fixed Issues . 1.8. Updates for 22.0.7 The release includes several fixed issues. For details, see Fixed Issues . 1.9. New features and enhancements The following release notes apply to Red Hat build of Keycloak 22.0. 1.9.1. New distribution based on Red Hat build of Quarkus Red Hat build of Keycloak 22.0 uses a streamlined distribution model based on Red Hat build of Quarkus instead of Red Hat JBoss Enterprise Application Platform. The new distribution simplifies configuration and operation, resulting in these changes compared to Red Hat Single Sign-On. 
Simpler configuration procedures with interactive command-line help. Instead of editing opaque and complex XML files, you choose from multiple configuration sources, such as a file, the CLI, environment variables, or an encrypted KeyStore. Faster startup time and low memory footprint. The server distribution is smaller, the container image contains fewer dependencies and Red Hat build of Keycloak performs multiple optimizations, which lead to better runtime performance. JDBC drivers for PostgreSQL, MariaDB, SQL Server, and MySQL included in the distribution. Faster feature updates and fixes to issues. The Red Hat build of Keycloak lifecycle is closely aligned with Keycloak, which means that the codebase is closer to upstream and upgrades with innovations can be delivered faster. Support for built-in metrics. Greater security for the Container Image by making the following changes: The image is based on UBI9 rather than UBI8. Uses of -minimal are replaced by-micro. 1.9.2. New Operator Red Hat build of Keycloak 22 introduces a brand new OpenShift Operator with reimagined Custom Resources (CRs) to make full use of the new Red Hat build of Quarkus based distribution in modern cloud-native environments. The resulting changes extend the Operator's capabilities and remove some of the most prominent limitations compared to Red Hat Single Sign-On Operator. Full Red Hat build of Keycloak server configuration support through Keycloak CR Close alignment of configuration UX with bare metal installations to simplify Operator adoption Support for all databases that the Red Hat build of Keycloak server supports Realm Import CR supports capturing full Realm representation in comparison to a few selected fields in Red Hat Single Sign-On Operator's Realm CR 1.9.3. Admin Console v2 The Admin Console v2 is redesigned to be easier to use and more accessible. The v2 console provides the same capabilities, such as creating client applications, managing users, and monitoring sessions, but now these actions are much easier to perform. 1.9.3.1. Reorganized pages The Admin Console v1 had many pages filled with long lists of controls. You could easily miss the advanced features at the bottom. In the v2 console, that type of page is revised to group the general controls together and the advanced functionality has moved to its own tab. Controls are organized into advanced and general groups 1.9.3.2. Tooltips are easier to use In the v1 console, when you hover over a field, the tooltips block fields you need to set. In the v2 console, you click a question mark (?) to display tooltips. Even expert users find they cannot recall the meaning of every control, so the tooltips are important. Improved tooltips do not hide field names 1.9.3.3. Quick access to related documentation If you need more help, you can click Learn More to see the related documentation topic. You do not need to hunt for the right guide and then search that guide for the right topic. Just click Learn More . Learn More buttons display related documentation 1.9.3.4. Accessibility enhancements The v2 console has major accessibility improvements to provide a better user experience for users with visual impairments and users who use screen readers. For example: For users with low vision or color vision deficiencies, text elements now meet the WCAG 2 AA contrast ratio thresholds, providing clear contrast to background colors. For users who rely on screen readers, form elements include labels and all input fields have accessible names. 
Also, images now have alternative text. For interactive controls, each focusable element now has an active and unique ID, eliminating confusion and aiding navigation. The new design features many improvements in flow and organization while retaining all functionality. However, one change exists for bearer-only clients, which are clients that use no OAuth flows. This access type is no longer a choice when you create a client, but the bearer-only switch still exists on the server side. See the Server Administration Guide for more details. 1.9.4. FIPS version 140-2 support Support for deploying Red Hat build of Keycloak 22.0 into a FIPS 140-2 enabled environment is available. For the details, see FIPS 140-2 . 1.9.5. OpenJDK 17 support Red Hat build of Keycloak supports OpenJDK 17 for both the server and adapters. The Red Hat build of Keycloak server is supported only on OpenJDK 17. Adapters are supported only on OpenJDK 11 and 17. 1.9.6. Adapter support The following Java Client Adapters are no longer released starting with Red Hat build of Keycloak 22: JBoss EAP 6 JBoss EAP 7 Spring Boot JBoss Fuse 6 JBoss Fuse 7 In contrast to the initial release of OpenID Connect, this protocol is now widely supported across the Java Ecosystem and much better interoperability and support is achieved by using the capabilities available from the technology stack, such as your application server or framework. These capabilities have now reached their end-of-life and are available only from Red Hat Single Sign-On 7.6. Therefore, before the end of the long-term support, consider alternative capabilities for your applications. Whenever you find issues integrating Red Hat build of Keycloak with Red Hat Single Sign-On Client Adapters, you now have compatibility mode settings from the Client Settings in the administration console cases. Therefore, you can disable some aspects of the Red Hat build of Keycloak server to preserve compatibility with older client adapters. More details are available in the tool tips of individual settings. For more details, see the Migration Guide . 1.9.7. Other improvements Red Hat build of Keycloak 22.0 includes the following additional improvements over Red Hat Single Sign-On 7.6. 1.9.7.1. SAML backchannel logout Red Hat build of Keycloak 22.0 includes SAML SOAP Backchannel single-logout, which provides a real backchannel logout capability to a SAML client registered in Red Hat build of Keycloak. This feature adds the capability to receive logout requests sent by SAML clients over SOAP binding. 1.9.7.2. OIDC logout Red Hat Single Sign-On 7.6 included support for OIDC logout. Red Hat build of Keycloak 22.0 contains these improvements to OIDC logout: Support for the client_id parameter, which is based on the OIDC RP-Initiated Logout 1.0 specification. This capability is useful to detect what client should be used for Post Logout Redirect URI verification in case that id_token_hint parameter cannot be used. The logout confirmation screen still needs to be displayed to the user when only the client_id parameter is used without the id_token_hint parameter so clients are encouraged to use the id_token_hint parameter if they do not want the logout confirmation screen to be displayed to the user. The Valid Post Logout Redirect URIs configuration option is added to the OIDC client and is aligned with the OIDC specification. You can use a different set of redirect URIs for redirection after login and logout. 
The value + used for Valid Post Logout Redirect URIs means that the logout uses the same set of redirect URIs as specified by the option of Valid Redirect URIs . This change also matches the default behavior when migrating from a previous version due to backwards compatibility. For more details, see OIDC logout in the Server Administration Guide. 1.9.7.3. Search groups by attribute You can now search groups by attribute by using the Admin REST API in a similar way to a client search by attributes. 1.9.7.4. Support for count users based on custom attributes The User API now supports querying the number of users based on custom attributes. The /{realm}/users/count endpoint contains a new q parameter. It expects the following format: 1.9.7.5. View group membership in the Account Console You can now allow users to view their group memberships in the Account Console. A user must have the account client's view-groups role for the groups to show up in that console. 1.9.7.6. Essential claim configuration in OpenID Connect identity providers OpenID Connect identity providers support a new configuration to specify that the ID tokens issued by the identity provider must have a specific claim. Otherwise, the user cannot authenticate through this broker. The option is disabled by default; when it is enabled, you can specify the name of the JWT token claim to filter and the value to match (supports regular expression format). 1.9.7.7. Support for JWE encrypted ID Tokens and UserInfo responses in OpenID Connect identity providers The OpenID Connect identity providers now support JSON Web Encryption (JWE) for the ID Token and the UserInfo response. The providers use the realm keys defined for the selected encryption algorithm to perform the decryption. 1.9.7.8. Hardcoded group mapper The new hardcoded group mapper allows adding a specific group to users brokered from an Identity Provider. 1.9.7.9. User session note mapper The new user session note mapper allows mapping a claim to the user session notes. 1.9.7.10. Improvements in LDAP and Kerberos integration Red Hat build of Keycloak supports multiple LDAP providers in a realm, which support Kerberos integration with the same Kerberos realm. When an LDAP provider is unable to find the user who was authenticated through Kerberos/SPNEGO, Red Hat build of Keycloak falls back to the next LDAP provider. Red Hat build of Keycloak also has better support for the case when a single LDAP provider supports multiple Kerberos realms, which are in trust with each other. 1.10. Technology preview and developer preview features Red Hat build of Keycloak includes several technology preview and developer preview features. You are strongly cautioned against using these features in a production environment. They are disabled by default and may be changed or removed in a future release. For more detail on Technology and Developer Preview features, see Developer and Technology Previews . The following Technology Preview features exist. They are described in the Server Administration Guide . 
Client secret rotation, which increases security by alleviating problems such as secret leakage Recovery codes, an alternate method of two-factor authentication User profile configuration, which uses a declarative style and supports progressive profiling Scripts, the option to use JavaScript to write custom authenticators Update email flow, which supports users when changing email addresses The following Developer Preview features exist. Developer Preview features are not documented. Token exchange, a process of using a set of credentials or token to obtain a completely different token; this feature was previously a technology preview feature. Fine-grained admin permissions, which assign restricted access policies for managing a realm; this feature was previously a technology preview feature. Map Storage, an alternative way to store realm information in other databases and stores. 1.11. Removed and deprecated features These features were removed: JBoss EAP 6 and 7 OpenID Connect adapters Spring Boot OpenID Connect adapter Java Servlet Filter OpenID Connect adapter JBoss EAP 6 and 7 SAML adapters Account Console v1 Admin Console v1 Technology preview for replacing OpenShift 3 internal IdP with Red Hat build of Keycloak Client and User CRs for the Operator (temporarily removed). Legacy cross-site replication (formerly a Technology Preview feature) Deprecated methods that apply to data providers, user session provider. See the corresponding replacements, which are documented in Javadoc. This feature is deprecated: Loading the Red Hat build of Keycloak JavaScript adapter directly from the Red Hat build of Keycloak server. 1.12. Fixed issues Each release includes fixed issues: Red Hat build of Keycloak 22.0.13 Fixed Issues . Red Hat build of Keycloak 22.0.12 Fixed Issues . Red Hat build of Keycloak 22.0.11 Fixed Issues . Red Hat build of Keycloak 22.0.10 Fixed Issues . Red Hat build of Keycloak 22.0.9 Fixed Issues . Red Hat build of Keycloak 22.0.8 Fixed Issues . Red Hat build of Keycloak 22.0.7 Fixed Issues . 1.13. Known issues The version includes the following known issue: RHBK-721 - Instructions for adding custom attributes to the Account Console do not work 1.14. Supported configurations For the supported configurations for Red Hat build of Keycloak 22.0, see Supported configurations . 1.15. Component details For the list of supported component versions for Red Hat build of Keycloak 22.0, see Component details . | [
"q=<name>:<value> <attribute-name>:<value>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/release_notes/red_hat_build_of_keycloak_22_0 |
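As an illustration of the count-users-by-attribute improvement described in section 1.9.7.4 above, the following is a minimal sketch of calling the endpoint with the q parameter from plain Java. It is not part of the original release notes: the base URL, realm name, attribute names, and the way the admin access token is obtained are all hypothetical placeholders, and the sketch assumes the standard Admin REST base path of /admin/realms/{realm}.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class CountUsersByAttribute {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; adjust them for your installation.
        String baseUrl = "https://keycloak.example.com";
        String realm = "myrealm";
        String accessToken = System.getenv("KEYCLOAK_ADMIN_TOKEN"); // obtained elsewhere

        // q expects space-separated <attribute>:<value> pairs, as described above.
        String q = URLEncoder.encode("department:engineering country:DE", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/admin/realms/" + realm + "/users/count?q=" + q))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        // The endpoint returns the number of users whose attributes match all given pairs.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Matching users: " + response.body());
    }
}
```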
Chapter 1. Introduction to VDO on LVM | Chapter 1. Introduction to VDO on LVM The Virtual Data Optimizer (VDO) feature provides inline block-level deduplication, compression, and thin provisioning for storage. You can manage VDO as a type of Logical Volume Manager (LVM) logical volume (LV), similar to LVM thin-provisioned volumes. VDO volumes on LVM (LVM-VDO) contain the following components: VDO pool LV This is the backing physical device that stores, deduplicates, and compresses data for the VDO LV. The VDO pool LV sets the physical size of the VDO volume, which is the amount of data that VDO can store on the disk. Currently, each VDO pool LV can hold only one VDO LV. As a result, VDO deduplicates and compresses each VDO LV separately. Duplicate data that is stored on separate LVs does not benefit from data optimization of the same VDO volume. VDO LV This is the virtual, provisioned device on top of the VDO pool LV. The VDO LV sets the provisioned, logical size of the VDO volume, which is the amount of data that applications can write to the volume before deduplication and compression occurs. kvdo A kernel module that loads into the Linux Device Mapper layer and provides a deduplicated, compressed, and thin-provisioned block storage volume. The kvdo module exposes a block device that the VDO pool LV uses to create a VDO LV. The VDO LV is then used by the system. When kvdo receives a request to read a logical block of data from a VDO volume, it maps the requested logical block to the underlying physical block and then reads and returns the requested data. When kvdo receives a request to write a block of data to a VDO volume, it first checks whether the request is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions is met, kvdo updates its block map and acknowledges the request. Otherwise, VDO processes and optimizes the data. The kvdo module utilizes the Universal Deduplication Service (UDS) index on the volume internally and analyzes data as it is received, checking for duplicates. For each new piece of data, UDS determines if that piece is identical to any previously stored piece of data. If the index finds a match, the storage system verifies the accuracy of that match and then updates internal references to avoid storing the same information more than once. If you are already familiar with the structure of an LVM thin-provisioned implementation, see the following table to understand how the different aspects of VDO are presented to the system. Table 1.1. A comparison of components in VDO on LVM and LVM thin provisioning Physical device Provisioned device VDO on LVM VDO pool LV VDO LV LVM thin provisioning Thin pool Thin volume Because VDO is thin-provisioned, the file system and applications only see the logical space in use and not the actual available physical space. Use scripting to monitor the available physical space and generate an alert if use exceeds a threshold. For information about monitoring the available VDO space, see the Monitoring VDO section. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deduplicating_and_compressing_logical_volumes_on_rhel/introduction-to-vdo-on-lvm_deduplicating-and-compressing-logical-volumes-on-rhel
Chapter 1. Configure data sources in Red Hat build of Quarkus | Chapter 1. Configure data sources in Red Hat build of Quarkus Use a unified configuration model to define data sources for Java Database Connectivity (JDBC) and Reactive drivers. Applications use datasources to access relational databases. Quarkus provides a unified configuration model to define datasources for Java Database Connectivity (JDBC) and Reactive database drivers. Quarkus uses Agroal and Vert.x to provide high-performance, scalable datasource connection pooling for JDBC and reactive drivers. The quarkus-jdbc-* and quarkus-reactive-*-client extensions provide build time optimizations and integrate configured datasources with Quarkus features like security, health checks, and metrics. For more information about consuming and using a reactive datasource, see the Quarkus Reactive SQL clients guide. Additionally, refer to the Quarkus Hibernate ORM guide for information on consuming and using a JDBC datasource. 1.1. Get started with configuring datasources in Quarkus For users familiar with the fundamentals, this section provides an overview and code samples to set up datasources quickly. For more advanced configuration with examples, see References . 1.1.1. Zero-config setup in development mode Quarkus simplifies database configuration by offering the Dev Services feature, enabling zero-config database setup for testing or running in development (dev) mode. In dev mode, the suggested approach is to use DevServices and let Quarkus handle the database for you, whereas for production mode, you provide explicit database configuration details pointing to a database managed outside of Quarkus. To use Dev Services, add the appropriate driver extension, such as jdbc-postgresql , for your desired database type to the pom.xml file. In dev mode, if you do not provide any explicit database connection details, Quarkus automatically handles the database setup and provides the wiring between the application and the database. If you provide user credentials, the underlying database will be configured to use them. This is useful if you want to connect to the database with an external tool. To use this feature, ensure a Docker or Podman container runtime is installed, depending on the database type. Certain databases, such as H2, operate in in-memory mode and do not require a container runtime. Tip Prefix the actual connection details for prod mode with %prod. to ensure they are not applied in dev mode. For more information, see the Profiles section of the "Configuration reference" guide. For more information about Dev Services, see Dev Services overview . For more details and optional configurations, see Dev Services for databases . 1.1.2. Configure a JDBC datasource Add the correct JDBC extension for the database of your choice. quarkus-jdbc-db2 quarkus-jdbc-derby Note The Apache Derby database is deprecated in Red Hat build of Quarkus 3.15 and is planned to be removed in a future release. Red Hat will continue to provide development support for Apache Derby during the current release lifecycle. 
quarkus-jdbc-h2 quarkus-jdbc-mariadb quarkus-jdbc-mssql quarkus-jdbc-mysql quarkus-jdbc-oracle quarkus-jdbc-postgresql Configure your JDBC datasource: quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test quarkus.datasource.jdbc.max-size=16 1 This configuration value is only required if there is more than one database extension on the classpath. If only one viable extension is available, Quarkus assumes this is the correct one. When you add a driver to the test scope, Quarkus automatically includes the specified driver in testing. 1.1.2.1. JDBC connection pool size adjustment To protect your database from overloading during load peaks, size the pool adequately to throttle the database load. The optimal pool size depends on many factors, such as the number of parallel application users or the nature of the workload. Be aware that setting the pool size too low might cause some requests to time out while waiting for a connection. For more information about pool size adjustment properties, see the JDBC configuration reference section. 1.1.3. Configure a reactive datasource Add the correct reactive extension for the database of your choice. quarkus-reactive-mssql-client quarkus-reactive-mysql-client quarkus-reactive-oracle-client quarkus-reactive-pg-client Configure your reactive datasource: quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20 1 This configuration value is only required if there is more than one Reactive driver extension on the classpath. 1.2. Configure datasources The following section describes the configuration for single or multiple datasources. For simplicity, we will reference a single datasource as the default (unnamed) datasource. 1.2.1. Configure a single datasource A datasource can be either a JDBC datasource, reactive, or both. This depends on the configuration and the selection of project extensions. Define a datasource with the following configuration property, where db-kind defines which database platform to connect to, for example, h2 : quarkus.datasource.db-kind=h2 Quarkus deduces the JDBC driver class it needs to use from the specified value of the db-kind database platform attribute. Note This step is required only if your application depends on multiple database drivers. If the application operates with a single driver, this driver is detected automatically. Quarkus currently includes the following built-in database kinds: DB2: db2 Derby: derby Note The Apache Derby database is deprecated in Red Hat build of Quarkus 3.15 and is planned to be removed in a future release. Red Hat will continue to provide development support for Apache Derby during the current release lifecycle. H2: h2 MariaDB: mariadb Microsoft SQL Server: mssql MySQL: mysql Oracle: oracle PostgreSQL: postgresql , pgsql or pg To use a database kind that is not built-in, use other and define the JDBC driver explicitly Note You can use any JDBC driver in a Quarkus app in JVM mode as described in Custom databases and drivers . However, using a non-built-in database kind is unlikely to work when compiling your application to a native executable. 
For native executable builds, it is recommended to either use the available JDBC Quarkus extensions or contribute a custom extension for your specific driver. Configure the following properties to define credentials: quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> You can also retrieve the password from Vault by using a credential provider for your datasource. Until now, the configuration has been the same regardless of whether you are using a JDBC or a reactive driver. When you have defined the database kind and the credentials, the rest depends on what type of driver you are using. It is possible to use JDBC and a reactive driver simultaneously. 1.2.1.1. JDBC datasource JDBC is the most common database connection pattern, typically needed when used in combination with non-reactive Hibernate ORM. To use a JDBC datasource, start with adding the necessary dependencies: For use with a built-in JDBC driver, choose and add the Quarkus extension for your relational database driver from the list below: Derby - quarkus-jdbc-derby Note The Apache Derby database is deprecated in Red Hat build of Quarkus 3.15 and is planned to be removed in a future release. Red Hat will continue to provide development support for Apache Derby during the current release lifecycle. H2 - quarkus-jdbc-h2 Note H2 and Derby databases can be configured to run in "embedded mode"; however, the Derby extension does not support compiling the embedded database engine into native executables. Read Testing with in-memory databases for suggestions regarding integration testing. DB2 - quarkus-jdbc-db2 MariaDB - quarkus-jdbc-mariadb Microsoft SQL Server - quarkus-jdbc-mssql MySQL - quarkus-jdbc-mysql Oracle - quarkus-jdbc-oracle PostgreSQL - quarkus-jdbc-postgresql For example, to add the PostgreSQL driver dependency: ./mvnw quarkus:add-extension -Dextensions="jdbc-postgresql" Note Using a built-in JDBC driver extension automatically includes the Agroal extension, which is the JDBC connection pool implementation applicable for custom and built-in JDBC drivers. However, for custom drivers, Agroal needs to be added explicitly. For use with a custom JDBC driver, add the quarkus-agroal dependency to your project alongside the extension for your relational database driver: ./mvnw quarkus:add-extension -Dextensions="agroal" To use a JDBC driver for another database, use a database with no built-in extension or with a different driver . Configure the JDBC connection by defining the JDBC URL property: quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test Note Note the jdbc prefix in the property name. All the configuration properties specific to JDBC have the jdbc prefix. For reactive datasources, the prefix is reactive . For more information about configuring JDBC, see JDBC URL format reference and Quarkus extensions and database drivers reference . 1.2.1.1.1. Custom databases and drivers If you need to connect to a database for which Quarkus does not provide an extension with the JDBC driver, you can use a custom driver instead. For example, if you are using the OpenTracing JDBC driver in your project. Without an extension, the driver will work correctly in any Quarkus app running in JVM mode. However, the driver is unlikely to work when compiling your application to a native executable. If you plan to make a native executable, use the existing JDBC Quarkus extensions, or contribute one for your driver. Warning OpenTracing has been deprecated in favor of OpenTelemetry. 
For tracing information, please check the related section about Datasource tracing , bellow. A custom driver definition example with the legacy OpenTracing driver: quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver An example for defining access to a database with no built-in support in JVM mode: quarkus.datasource.db-kind=other quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver quarkus.datasource.jdbc.url=jdbc:oracle:thin:@192.168.1.12:1521/ORCL_SVC quarkus.datasource.username=scott quarkus.datasource.password=tiger For all the details about the JDBC configuration options and configuring other aspects, such as the connection pool size, refer to the JDBC configuration reference section. 1.2.1.1.2. Consuming the datasource With Hibernate ORM, the Hibernate layer automatically picks up the datasource and uses it. For the in-code access to the datasource, obtain it as any other bean as follows: @Inject AgroalDataSource defaultDataSource; In the above example, the type is AgroalDataSource , a javax.sql.DataSource subtype. Because of this, you can also use javax.sql.DataSource as the injected type. 1.2.1.2. Reactive datasource Quarkus offers several reactive clients for use with a reactive datasource. Add the corresponding extension to your application: MariaDB/MySQL: quarkus-reactive-mysql-client Microsoft SQL Server: quarkus-reactive-mssql-client Oracle: quarkus-reactive-oracle-client PostgreSQL: quarkus-reactive-pg-client The installed extension must be consistent with the quarkus.datasource.db-kind you define in your datasource configuration. After adding the driver, configure the connection URL and define a proper size for your connection pool. quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20 1.2.1.2.1. Reactive connection pool size adjustment To protect your database from overloading during load peaks, size the pool adequately to throttle the database load. The proper size always depends on many factors, such as the number of parallel application users or the nature of the workload. Be aware that setting the pool size too low might cause some requests to time out while waiting for a connection. For more information about pool size adjustment properties, see the Reactive datasource configuration reference section. 1.2.1.3. JDBC and reactive datasources simultaneously When both a JDBC extension and a reactive datasource extension for the same database kind are included, both JDBC and reactive datasources will be created by default. To use the JDBC and reactive datasources simultaneously: %prod.quarkus.datasource.reactive.url=postgresql:///your_database %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test If you do not want to have both a JDBC datasource and a reactive datasource created, use the following configuration. To disable the JDBC datasource explicitly: quarkus.datasource.jdbc=false To disable the reactive datasource explicitly: quarkus.datasource.reactive=false Tip In most cases, the configuration above will be optional as either a JDBC driver or a reactive datasource extension will be present, not both. 1.2.2. Configure multiple datasources Note The Hibernate ORM extension supports defining persistence units by using configuration properties. For each persistence unit, point to the datasource of your choice. 
Defining multiple datasources works like defining a single datasource, with one important change - you have to specify a name (configuration property) for each datasource. The following example provides three different datasources: the default one a datasource named users a datasource named inventory Each with its configuration: quarkus.datasource.db-kind=h2 quarkus.datasource.username=username-default quarkus.datasource.jdbc.url=jdbc:h2:mem:default quarkus.datasource.jdbc.max-size=13 quarkus.datasource.users.db-kind=h2 quarkus.datasource.users.username=username1 quarkus.datasource.users.jdbc.url=jdbc:h2:mem:users quarkus.datasource.users.jdbc.max-size=11 quarkus.datasource.inventory.db-kind=h2 quarkus.datasource.inventory.username=username2 quarkus.datasource.inventory.jdbc.url=jdbc:h2:mem:inventory quarkus.datasource.inventory.jdbc.max-size=12 Notice there is an extra section in the configuration property. The syntax is as follows: quarkus.datasource.[optional name.][datasource property] . Note Even when only one database extension is installed, named databases need to specify at least one build-time property so that Quarkus can detect them. Generally, this is the db-kind property, but you can also specify Dev Services properties to create named datasources according to the Dev Services for Databases guide. 1.2.2.1. Named datasource injection When using multiple datasources, each DataSource also has the io.quarkus.agroal.DataSource qualifier with the name of the datasource as the value. By using the properties mentioned in the section to configure three different datasources, inject each one of them as follows: @Inject AgroalDataSource defaultDataSource; @Inject @DataSource("users") AgroalDataSource usersDataSource; @Inject @DataSource("inventory") AgroalDataSource inventoryDataSource; 1.2.3. Activate or deactivate datasources When a datasource is configured at build time, it is active by default at runtime. This means that Quarkus will start the corresponding JDBC connection pool or reactive client when the application starts. To deactivate a datasource at runtime, set quarkus.datasource[.optional name].active to false . Quarkus will then skip starting the JDBC connection pool or reactive client during application startup. Any attempt to use the deactivated datasource at runtime results in an exception. This feature is especially useful when you need the application to select one datasource from a predefined set at runtime. Warning If another Quarkus extension relies on an inactive datasource, that extension might fail to start. In such a case, you will need to deactivate that other extension as well. For an example of this scenario, see the Hibernate ORM section. For example, with the following configuration: quarkus.datasource."pg".db-kind=postgres quarkus.datasource."pg".active=false quarkus.datasource."pg".jdbc.url=jdbc:postgresql:///your_database quarkus.datasource."oracle".db-kind=oracle quarkus.datasource."oracle".active=false quarkus.datasource."oracle".jdbc.url=jdbc:oracle:///your_database Setting quarkus.datasource."pg".active=true at runtime will make only the PostgreSQL datasource available, and setting quarkus.datasource."oracle".active=true at runtime will make only the Oracle datasource available. Tip Custom configuration profiles can help simplify such a setup. 
By appending the following profile-specific configuration to the one above, you can select a persistence unit/datasource at runtime simply by setting quarkus.profile : quarkus.profile=prod,pg or quarkus.profile=prod,oracle . %pg.quarkus.hibernate-orm."pg".active=true %pg.quarkus.datasource."pg".active=true # Add any pg-related runtime configuration here, prefixed with "%pg." %oracle.quarkus.hibernate-orm."oracle".active=true %oracle.quarkus.datasource."oracle".active=true # Add any pg-related runtime configuration here, prefixed with "%pg." Tip It can also be useful to define a CDI bean producer redirecting to the currently active datasource, like this: public class MyProducer { @Inject DataSourceSupport dataSourceSupport; @Inject @DataSource("pg") AgroalDataSource pgDataSourceBean; @Inject @DataSource("oracle") AgroalDataSource oracleDataSourceBean; @Produces @ApplicationScoped public AgroalDataSource dataSource() { if (dataSourceSupport.getInactiveNames().contains("pg")) { return oracleDataSourceBean; } else { return pgDataSourceBean; } } } 1.2.4. Use multiple datasources in a single transaction By default, XA support on datasources is disabled. Therefore, a transaction may include no more than one datasource. Attempting to access multiple non-XA datasources in the same transaction results in an exception similar to the following: To allow using multiple JDBC datasources in the same transaction: Make sure your JDBC driver supports XA. All supported JDBC drivers do , but other JDBC drivers might not. Make sure your database server is configured to enable XA. Enable XA support explicitly for each relevant datasource by setting quarkus.datasource[.optional name].jdbc.transactions to xa . Using XA, a rollback in one datasource will trigger a rollback in every other datasource enrolled in the transaction. Note XA transactions on reactive datasources are not supported at the moment. Note If your transaction involves non-datasource resources, be aware that they might not support XA transactions or might require additional configuration. If XA cannot be enabled for one of your datasources: Be aware that enabling XA for all datasources except one (and only one) is still supported through Last Resource Commit Optimization (LRCO) . If you do not need a rollback for one datasource to trigger a rollback for other datasources, consider splitting your code into multiple transactions. To do so, use QuarkusTransaction.requiringNew() / @Transactional(REQUIRES_NEW) (preferably) or UserTransaction (for more complex use cases). Caution If no other solution works, and to maintain compatibility with Quarkus 3.8 and earlier, set quarkus.transaction-manager.unsafe-multiple-last-resources to allow to enable unsafe transaction handling across multiple non-XA datasources. With this property set to allow, it might happen that a transaction rollback will only be applied to the last non-XA datasource, while other non-XA datasources have already committed their changes, potentially leaving your overall system in an inconsistent state. Alternatively, you can allow the same unsafe behavior, but with warnings when it takes effect: Setting the property to warn-each results in logging a warning on each offending transaction. Setting the property to warn-first results in logging a warning on the first offending transaction. We do not recommend using this configuration property, and we plan to remove it in the future, so you should fix your application accordingly. 
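To make the recommended fix concrete, the following is a minimal sketch of splitting the work across two non-XA datasources into separate transactions with QuarkusTransaction.requiringNew() , instead of relying on the unsafe setting above. It reuses the default and inventory datasources from the earlier multiple-datasource example; the table names and SQL statements are hypothetical placeholders, and @Transactional(REQUIRES_NEW) on separate beans would achieve the same effect.

```java
import io.agroal.api.AgroalDataSource;
import io.quarkus.agroal.DataSource;
import io.quarkus.narayana.jta.QuarkusTransaction;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

@ApplicationScoped
public class OrderArchiver {

    @Inject
    AgroalDataSource defaultDataSource;      // default (unnamed) datasource

    @Inject
    @DataSource("inventory")
    AgroalDataSource inventoryDataSource;    // named datasource from the example above

    public void archive(long orderId) {
        // Each datasource is written in its own transaction. A failure in the second
        // block does not roll back the first one, so only split work like this when
        // that decoupling is acceptable for your use case.
        QuarkusTransaction.requiringNew().run(
                () -> execute(defaultDataSource, "INSERT INTO archived_orders(id) VALUES (?)", orderId));
        QuarkusTransaction.requiringNew().run(
                () -> execute(inventoryDataSource, "DELETE FROM pending_orders WHERE id = ?", orderId));
    }

    private void execute(AgroalDataSource dataSource, String sql, long id) {
        try (Connection connection = dataSource.getConnection();
                PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setLong(1, id);
            statement.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException("Statement failed: " + sql, e);
        }
    }
}
```

If the two writes must stand or fall together, enabling XA as described earlier in this section remains the proper solution rather than the unsafe setting discussed above.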
If you think your use case of this feature is valid and this option should be kept around, open an issue in the Quarkus tracker explaining why. 1.3. Datasource integrations 1.3.1. Datasource health check If you use the quarkus-smallrye-health extension, the quarkus-agroal and reactive client extensions automatically add a readiness health check to validate the datasource. When you access your application's health readiness endpoint, /q/health/ready by default, you receive information about the datasource validation status. If you have multiple datasources, all datasources are checked, and if a single datasource validation failure occurs, the status changes to DOWN . This behavior can be disabled by using the quarkus.datasource.health.enabled property. To exclude only a particular datasource from the health check: quarkus.datasource."datasource-name".health-exclude=true 1.3.2. Datasource metrics If you are using the quarkus-micrometer or quarkus-smallrye-metrics extension, quarkus-agroal can contribute some datasource-related metrics to the metric registry. This can be activated by setting the quarkus.datasource.metrics.enabled property to true . For the exposed metrics to contain any actual values, a metric collection must be enabled internally by the Agroal mechanisms. By default, this metric collection mechanism is enabled for all datasources when a metrics extension is present, and metrics for the Agroal extension are enabled. To disable metrics for a particular datasource, set quarkus.datasource.jdbc.enable-metrics to false , or apply quarkus.datasource.<datasource name>.jdbc.enable-metrics for a named datasource. This disables collecting the metrics and exposing them in the /q/metrics endpoint if the mechanism to collect them is disabled. Conversely, setting quarkus.datasource.jdbc.enable-metrics to true , or quarkus.datasource.<datasource name>.jdbc.enable-metrics for a named datasource explicitly enables metrics collection even if a metrics extension is not in use. This can be useful if you need to access the collected metrics programmatically. They are available after calling dataSource.getMetrics() on an injected AgroalDataSource instance. If the metrics collection for this datasource is disabled, all values result in zero. 1.3.3. Datasource tracing To use tracing with a datasource, you need to add the quarkus-opentelemetry extension to your project. You do not need to declare a different driver to enable tracing. If you use a JDBC driver, you need to follow the instructions in the OpenTelemetry extension . Even with all the tracing infrastructure in place, the datasource tracing is not enabled by default, and you need to enable it by setting this property: # enable tracing quarkus.datasource.jdbc.telemetry=true 1.3.4. Narayana transaction manager integration Integration is automatic if the Narayana JTA extension is also available. You can override this by setting the transactions configuration property: quarkus.datasource.jdbc.transactions for default unnamed datasource quarkus.datasource. <datasource-name> .jdbc.transactions for named datasource For more information, see the Configuration reference section below. To facilitate the storage of transaction logs in a database by using JDBC, see Configuring transaction logs to be stored in a datasource section of the Using transactions in Quarkus guide. 1.3.4.1. 
Named datasources When using Dev Services, the default datasource will always be created, but to specify a named datasource, you need to have at least one build time property so Quarkus can detect how to create the datasource. You will usually specify the db-kind property or explicitly enable Dev Services by setting quarkus.datasource."name".devservices.enabled=true . 1.3.5. Testing with in-memory databases Some databases like H2 and Derby are commonly used in the embedded mode as a facility to run integration tests quickly. The recommended approach is to use the real database you intend to use in production, especially when Dev Services provide a zero-config database for testing , and running tests against a container is relatively quick and produces expected results on an actual environment. However, it is also possible to use JVM-powered databases for scenarios when the ability to run simple integration tests is required. 1.3.5.1. Support and limitations Embedded databases (H2 and Derby) work in JVM mode. For native mode, the following limitations apply: Derby cannot be embedded into the application in native mode. However, the Quarkus Derby extension allows native compilation of the Derby JDBC client , supporting remote connections. Embedding H2 within your native image is not recommended. Consider using an alternative approach, for example, using a remote connection to a separate database instead. 1.4. References 1.4.1. Common datasource configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.health.enabled Whether or not a health check is published in case the smallrye-health extension is present. This is a global setting and is not specific to a datasource. Environment variable: QUARKUS_DATASOURCE_HEALTH_ENABLED boolean true quarkus.datasource.metrics.enabled Whether or not datasource metrics are published in case a metrics extension is present. This is a global setting and is not specific to a datasource. Note This is different from the "jdbc.enable-metrics" property that needs to be set on the JDBC datasource level to enable collection of metrics for that datasource. Environment variable: QUARKUS_DATASOURCE_METRICS_ENABLED boolean false quarkus.datasource.db-kind quarkus.datasource."datasource-name".db-kind The kind of database we will connect to (e.g. h2, postgresql... ). Environment variable: QUARKUS_DATASOURCE_DB_KIND string quarkus.datasource.db-version quarkus.datasource."datasource-name".db-version The version of the database we will connect to (e.g. '10.0'). Caution The version number set here should follow the same numbering scheme as the string returned by java.sql.DatabaseMetaData#getDatabaseProductVersion() for your database's JDBC driver. This numbering scheme may be different from the most popular one for your database; for example Microsoft SQL Server 2016 would be version 13 . As a rule, the version set here should be as high as possible, but must be lower than or equal to the version of any database your application will connect to. A high version will allow better performance and using more features (e.g. Hibernate ORM may generate more efficient SQL, avoid workarounds and take advantage of more database features), but if it is higher than the version of the database you want to connect to, it may lead to runtime exceptions (e.g. Hibernate ORM may generate invalid SQL that your database will reject). 
Some extensions (like the Hibernate ORM extension) will try to check this version against the actual database version on startup, leading to a startup failure when the actual version is lower or simply a warning in case the database cannot be reached. The default for this property is specific to each extension; the Hibernate ORM extension will default to the oldest version it supports. Environment variable: QUARKUS_DATASOURCE_DB_VERSION string quarkus.datasource.health-exclude quarkus.datasource."datasource-name".health-exclude Whether this particular data source should be excluded from the health check if the general health check for data sources is enabled. By default, the health check includes all configured data sources (if it is enabled). Environment variable: QUARKUS_DATASOURCE_HEALTH_EXCLUDE boolean false quarkus.datasource.active quarkus.datasource."datasource-name".active Whether this datasource should be active at runtime. See this section of the documentation . If the datasource is not active, it won't start with the application, and accessing the corresponding Datasource CDI bean will fail, meaning in particular that consumers of this datasource (e.g. Hibernate ORM persistence units) will fail to start unless they are inactive too. Environment variable: QUARKUS_DATASOURCE_ACTIVE boolean true quarkus.datasource.username quarkus.datasource."datasource-name".username The datasource username Environment variable: QUARKUS_DATASOURCE_USERNAME string quarkus.datasource.password quarkus.datasource."datasource-name".password The datasource password Environment variable: QUARKUS_DATASOURCE_PASSWORD string quarkus.datasource.credentials-provider quarkus.datasource."datasource-name".credentials-provider The credentials provider name Environment variable: QUARKUS_DATASOURCE_CREDENTIALS_PROVIDER string quarkus.datasource.credentials-provider-name quarkus.datasource."datasource-name".credentials-provider-name The credentials provider bean name. This is a bean name (as in @Named ) of a bean that implements CredentialsProvider . It is used to select the credentials provider bean when multiple exist. This is unnecessary when there is only one credentials provider available. For Vault, the credentials provider bean name is vault-credentials-provider . Environment variable: QUARKUS_DATASOURCE_CREDENTIALS_PROVIDER_NAME string Dev Services Type Default quarkus.datasource.devservices.enabled quarkus.datasource."datasource-name".devservices.enabled Whether this Dev Service should start with the application in dev mode or tests. Dev Services are enabled by default unless connection configuration (e.g. the JDBC URL or reactive client URL) is set explicitly. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_ENABLED boolean quarkus.datasource.devservices.image-name quarkus.datasource."datasource-name".devservices.image-name The container image name for container-based Dev Service providers. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_IMAGE_NAME string quarkus.datasource.devservices.container-env."environment-variable-name" quarkus.datasource."datasource-name".devservices.container-env."environment-variable-name" Environment variables that are passed to the container. 
Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_CONTAINER_ENV__ENVIRONMENT_VARIABLE_NAME_ Map<String,String> quarkus.datasource.devservices.container-properties."property-key" quarkus.datasource."datasource-name".devservices.container-properties."property-key" Generic properties that are passed for additional container configuration. Properties defined here are database-specific and are interpreted specifically in each database dev service implementation. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_CONTAINER_PROPERTIES__PROPERTY_KEY_ Map<String,String> quarkus.datasource.devservices.properties."property-key" quarkus.datasource."datasource-name".devservices.properties."property-key" Generic properties that are added to the database connection URL. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_PROPERTIES__PROPERTY_KEY_ Map<String,String> quarkus.datasource.devservices.port quarkus.datasource."datasource-name".devservices.port Optional fixed port the dev service will listen to. If not defined, the port will be chosen randomly. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_PORT int quarkus.datasource.devservices.command quarkus.datasource."datasource-name".devservices.command The container start command to use for container-based Dev Service providers. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_COMMAND string quarkus.datasource.devservices.db-name quarkus.datasource."datasource-name".devservices.db-name The database name to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_DB_NAME string quarkus.datasource.devservices.username quarkus.datasource."datasource-name".devservices.username The username to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_USERNAME string quarkus.datasource.devservices.password quarkus.datasource."datasource-name".devservices.password The password to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_PASSWORD string quarkus.datasource.devservices.init-script-path quarkus.datasource."datasource-name".devservices.init-script-path The path to a SQL script to be loaded from the classpath and applied to the Dev Service database. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_INIT_SCRIPT_PATH string quarkus.datasource.devservices.volumes."host-path" quarkus.datasource."datasource-name".devservices.volumes."host-path" The volumes to be mapped to the container. The map key corresponds to the host location; the map value is the container location. If the host location starts with "classpath:", the mapping loads the resource from the classpath with read-only permission. When using a file system location, the volume will be generated with read-write permission, potentially leading to data loss or modification in your file system. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_VOLUMES__HOST_PATH_ Map<String,String> quarkus.datasource.devservices.reuse quarkus.datasource."datasource-name".devservices.reuse Whether to keep Dev Service containers running after a dev mode session or test suite execution to reuse them in the dev mode session or test suite execution. 
Within a dev mode session or test suite execution, Quarkus will always reuse Dev Services as long as their configuration (username, password, environment, port bindings, ... ) did not change. This feature is specifically about keeping containers running when Quarkus is not running to reuse them across runs. Warning This feature needs to be enabled explicitly in testcontainers.properties , may require changes to how you configure data initialization in dev mode and tests, and may leave containers running indefinitely, forcing you to stop and remove them manually. See this section of the documentation for more information. This configuration property is set to true by default, so it is mostly useful to disable reuse, if you enabled it in testcontainers.properties but only want to use it for some of your Quarkus applications or datasources. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_REUSE boolean true 1.4.2. JDBC configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.jdbc quarkus.datasource."datasource-name".jdbc If we create a JDBC datasource for this datasource. Environment variable: QUARKUS_DATASOURCE_JDBC boolean true quarkus.datasource.jdbc.driver quarkus.datasource."datasource-name".jdbc.driver The datasource driver class name Environment variable: QUARKUS_DATASOURCE_JDBC_DRIVER string quarkus.datasource.jdbc.transactions quarkus.datasource."datasource-name".jdbc.transactions Whether we want to use regular JDBC transactions, XA, or disable all transactional capabilities. When enabling XA you will need a driver implementing javax.sql.XADataSource . Environment variable: QUARKUS_DATASOURCE_JDBC_TRANSACTIONS enabled : Integrate the JDBC Datasource with the JTA TransactionManager of Quarkus. This is the default. xa : Similarly to enabled , also enables integration with the JTA TransactionManager of Quarkus, but enabling XA transactions as well. Requires a JDBC driver implementing javax.sql.XADataSource disabled : Disables the Agroal integration with the Narayana TransactionManager. This is typically a bad idea, and is only useful in special cases: make sure to not use this without having a deep understanding of the implications. enabled quarkus.datasource.jdbc.enable-metrics quarkus.datasource."datasource-name".jdbc.enable-metrics Enable datasource metrics collection. If unspecified, collecting metrics will be enabled by default if a metrics extension is active. Environment variable: QUARKUS_DATASOURCE_JDBC_ENABLE_METRICS boolean quarkus.datasource.jdbc.tracing quarkus.datasource."datasource-name".jdbc.tracing Enable JDBC tracing. Disabled by default. Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING boolean false quarkus.datasource.jdbc.telemetry quarkus.datasource."datasource-name".jdbc.telemetry Enable OpenTelemetry JDBC instrumentation. Environment variable: QUARKUS_DATASOURCE_JDBC_TELEMETRY boolean false quarkus.datasource.jdbc.url quarkus.datasource."datasource-name".jdbc.url The datasource URL Environment variable: QUARKUS_DATASOURCE_JDBC_URL string quarkus.datasource.jdbc.initial-size quarkus.datasource."datasource-name".jdbc.initial-size The initial size of the pool. Usually you will want to set the initial size to match at least the minimal size, but this is not enforced so to allow for architectures which prefer a lazy initialization of the connections on boot, while being able to sustain a minimal pool size after boot. 
Environment variable: QUARKUS_DATASOURCE_JDBC_INITIAL_SIZE int quarkus.datasource.jdbc.min-size quarkus.datasource."datasource-name".jdbc.min-size The datasource pool minimum size Environment variable: QUARKUS_DATASOURCE_JDBC_MIN_SIZE int 0 quarkus.datasource.jdbc.max-size quarkus.datasource."datasource-name".jdbc.max-size The datasource pool maximum size Environment variable: QUARKUS_DATASOURCE_JDBC_MAX_SIZE int 20 quarkus.datasource.jdbc.background-validation-interval quarkus.datasource."datasource-name".jdbc.background-validation-interval The interval at which we validate idle connections in the background. Set to 0 to disable background validation. Environment variable: QUARKUS_DATASOURCE_JDBC_BACKGROUND_VALIDATION_INTERVAL Duration 2M quarkus.datasource.jdbc.foreground-validation-interval quarkus.datasource."datasource-name".jdbc.foreground-validation-interval Perform foreground validation on connections that have been idle for longer than the specified interval. Environment variable: QUARKUS_DATASOURCE_JDBC_FOREGROUND_VALIDATION_INTERVAL Duration quarkus.datasource.jdbc.acquisition-timeout quarkus.datasource."datasource-name".jdbc.acquisition-timeout The timeout before cancelling the acquisition of a new connection Environment variable: QUARKUS_DATASOURCE_JDBC_ACQUISITION_TIMEOUT Duration 5S quarkus.datasource.jdbc.leak-detection-interval quarkus.datasource."datasource-name".jdbc.leak-detection-interval The interval at which we check for connection leaks. Environment variable: QUARKUS_DATASOURCE_JDBC_LEAK_DETECTION_INTERVAL Duration This feature is disabled by default. quarkus.datasource.jdbc.idle-removal-interval quarkus.datasource."datasource-name".jdbc.idle-removal-interval The interval at which we try to remove idle connections. Environment variable: QUARKUS_DATASOURCE_JDBC_IDLE_REMOVAL_INTERVAL Duration 5M quarkus.datasource.jdbc.max-lifetime quarkus.datasource."datasource-name".jdbc.max-lifetime The max lifetime of a connection. Environment variable: QUARKUS_DATASOURCE_JDBC_MAX_LIFETIME Duration By default, there is no restriction on the lifespan of a connection. quarkus.datasource.jdbc.transaction-isolation-level quarkus.datasource."datasource-name".jdbc.transaction-isolation-level The transaction isolation level. Environment variable: QUARKUS_DATASOURCE_JDBC_TRANSACTION_ISOLATION_LEVEL undefined , none , read-uncommitted , read-committed , repeatable-read , serializable quarkus.datasource.jdbc.extended-leak-report quarkus.datasource."datasource-name".jdbc.extended-leak-report Collect and display extra troubleshooting info on leaked connections. Environment variable: QUARKUS_DATASOURCE_JDBC_EXTENDED_LEAK_REPORT boolean false quarkus.datasource.jdbc.flush-on-close quarkus.datasource."datasource-name".jdbc.flush-on-close Allows connections to be flushed upon return to the pool. It's not enabled by default. Environment variable: QUARKUS_DATASOURCE_JDBC_FLUSH_ON_CLOSE boolean false quarkus.datasource.jdbc.detect-statement-leaks quarkus.datasource."datasource-name".jdbc.detect-statement-leaks When enabled, Agroal will be able to produce a warning when a connection is returned to the pool without the application having closed all open statements. This is unrelated with tracking of open connections. Disable for peak performance, but only when there's high confidence that no leaks are happening. 
Environment variable: QUARKUS_DATASOURCE_JDBC_DETECT_STATEMENT_LEAKS boolean true quarkus.datasource.jdbc.new-connection-sql quarkus.datasource."datasource-name".jdbc.new-connection-sql Query executed when first using a connection. Environment variable: QUARKUS_DATASOURCE_JDBC_NEW_CONNECTION_SQL string quarkus.datasource.jdbc.validation-query-sql quarkus.datasource."datasource-name".jdbc.validation-query-sql Query executed to validate a connection. Environment variable: QUARKUS_DATASOURCE_JDBC_VALIDATION_QUERY_SQL string quarkus.datasource.jdbc.validate-on-borrow quarkus.datasource."datasource-name".jdbc.validate-on-borrow Forces connection validation prior to acquisition (foreground validation) regardless of the idle status. Because of the overhead of performing validation on every call, it's recommended to rely on default idle validation instead, and to leave this to false . Environment variable: QUARKUS_DATASOURCE_JDBC_VALIDATE_ON_BORROW boolean false quarkus.datasource.jdbc.pooling-enabled quarkus.datasource."datasource-name".jdbc.pooling-enabled Disable pooling to prevent reuse of Connections. Use this when an external pool manages the life-cycle of Connections. Environment variable: QUARKUS_DATASOURCE_JDBC_POOLING_ENABLED boolean true quarkus.datasource.jdbc.transaction-requirement quarkus.datasource."datasource-name".jdbc.transaction-requirement Require an active transaction when acquiring a connection. Recommended for production. WARNING: Some extensions acquire connections without holding a transaction for things like schema updates and schema validation. Setting this setting to STRICT may lead to failures in those cases. Environment variable: QUARKUS_DATASOURCE_JDBC_TRANSACTION_REQUIREMENT off , warn , strict quarkus.datasource.jdbc.additional-jdbc-properties."property-key" quarkus.datasource."datasource-name".jdbc.additional-jdbc-properties."property-key" Other unspecified properties to be passed to the JDBC driver when creating new connections. Environment variable: QUARKUS_DATASOURCE_JDBC_ADDITIONAL_JDBC_PROPERTIES__PROPERTY_KEY_ Map<String,String> quarkus.datasource.jdbc.tracing.enabled quarkus.datasource."datasource-name".jdbc.tracing.enabled Enable JDBC tracing. Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING_ENABLED boolean false if quarkus.datasource.jdbc.tracing=false and true if quarkus.datasource.jdbc.tracing=true quarkus.datasource.jdbc.tracing.trace-with-active-span-only quarkus.datasource."datasource-name".jdbc.tracing.trace-with-active-span-only Trace calls with active Spans only Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING_TRACE_WITH_ACTIVE_SPAN_ONLY boolean false quarkus.datasource.jdbc.tracing.ignore-for-tracing quarkus.datasource."datasource-name".jdbc.tracing.ignore-for-tracing Ignore specific queries from being traced Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING_IGNORE_FOR_TRACING string Ignore specific queries from being traced, multiple queries can be specified separated by semicolon, double quotes should be escaped with \ quarkus.datasource.jdbc.telemetry.enabled quarkus.datasource."datasource-name".jdbc.telemetry.enabled Enable OpenTelemetry JDBC instrumentation. Environment variable: QUARKUS_DATASOURCE_JDBC_TELEMETRY_ENABLED boolean false if quarkus.datasource.jdbc.telemetry=false and true if quarkus.datasource.jdbc.telemetry=true About the Duration format To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information. 
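For illustration only (this small sketch is not part of the original reference), the following shows how standard-format values are parsed with java.time.Duration . As explained below, simplified values such as the 2M , 5M , and 5S defaults listed above are translated to PT2M , PT5M , and PT5S before parsing.

```java
import java.time.Duration;

public class DurationDefaults {
    public static void main(String[] args) {
        Duration backgroundValidation = Duration.parse("PT2M"); // background-validation-interval default (2 minutes)
        Duration idleRemoval = Duration.parse("PT5M");          // idle-removal-interval default (5 minutes)
        Duration acquisitionTimeout = Duration.parse("PT5S");   // acquisition-timeout default (5 seconds)

        System.out.println(backgroundValidation.getSeconds()); // 120
        System.out.println(idleRemoval.toMinutes());           // 5
        System.out.println(acquisitionTimeout.toMillis());     // 5000
    }
}
```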
You can also use a simplified format, starting with a number: If the value is only a number, it represents time in seconds. If the value is a number followed by ms , it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: If the value is a number followed by h , m , or s , it is prefixed with PT . If the value is a number followed by d , it is prefixed with P . 1.4.3. JDBC URL reference Each of the supported databases contains different JDBC URL configuration options. The following section gives an overview of each database URL and a link to the official documentation. 1.4.3.1. DB2 jdbc:db2://<serverName>[:<portNumber>]/<databaseName>[:<key1>=<value>;[<key2>=<value2>;]] Example jdbc:db2://localhost:50000/MYDB:user=dbadm;password=dbadm; For more information on URL syntax and additional supported options, see the official documentation . 1.4.3.2. Derby jdbc:derby:[//serverName[:portNumber]/][memory:]databaseName[;property=value[;property=value]] Example jdbc:derby://localhost:1527/myDB , jdbc:derby:memory:myDB;create=true Derby is an embedded database that can run as a server, based on a file, or can run completely in memory. All of these options are available as listed above. For more information, see the official documentation . 1.4.3.3. H2 jdbc:h2:{ {.|mem:}[name] | [file:]fileName | {tcp|ssl}:[//]server[:port][,server2[:port]]/name }[;key=value... ] Example jdbc:h2:tcp://localhost/~/test , jdbc:h2:mem:myDB H2 is a database that can run in embedded or server mode. It can use a file storage or run entirely in memory. All of these options are available as listed above. For more information, see the official documentation . 1.4.3.4. MariaDB jdbc:mariadb:[replication:|failover:|sequential:|aurora:]//<hostDescription>[,<hostDescription>... ]/[database][?<key1>=<value1>[&<key2>=<value2>]] hostDescription:: <host>[:<portnumber>] or address=(host=<host>)[(port=<portnumber>)][(type=(master|slave))] Example jdbc:mariadb://localhost:3306/test For more information, see the official documentation . 1.4.3.5. Microsoft SQL server jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]] Example jdbc:sqlserver://localhost:1433;databaseName=AdventureWorks The Microsoft SQL Server JDBC driver works essentially the same as the others. For more information, see the official documentation . 1.4.3.6. MySQL jdbc:mysql:[replication:|failover:|sequential:|aurora:]//<hostDescription>[,<hostDescription>... ]/[database][?<key1>=<value1>[&<key2>=<value2>]] hostDescription:: <host>[:<portnumber>] or address=(host=<host>)[(port=<portnumber>)][(type=(master|slave))] Example jdbc:mysql://localhost:3306/test For more information, see the official documentation . 1.4.3.6.1. MySQL limitations When compiling a Quarkus application to a native image, the MySQL support for JMX and Oracle Cloud Infrastructure (OCI) integrations are disabled as they are incompatible with GraalVM native images. The lack of JMX support is a natural consequence of running in native mode and is unlikely to be resolved. The integration with OCI is not supported. 1.4.3.7. Oracle jdbc:oracle:driver_type:@database_specifier Example jdbc:oracle:thin:@localhost:1521/ORCL_SVC For more information, see the official documentation . 1.4.3.8. PostgreSQL jdbc:postgresql:[//][host][:port][/database][?key=value... 
] Example jdbc:postgresql://localhost/test The defaults for the different parts are as follows: host localhost port 5432 database same name as the username For more information about additional parameters, see the official documentation . 1.4.4. Quarkus extensions and database drivers reference The following tables list the built-in db-kind values, the corresponding Quarkus extensions, and the JDBC drivers used by those extensions. When using one of the built-in datasource kinds, the JDBC and Reactive drivers are resolved automatically to match the values from these tables. Table 1.1. Database platform kind to JDBC driver mapping Database kind Quarkus extension Drivers db2 quarkus-jdbc-db2 JDBC: com.ibm.db2.jcc.DB2Driver XA: com.ibm.db2.jcc.DB2XADataSource derby quarkus-jdbc-derby JDBC: org.apache.derby.jdbc.ClientDriver XA: org.apache.derby.jdbc.ClientXADataSource h2 quarkus-jdbc-h2 JDBC: org.h2.Driver XA: org.h2.jdbcx.JdbcDataSource mariadb quarkus-jdbc-mariadb JDBC: org.mariadb.jdbc.Driver XA: org.mariadb.jdbc.MySQLDataSource mssql quarkus-jdbc-mssql JDBC: com.microsoft.sqlserver.jdbc.SQLServerDriver XA: com.microsoft.sqlserver.jdbc.SQLServerXADataSource mysql quarkus-jdbc-mysql JDBC: com.mysql.cj.jdbc.Driver XA: com.mysql.cj.jdbc.MysqlXADataSource oracle quarkus-jdbc-oracle JDBC: oracle.jdbc.driver.OracleDriver XA: oracle.jdbc.xa.client.OracleXADataSource postgresql quarkus-jdbc-postgresql JDBC: org.postgresql.Driver XA: org.postgresql.xa.PGXADataSource Table 1.2. Database kind to Reactive driver mapping Database kind Quarkus extension Driver oracle reactive-oracle-client io.vertx.oracleclient.spi.OracleDriver mysql reactive-mysql-client io.vertx.mysqlclient.spi.MySQLDriver mssql reactive-mssql-client io.vertx.mssqlclient.spi.MSSQLDriver postgresql reactive-pg-client io.vertx.pgclient.spi.PgDriver Tip This automatic resolution is applicable in most cases so that driver configuration is not needed. 1.4.5. Reactive datasource configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.reactive If we create a Reactive datasource for this datasource. Environment variable: QUARKUS_DATASOURCE_REACTIVE boolean true quarkus.datasource.reactive.cache-prepared-statements Whether prepared statements should be cached on the client side. Environment variable: QUARKUS_DATASOURCE_REACTIVE_CACHE_PREPARED_STATEMENTS boolean false quarkus.datasource.reactive.url The datasource URLs. If multiple values are set, this datasource will create a pool with a list of servers instead of a single server. The pool uses round-robin load balancing for server selection during connection establishment. Note that certain drivers might not accommodate multiple values in this context. Environment variable: QUARKUS_DATASOURCE_REACTIVE_URL list of string quarkus.datasource.reactive.max-size The datasource pool maximum size. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MAX_SIZE int 20 quarkus.datasource.reactive.event-loop-size When a new connection object is created, the pool assigns it an event loop. When #event-loop-size is set to a strictly positive value, the pool assigns as many event loops as specified, in a round-robin fashion. By default, the number of event loops configured or calculated by Quarkus is used. If #event-loop-size is set to zero or a negative value, the pool assigns the current event loop to the new connection. 
Environment variable: QUARKUS_DATASOURCE_REACTIVE_EVENT_LOOP_SIZE int quarkus.datasource.reactive.trust-all Whether all server certificates should be trusted. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_ALL boolean false quarkus.datasource.reactive.trust-certificate-pem PEM Trust config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PEM boolean false quarkus.datasource.reactive.trust-certificate-pem.certs Comma-separated list of the trust certificate files (Pem format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PEM_CERTS list of string quarkus.datasource.reactive.trust-certificate-jks JKS config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_JKS boolean false quarkus.datasource.reactive.trust-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_JKS_PATH string quarkus.datasource.reactive.trust-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_JKS_PASSWORD string quarkus.datasource.reactive.trust-certificate-pfx PFX config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PFX boolean false quarkus.datasource.reactive.trust-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PFX_PATH string quarkus.datasource.reactive.trust-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PFX_PASSWORD string quarkus.datasource.reactive.key-certificate-pem PEM Key/cert config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PEM boolean false quarkus.datasource.reactive.key-certificate-pem.keys Comma-separated list of the path to the key files (Pem format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PEM_KEYS list of string quarkus.datasource.reactive.key-certificate-pem.certs Comma-separated list of the path to the certificate files (Pem format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PEM_CERTS list of string quarkus.datasource.reactive.key-certificate-jks JKS config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_JKS boolean false quarkus.datasource.reactive.key-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_JKS_PATH string quarkus.datasource.reactive.key-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_JKS_PASSWORD string quarkus.datasource.reactive.key-certificate-pfx PFX config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PFX boolean false quarkus.datasource.reactive.key-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PFX_PATH string quarkus.datasource.reactive.key-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PFX_PASSWORD string quarkus.datasource.reactive.reconnect-attempts The number of reconnection attempts when a pooled connection cannot be established on first try. 
Environment variable: QUARKUS_DATASOURCE_REACTIVE_RECONNECT_ATTEMPTS int 0 quarkus.datasource.reactive.reconnect-interval The interval between reconnection attempts when a pooled connection cannot be established on first try. Environment variable: QUARKUS_DATASOURCE_REACTIVE_RECONNECT_INTERVAL Duration PT1S quarkus.datasource.reactive.hostname-verification-algorithm The hostname verification algorithm to use in case the server's identity should be checked. Should be HTTPS , LDAPS or NONE . NONE is the default value and disables the verification. Environment variable: QUARKUS_DATASOURCE_REACTIVE_HOSTNAME_VERIFICATION_ALGORITHM string NONE quarkus.datasource.reactive.idle-timeout The maximum time a connection remains unused in the pool before it is closed. Environment variable: QUARKUS_DATASOURCE_REACTIVE_IDLE_TIMEOUT Duration no timeout quarkus.datasource.reactive.max-lifetime The maximum time a connection remains in the pool, after which it will be closed upon return and replaced as necessary. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MAX_LIFETIME Duration no timeout quarkus.datasource.reactive.shared Set to true to share the pool among datasources. There can be multiple shared pools distinguished by name, when no specific name is set, the __vertx.DEFAULT name is used. Environment variable: QUARKUS_DATASOURCE_REACTIVE_SHARED boolean false quarkus.datasource.reactive.name Set the pool name, used when the pool is shared among datasources, otherwise ignored. Environment variable: QUARKUS_DATASOURCE_REACTIVE_NAME string quarkus.datasource.reactive.additional-properties."property-key" Other unspecified properties to be passed through the Reactive SQL Client directly to the database when new connections are initiated. Environment variable: QUARKUS_DATASOURCE_REACTIVE_ADDITIONAL_PROPERTIES__PROPERTY_KEY_ Map<String,String> Additional named datasources Type Default quarkus.datasource."datasource-name".reactive If we create a Reactive datasource for this datasource. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE boolean true quarkus.datasource."datasource-name".reactive.cache-prepared-statements Whether prepared statements should be cached on the client side. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_CACHE_PREPARED_STATEMENTS boolean false quarkus.datasource."datasource-name".reactive.url The datasource URLs. If multiple values are set, this datasource will create a pool with a list of servers instead of a single server. The pool uses round-robin load balancing for server selection during connection establishment. Note that certain drivers might not accommodate multiple values in this context. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_URL list of string quarkus.datasource."datasource-name".reactive.max-size The datasource pool maximum size. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MAX_SIZE int 20 quarkus.datasource."datasource-name".reactive.event-loop-size When a new connection object is created, the pool assigns it an event loop. When #event-loop-size is set to a strictly positive value, the pool assigns as many event loops as specified, in a round-robin fashion. By default, the number of event loops configured or calculated by Quarkus is used. If #event-loop-size is set to zero or a negative value, the pool assigns the current event loop to the new connection. 
Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_EVENT_LOOP_SIZE int quarkus.datasource."datasource-name".reactive.trust-all Whether all server certificates should be trusted. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_ALL boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-pem PEM Trust config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PEM boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-pem.certs Comma-separated list of the trust certificate files (Pem format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PEM_CERTS list of string quarkus.datasource."datasource-name".reactive.trust-certificate-jks JKS config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_JKS boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_JKS_PATH string quarkus.datasource."datasource-name".reactive.trust-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_JKS_PASSWORD string quarkus.datasource."datasource-name".reactive.trust-certificate-pfx PFX config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PFX boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PFX_PATH string quarkus.datasource."datasource-name".reactive.trust-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PFX_PASSWORD string quarkus.datasource."datasource-name".reactive.key-certificate-pem PEM Key/cert config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PEM boolean false quarkus.datasource."datasource-name".reactive.key-certificate-pem.keys Comma-separated list of the path to the key files (Pem format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PEM_KEYS list of string quarkus.datasource."datasource-name".reactive.key-certificate-pem.certs Comma-separated list of the path to the certificate files (Pem format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PEM_CERTS list of string quarkus.datasource."datasource-name".reactive.key-certificate-jks JKS config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_JKS boolean false quarkus.datasource."datasource-name".reactive.key-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_JKS_PATH string quarkus.datasource."datasource-name".reactive.key-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_JKS_PASSWORD string quarkus.datasource."datasource-name".reactive.key-certificate-pfx PFX config is disabled by default. 
Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PFX boolean false quarkus.datasource."datasource-name".reactive.key-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PFX_PATH string quarkus.datasource."datasource-name".reactive.key-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PFX_PASSWORD string quarkus.datasource."datasource-name".reactive.reconnect-attempts The number of reconnection attempts when a pooled connection cannot be established on first try. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_RECONNECT_ATTEMPTS int 0 quarkus.datasource."datasource-name".reactive.reconnect-interval The interval between reconnection attempts when a pooled connection cannot be established on first try. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_RECONNECT_INTERVAL Duration PT1S quarkus.datasource."datasource-name".reactive.hostname-verification-algorithm The hostname verification algorithm to use in case the server's identity should be checked. Should be HTTPS , LDAPS or NONE . NONE is the default value and disables the verification. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_HOSTNAME_VERIFICATION_ALGORITHM string NONE quarkus.datasource."datasource-name".reactive.idle-timeout The maximum time a connection remains unused in the pool before it is closed. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_IDLE_TIMEOUT Duration no timeout quarkus.datasource."datasource-name".reactive.max-lifetime The maximum time a connection remains in the pool, after which it will be closed upon return and replaced as necessary. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MAX_LIFETIME Duration no timeout quarkus.datasource."datasource-name".reactive.shared Set to true to share the pool among datasources. There can be multiple shared pools distinguished by name, when no specific name is set, the __vertx.DEFAULT name is used. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_SHARED boolean false quarkus.datasource."datasource-name".reactive.name Set the pool name, used when the pool is shared among datasources, otherwise ignored. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_NAME string quarkus.datasource."datasource-name".reactive.additional-properties."property-key" Other unspecified properties to be passed through the Reactive SQL Client directly to the database when new connections are initiated. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_ADDITIONAL_PROPERTIES__PROPERTY_KEY_ Map<String,String> About the Duration format To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information. You can also use a simplified format, starting with a number: If the value is only a number, it represents time in seconds. If the value is a number followed by ms , it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: If the value is a number followed by h , m , or s , it is prefixed with PT . If the value is a number followed by d , it is prefixed with P . 1.4.5.1. 
Reactive MariaDB/MySQL specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default Additional named datasources Type Default quarkus.datasource.reactive.mysql.charset quarkus.datasource."datasource-name".reactive.mysql.charset Charset for connections. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_CHARSET string quarkus.datasource.reactive.mysql.collation quarkus.datasource."datasource-name".reactive.mysql.collation Collation for connections. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_COLLATION string quarkus.datasource.reactive.mysql.ssl-mode quarkus.datasource."datasource-name".reactive.mysql.ssl-mode Desired security state of the connection to the server. See MySQL Reference Manual . Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_SSL_MODE disabled , preferred , required , verify-ca , verify-identity disabled quarkus.datasource.reactive.mysql.connection-timeout quarkus.datasource."datasource-name".reactive.mysql.connection-timeout Connection timeout in seconds Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_CONNECTION_TIMEOUT int quarkus.datasource.reactive.mysql.authentication-plugin quarkus.datasource."datasource-name".reactive.mysql.authentication-plugin The authentication plugin the client should use. By default, it uses the plugin name specified by the server in the initial handshake packet. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_AUTHENTICATION_PLUGIN default , mysql-clear-password , mysql-native-password , sha256-password , caching-sha2-password default quarkus.datasource.reactive.mysql.pipelining-limit quarkus.datasource."datasource-name".reactive.mysql.pipelining-limit The maximum number of inflight database commands that can be pipelined. By default, pipelining is disabled. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_PIPELINING_LIMIT int quarkus.datasource.reactive.mysql.use-affected-rows quarkus.datasource."datasource-name".reactive.mysql.use-affected-rows Whether to return the number of rows matched by the WHERE clause in UPDATE statements, instead of the number of rows actually changed. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_USE_AFFECTED_ROWS boolean false 1.4.5.2. Reactive Microsoft SQL server-specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default Datasources Type Default quarkus.datasource.reactive.mssql.packet-size quarkus.datasource."datasource-name".reactive.mssql.packet-size The desired size (in bytes) for TDS packets. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MSSQL_PACKET_SIZE int quarkus.datasource.reactive.mssql.ssl quarkus.datasource."datasource-name".reactive.mssql.ssl Whether SSL/TLS is enabled. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MSSQL_SSL boolean false 1.4.5.3. Reactive Oracle-specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default Datasources Type Default 1.4.5.4. 
Reactive PostgreSQL-specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default Datasources Type Default quarkus.datasource.reactive.postgresql.pipelining-limit quarkus.datasource."datasource-name".reactive.postgresql.pipelining-limit The maximum number of inflight database commands that can be pipelined. Environment variable: QUARKUS_DATASOURCE_REACTIVE_POSTGRESQL_PIPELINING_LIMIT int quarkus.datasource.reactive.postgresql.ssl-mode quarkus.datasource."datasource-name".reactive.postgresql.ssl-mode SSL operating mode of the client. See Protection Provided in Different Modes . Environment variable: QUARKUS_DATASOURCE_REACTIVE_POSTGRESQL_SSL_MODE disable , allow , prefer , require , verify-ca , verify-full disable quarkus.datasource.reactive.postgresql.use-layer7-proxy quarkus.datasource."datasource-name".reactive.postgresql.use-layer7-proxy Level 7 proxies can load balance queries on several connections to the actual database. When it happens, the client can be confused by the lack of session affinity and unwanted errors can happen like ERROR: unnamed prepared statement does not exist (26000). See Using a level 7 proxy Environment variable: QUARKUS_DATASOURCE_REACTIVE_POSTGRESQL_USE_LAYER7_PROXY boolean false 1.4.6. Reactive datasource URL reference 1.4.6.1. DB2 db2://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example db2://dbuser:[email protected]:50000/mydb Currently, the client supports the following parameter keys: host port user password database Note Configuring parameters in the connection URL overrides the default properties. 1.4.6.2. Microsoft SQL server sqlserver://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example sqlserver://dbuser:[email protected]:1433/mydb Currently, the client supports the following parameter keys: host port user password database Note Configuring parameters in the connection URL overrides the default properties. 1.4.6.3. MySQL / MariaDB mysql://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example mysql://dbuser:[email protected]:3211/mydb Currently, the client supports the following parameter keys (case-insensitive): host port user password schema socket useAffectedRows Note Configuring parameters in the connection URL overrides the default properties. 1.4.6.4. Oracle 1.4.6.4.1. EZConnect format oracle:thin:@[[protocol:]//]host[:port][/service_name][:server_mode][/instance_name][?connection properties] Example oracle:thin:@mydbhost1:5521/mydbservice?connect_timeout=10sec 1.4.6.4.2. TNS alias format oracle:thin:@<alias_name>[?connection properties] Example oracle:thin:@prod_db?TNS_ADMIN=/work/tns/ 1.4.6.5. PostgreSQL postgresql://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example postgresql://dbuser:[email protected]:5432/mydb Currently, the client supports: Following parameter keys: host port user password dbname sslmode Additional properties, such as: application_name fallback_application_name search_path options Note Configuring parameters in the connection URL overrides the default properties. | [
"quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test quarkus.datasource.jdbc.max-size=16",
"quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20",
"quarkus.datasource.db-kind=h2",
"quarkus.datasource.username=<your username> quarkus.datasource.password=<your password>",
"./mvnw quarkus:add-extension -Dextensions=\"jdbc-postgresql\"",
"./mvnw quarkus:add-extension -Dextensions=\"agroal\"",
"quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test",
"quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver",
"quarkus.datasource.db-kind=other quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver quarkus.datasource.jdbc.url=jdbc:oracle:thin:@192.168.1.12:1521/ORCL_SVC quarkus.datasource.username=scott quarkus.datasource.password=tiger",
"@Inject AgroalDataSource defaultDataSource;",
"quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20",
"%prod.quarkus.datasource.reactive.url=postgresql:///your_database %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test",
"quarkus.datasource.jdbc=false",
"quarkus.datasource.reactive=false",
"quarkus.datasource.db-kind=h2 quarkus.datasource.username=username-default quarkus.datasource.jdbc.url=jdbc:h2:mem:default quarkus.datasource.jdbc.max-size=13 quarkus.datasource.users.db-kind=h2 quarkus.datasource.users.username=username1 quarkus.datasource.users.jdbc.url=jdbc:h2:mem:users quarkus.datasource.users.jdbc.max-size=11 quarkus.datasource.inventory.db-kind=h2 quarkus.datasource.inventory.username=username2 quarkus.datasource.inventory.jdbc.url=jdbc:h2:mem:inventory quarkus.datasource.inventory.jdbc.max-size=12",
"@Inject AgroalDataSource defaultDataSource; @Inject @DataSource(\"users\") AgroalDataSource usersDataSource; @Inject @DataSource(\"inventory\") AgroalDataSource inventoryDataSource;",
"quarkus.datasource.\"pg\".db-kind=postgres quarkus.datasource.\"pg\".active=false quarkus.datasource.\"pg\".jdbc.url=jdbc:postgresql:///your_database quarkus.datasource.\"oracle\".db-kind=oracle quarkus.datasource.\"oracle\".active=false quarkus.datasource.\"oracle\".jdbc.url=jdbc:oracle:///your_database",
"%pg.quarkus.hibernate-orm.\"pg\".active=true %pg.quarkus.datasource.\"pg\".active=true Add any pg-related runtime configuration here, prefixed with \"%pg.\" %oracle.quarkus.hibernate-orm.\"oracle\".active=true %oracle.quarkus.datasource.\"oracle\".active=true Add any pg-related runtime configuration here, prefixed with \"%pg.\"",
"public class MyProducer { @Inject DataSourceSupport dataSourceSupport; @Inject @DataSource(\"pg\") AgroalDataSource pgDataSourceBean; @Inject @DataSource(\"oracle\") AgroalDataSource oracleDataSourceBean; @Produces @ApplicationScoped public AgroalDataSource dataSource() { if (dataSourceSupport.getInactiveNames().contains(\"pg\")) { return oracleDataSourceBean; } else { return pgDataSourceBean; } } }",
"Caused by: java.sql.SQLException: Exception in association of connection to existing transaction at io.agroal.narayana.NarayanaTransactionIntegration.associate(NarayanaTransactionIntegration.java:130) Caused by: java.sql.SQLException: Failed to enlist. Check if a connection from another datasource is already enlisted to the same transaction at io.agroal.narayana.NarayanaTransactionIntegration.associate(NarayanaTransactionIntegration.java:121)",
"quarkus.datasource.\"datasource-name\".health-exclude=true",
"enable tracing quarkus.datasource.jdbc.telemetry=true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configure_data_sources/datasources |
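As a closing illustration of the reactive options documented above, the following minimal sketch configures a reactive PostgreSQL datasource with mutual TLS using PEM files. The property names are taken from this reference, while the host name, database name, credentials, and file paths are illustrative assumptions:
# reactive PostgreSQL datasource with client-side TLS material (illustrative values)
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=quarkus
quarkus.datasource.password=quarkus
quarkus.datasource.reactive.url=postgresql://db.example.com:5432/mydb
quarkus.datasource.reactive.max-size=20
# check the server identity and require full certificate verification
quarkus.datasource.reactive.hostname-verification-algorithm=HTTPS
quarkus.datasource.reactive.postgresql.ssl-mode=verify-full
# CA used to validate the server certificate
quarkus.datasource.reactive.trust-certificate-pem=true
quarkus.datasource.reactive.trust-certificate-pem.certs=/etc/ssl/ca.pem
# client key/certificate pair for mutual TLS
quarkus.datasource.reactive.key-certificate-pem=true
quarkus.datasource.reactive.key-certificate-pem.keys=/etc/ssl/client-key.pem
quarkus.datasource.reactive.key-certificate-pem.certs=/etc/ssl/client-cert.pem
In this sketch, the PEM trust certificate validates the server certificate, the key/certificate pair provides the client identity for mutual TLS, and setting the hostname verification algorithm together with ssl-mode=verify-full ensures that the server's identity is checked.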
Appendix A. List of tickets by component | Appendix A. List of tickets by component Bugzilla and JIRA tickets are listed in this document for reference. The links lead to the release notes in this document that describe the tickets. Component Tickets 389-ds-base Jira:RHEL-15907 , Jira:RHEL-5142 , Jira:RHEL-5133 , Jira:RHEL-5130 , Jira:RHEL-16830 , Jira:RHEL-17175 , Jira:RHEL-5111 NetworkManager Jira:RHEL-1441 , Jira:RHEL-1471 , Jira:RHEL-16470 , Jira:RHEL-1469 , Jira:RHEL-24337 , Jira:RHEL-5852 , Bugzilla:1894877 , Jira:RHEL-17619 Release Notes Jira:RHELDOCS-17841 , Jira:RHELDOCS-16861 , Jira:RHELDOCS-16760 , Jira:RHELDOCS-17520 , Jira:RHELDOCS-17803 , Jira:RHELDOCS-16756 , Jira:RHELDOCS-16612 , Jira:RHELDOCS-17102 , Jira:RHELDOCS-17166 , Jira:RHELDOCS-17309 , Jira:RHELDOCS-17545 , Jira:RHELDOCS-17518 , Jira:RHELDOCS-17989 , Jira:RHELDOCS-17702 , Jira:RHELDOCS-17917 , Jira:RHELDOCS-16979 anaconda Jira:RHEL-11384 , Jira:RHEL-13150 , Jira:RHEL-4766 , Jira:RHEL-5638 , Jira:RHEL-10216 , Jira:RHEL-2250 , Jira:RHEL-17205 , Bugzilla:2127473 , Bugzilla:2050140 , Bugzilla:1877697 , Jira:RHEL-4707 , Jira:RHEL-4711 , Bugzilla:1997832 , Jira:RHEL-4741 , Bugzilla:2115783 , Jira:RHEL-4762 , Jira:RHEL-4737 , Jira:RHEL-9633 ansible-collection-microsoft-sql Jira:RHEL-16342 , Jira:RHEL-19092 , Jira:RHEL-19091 , Jira:RHEL-3540 ansible-freeipa Jira:RHEL-4962 , Jira:RHEL-16934 , Jira:RHEL-16939 , Jira:RHEL-19134 , Jira:RHEL-19130 audit Jira:RHEL-14896 bacula Jira:RHEL-6856 bcc Jira:RHEL-16325 bind Bugzilla:1984982 boom-boot Jira:RHEL-16813 bootc-image-builder-container Jira:RHEL-34054 certmonger Jira:RHEL-22302 chrony Jira:RHEL-6522 clang Jira:RHEL-9346 cloud-init Jira:RHEL-7278 , Jira:RHEL-7311 , Jira:RHEL-12122 cmake Jira:RHEL-7393 cockpit-appstream Bugzilla:2030836 cockpit-machines Jira:RHEL-17434 , Bugzilla:2173584 , Jira:RHEL-31993 crash Jira:RHEL-9009 createrepo_c Bugzilla:2056318 crypto-policies Jira:RHEL-15925 , Jira:RHEL-2735 cyrus-sasl Bugzilla:1995600 device-mapper-multipath Jira:RHEL-6678 , Jira:RHEL-1729 , Jira:RHEL-4998 , Jira:RHEL-986 , Jira:RHEL-1830 , Jira:RHEL-17234 , Bugzilla:2033080 , Bugzilla:2011699 , Bugzilla:1926147 distribution Jira:RHEL-17089 , Jira:RHEL-6973 , Jira:RHEL-18157 , Jira:RHEL-22385 dnf Bugzilla:2073510 dnf-plugins-core Jira:RHEL-4600 edk2 Bugzilla:1935497 elfutils Jira:RHEL-12489 fapolicyd Bugzilla:2054740 , Jira:RHEL-24345 , Jira:RHEL-520 firewalld Jira:RHEL-427 , Jira:RHEL-14485 , Jira:RHEL-17708 gcc Jira:RHEL-17638 gcc-toolset-13-binutils Jira:RHEL-23798 gcc-toolset-13-gcc Jira:RHEL-16998 gimp Bugzilla:2047161 git Jira:RHEL-17100 git-lfs Jira:RHEL-17101 glibc Jira:RHEL-14383 , Jira:RHEL-17157 , Jira:RHEL-2491 , Jira:RHEL-19862 , Jira:RHEL-16643 , Jira:RHEL-12362 , Jira:RHEL-3397 , Jira:RHEL-2123 gnupg2 Bugzilla:2070722 gnutls Jira:RHEL-14891 , Bugzilla:2108532 golang Jira:RHEL-11871 , Bugzilla:2111072 , Bugzilla:2092016 grafana Jira:RHEL-7505 grub2 Jira:RHEL-10288 gtk3 Jira:RHEL-11924 httpd Jira:RHEL-6600 ipa Jira:RHEL-11652 , Jira:RHEL-23377 , Bugzilla:1513934 , Jira:RHEL-9984 , Jira:RHEL-22313 , Jira:RHEL-12143 , Bugzilla:2084180 , Bugzilla:2094673 , Jira:RHEL-12154 , Jira:RHEL-4955 iptables Jira:RHEL-14147 jmc-core Bugzilla:1980981 kdump-anaconda-addon Jira:RHEL-11196 kernel Jira:RHEL-11597 , Bugzilla:2041883 , Bugzilla:1613522 , Bugzilla:1995338 , Bugzilla:1570255 , Bugzilla:2177256 , Bugzilla:2178699 , Bugzilla:2023416 , Bugzilla:2021672 , Bugzilla:2027304 , Bugzilla:1955275 , Bugzilla:2142102 , Bugzilla:2040643 , Bugzilla:2186375 , Bugzilla:2183538 , 
Bugzilla:2206599 , Bugzilla:2167783 , Bugzilla:2000616 , Bugzilla:2013650 , Bugzilla:2132480 , Bugzilla:2059545 , Bugzilla:2005173 , Bugzilla:2128610 , Bugzilla:2129288 , Bugzilla:2013884 , Bugzilla:2149989 kernel / BPF Jira:RHEL-10691 kernel / Crypto Jira:RHEL-20145 kernel / DMA Engine Jira:RHEL-10097 kernel / Debugging-Tracing / rtla Jira:RHEL-10079 kernel / Kernel-Core Jira:RHEL-25967 kernel / Networking / IPSec Jira:RHEL-1015 kernel / Networking / NIC Drivers Jira:RHEL-9308 , Jira:RHEL-24618 , Jira:RHEL-9897 kernel / Networking / Netfilter Jira:RHEL-16630 kernel / Networking / Protocol / tcp Jira:RHEL-21223 , Jira:RHEL-5736 kernel / Platform Enablement / NVMe Jira:RHEL-21545 , Jira:RHEL-14751 , Jira:RHEL-8171 , Jira:RHEL-8164 kernel / Platform Enablement / ppc64 Jira:RHEL-15404 kernel / Security / TPM Jira:RHEL-18985 kernel / Storage / Device Mapper / Crypt Jira:RHEL-23572 kernel / Storage / Multiple Devices (MD) Jira:RHEL-30730 kernel / Storage / Storage Drivers Jira:RHEL-8466 , Jira:RHEL-8104 , Jira:RHEL-25730 kernel / Virtualization Jira:RHEL-1138 kernel / Virtualization / KVM Jira:RHEL-2815 , Jira:RHEL-13007 , Jira:RHEL-7212 , Jira:RHEL-26152 , Jira:RHEL-10019 kernel-rt Bugzilla:2181571 kernel-rt / Other Jira:RHEL-9318 kexec-tools Bugzilla:2113873 , Bugzilla:2064708 keylime Jira:RHEL-11867 , Jira:RHEL-1518 kmod Bugzilla:2103605 kmod-kvdo Jira:RHEL-8354 krb5 Jira:RHEL-4902 , Bugzilla:2060798 , Jira:RHEL-4875 , Jira:RHEL-4889 , Bugzilla:2016312 , Jira:RHEL-4888 libabigail Jira:RHEL-16629 libdnf Jira:RHEL-11238 libkcapi Jira:RHEL-15298 , Jira:RHEL-5367 libotr Bugzilla:2086562 librepo Jira:RHEL-11240 libreswan Jira:RHEL-12278 librhsm Jira:RHEL-14224 libsepol Jira:RHEL-16233 libvirt Bugzilla:2143158 , Bugzilla:2078693 libvirt / General Jira:RHEL-7568 , Jira:RHEL-7043 libvirt / Storage Jira:RHEL-7528 , Jira:RHEL-7416 libxcrypt Bugzilla:2034569 libzip Jira:RHEL-17567 linuxptp Jira:RHEL-2026 llvm-toolset Jira:RHEL-9283 lvm2 Jira:RHEL-8357 , Bugzilla:2038183 make Jira:RHEL-22829 mariadb Jira:RHEL-3638 maven Jira:RHEL-13046 mysql Bugzilla:1991500 nettle Jira:RHEL-14890 nfs-utils Bugzilla:2081114 nftables Jira:RHEL-5980 , Jira:RHEL-14191 nginx Jira:RHEL-14713 nmstate Jira:RHEL-1434 , Jira:RHEL-1605 , Jira:RHEL-1438 , Jira:RHEL-19142 , Jira:RHEL-1420 , Jira:RHEL-1425 nvme-cli Jira:RHEL-1492 nvme-stas Bugzilla:1893841 open-vm-tools Bugzilla:2037657 opencryptoki Jira:RHEL-11412 opensc Jira:RHEL-4079 openscap Bugzilla:2161499 openslp Jira:RHEL-6995 openssh Jira:RHEL-5222 , Jira:RHEL-2469 , Bugzilla:2056884 openssl Jira:RHEL-23474 , Jira:RHEL-17193 , Bugzilla:2168665 , Bugzilla:1975836 , Bugzilla:1681178 , Bugzilla:1685470 osbuild Jira:RHEL-4655 osbuild-composer Bugzilla:2173928 , Jira:RHEL-7999 , Jira:RHEL-4649 oscap-anaconda-addon Jira:RHEL-1824 , Jira:RHELPLAN-44202 p11-kit Jira:RHEL-14834 papi Jira:RHEL-9333 pause-container Bugzilla:2106816 pcp Jira:RHEL-2317 pcs Jira:RHEL-7672 , Jira:RHEL-7582 , Jira:RHEL-7724 , Jira:RHEL-7746 , Jira:RHEL-7744 , Jira:RHEL-7669 , Jira:RHEL-7730 php Jira:RHEL-14699 pki-core Jira:RHELPLAN-145900 podman Jira:RHELPLAN-167829 , Jira:RHELPLAN-167796 , Jira:RHELPLAN-167823 , Jira:RHELPLAN-168180 , Jira:RHELPLAN-168185 , Jira:RHELPLAN-154436 , Bugzilla:2069279 policycoreutils Jira:RHEL-24462 , Jira:RHEL-25263 postgresql Jira:RHEL-3635 procps-ng Jira:RHEL-16278 python3.11-lxml Bugzilla:2157708 qemu-kvm Jira:RHEL-11597 , Jira:RHEL-16695 , Bugzilla:1965079 , Bugzilla:1951814 , Bugzilla:2060839 , Bugzilla:2014229 , Jira:RHEL-7335 , Jira:RHEL-7336 , Bugzilla:1915715 , 
Jira:RHEL-13335 , Jira:RHEL-333 , Bugzilla:2176010 , Bugzilla:2058982 , Bugzilla:2073872 , Jira:RHEL-7478 , Jira:RHEL-32990 qemu-kvm / Devices Jira:RHEL-1220 qemu-kvm / Graphics Jira:RHEL-7135 qemu-kvm / Live Migration Jira:RHEL-13004 , Jira:RHEL-7096 , Jira:RHEL-7115 qemu-kvm / Networking Jira:RHEL-7337 , Jira:RHEL-21867 qemu-kvm / Storage / NBD Jira:RHEL-33440 realtime-tests Jira:RHEL-9910 rear Jira:RHEL-16864 , Jira:RHEL-10478 , Jira:RHEL-6984 , Jira:RHEL-17393 , Jira:RHEL-24847 restore Bugzilla:1997366 rhel-bootc-container Jira:RHEL-33208 rhel-system-roles Jira:RHEL-1535 , Jira:RHEL-16976 , Jira:RHEL-16542 , Jira:RHEL-16552 , Jira:RHEL-19579 , Jira:RHEL-17668 , Jira:RHEL-21133 , Jira:RHEL-16964 , Jira:RHEL-16541 , Jira:RHEL-15932 , Jira:RHEL-15439 , Jira:RHEL-18962 , Jira:RHEL-16212 , Jira:RHEL-15876 , Jira:RHEL-21117 , Jira:RHEL-16974 , Jira:RHEL-5972 , Jira:RHEL-15037 , Jira:RHEL-19046 , Jira:RHEL-18026 , Jira:RHEL-1683 , Jira:RHEL-3353 , Jira:RHEL-19043 , Jira:RHEL-19040 , Jira:RHEL-17875 , Jira:RHEL-5274 , Jira:RHEL-15870 , Jira:RHEL-15909 , Jira:RHEL-22228 , Jira:RHEL-25508 , Jira:RHEL-21401 , Jira:RHEL-22309 , Bugzilla:1999770 , Bugzilla:2123859 , Jira:RHEL-1172 , Bugzilla:2186218 rsyslog Jira:RHEL-943 , Jira:RHEL-937 , Jira:RHEL-5196 rteval Jira:RHEL-9912 rust Jira:RHEL-12963 s390utils Bugzilla:1932480 samba Jira:RHEL-16476 scap-security-guide Jira:RHEL-21425 , Jira:RHEL-1800 , Bugzilla:2038978 selinux-policy Jira:RHEL-12591 , Jira:RHEL-21452 , Jira:RHEL-1548 , Jira:RHEL-18219 , Jira:RHEL-14246 , Jira:RHEL-1551 , Jira:RHEL-11174 , Jira:RHEL-1553 , Jira:RHEL-14289 , Jira:RHEL-5032 , Jira:RHEL-15432 , Jira:RHEL-19051 , Jira:RHEL-11792 , Bugzilla:2064274 , Jira:RHEL-28814 sos Bugzilla:1869561 sssd Jira:SSSD-7015 , Bugzilla:1608496 sssd_kcm Jira:SSSD-7015 stratis-cli Jira:RHEL-2265 stratisd Jira:RHEL-12898 , Jira:RHEL-16736 stunnel Jira:RHEL-2468 subscription-manager Bugzilla:2163716 , Bugzilla:2136694 synce4l Jira:RHEL-10089 sysstat Jira:RHEL-12009 , Jira:RHEL-26275 systemd Bugzilla:2018112 , Jira:RHEL-6105 systemtap Jira:RHEL-12488 tigervnc Bugzilla:2060308 tuna Jira:RHEL-8859 tuned Bugzilla:2113900 udisks2 Bugzilla:2213769 unbound Bugzilla:2070495 vdo Jira:RHEL-30525 virt-v2v Bugzilla:2168082 virtio-win Jira:RHEL-11810 , Jira:RHEL-11366 , Jira:RHEL-1609 , Jira:RHEL-869 virtio-win / distribution Jira:RHEL-1517 , Jira:RHEL-1860 , Jira:RHEL-574 virtio-win / virtio-win-prewhql Jira:RHEL-1084 , Jira:RHEL-935 , Jira:RHEL-12118 , Jira:RHEL-1212 webkit2gtk3 Jira:RHEL-4157 xdp-tools Jira:RHEL-3382 other Jira:RHELDOCS-17369 , Jira:RHELDOCS-17263 , Jira:RHELDOCS-17060 , Jira:RHELDOCS-17056 , Jira:RHELDOCS-16721 , Jira:RHELDOCS-17372 , Jira:RHELDOCS-16970 , Jira:RHELPLAN-169666 , Jira:RHELDOCS-17000 , Jira:RHELDOCS-16241 , Jira:RHELDOCS-16955 , Jira:RHELDOCS-17053 , Jira:RHELDOCS-17792 , Jira:SSSD-6184 , Jira:RHELDOCS-17040 , Bugzilla:2020529 , Jira:RHELPLAN-27394 , Jira:RHELPLAN-27737 , Jira:RHELDOCS-16861 , Jira:RHELDOCS-17520 , Jira:RHELDOCS-17752 , Jira:RHELDOCS-17803 , Jira:RHELDOCS-17468 , Jira:RHELDOCS-17733 , Bugzilla:1927780 , Jira:RHELPLAN-110763 , Bugzilla:1935544 , Bugzilla:2089200 , Jira:RHELPLAN-99136 , Jira:RHELPLAN-103232 , Bugzilla:1899167 , Bugzilla:1979521 , Jira:RHELPLAN-100087 , Jira:RHELPLAN-100639 , Bugzilla:2058153 , Jira:RHELPLAN-113995 , Jira:RHELPLAN-98983 , Jira:RHELPLAN-131882 , Jira:RHELPLAN-139805 , Jira:RHELDOCS-16756 , Jira:RHELPLAN-153267 , Jira:RHELDOCS-16300 , Jira:RHELDOCS-16432 , Jira:RHELDOCS-16393 , Jira:RHELDOCS-16612 , Jira:RHELDOCS-17102 , 
Jira:RHELDOCS-17015 , Jira:RHELDOCS-18049 , Jira:RHELDOCS-17135 , Jira:RHELDOCS-17545 , Jira:RHELDOCS-17038 , Jira:RHELDOCS-17495 , Jira:RHELDOCS-17518 , Jira:RHELDOCS-17462 , Jira:RHELDOCS-18106 , Jira:RHELPLAN-157225 , Bugzilla:1640697 , Bugzilla:1697896 , Bugzilla:2047713 , Jira:RHELPLAN-96940 , Jira:RHELPLAN-117234 , Jira:RHELPLAN-119001 , Jira:RHELPLAN-119852 , Bugzilla:2077767 , Bugzilla:2053598 , Bugzilla:2082303 , Jira:RHELPLAN-121049 , Jira:RHELPLAN-157939 , Jira:RHELPLAN-109613 , Bugzilla:2160619 , Jira:RHELDOCS-18064 , Jira:RHELDOCS-16427 , Bugzilla:2173992 , Bugzilla:2185048 , Bugzilla:1970830 , Jira:RHELDOCS-16574 , Jira:RHELDOCS-17719 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.4_release_notes/list_of_tickets_by_component |
Chapter 1. Red Hat Quay release notes | The following sections detail y and z stream release information. 1.1. RHBA-2025:1079 - Red Hat Quay 3.13.4 release Issued 2025-02-20 Red Hat Quay release 3.13.4 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2025:1079 advisory . 1.2. RHBA-2025:0301 - Red Hat Quay 3.13.3 release Issued 2025-01-20 Red Hat Quay release 3.13.3 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2025:0301 advisory . 1.2.1. Red Hat Quay 3.13.3 bug fixes PROJQUAY-8336 . Previously, when using Red Hat Quay with managed Quay and Clair PostgreSQL databases, Red Hat Advanced Cluster Security would scan all running Quay pods and report High Image Vulnerability in Quay PostgreSQL database and Clair PostgreSQL database . This issue has been resolved. 1.3. RHBA-2024:10967 - Red Hat Quay 3.13.2 release Issued 2024-12-17 Red Hat Quay release 3.13.2 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2024:10967 advisory . 1.3.1. Red Hat Quay 3.13.2 new features With this release, a pull-through cache organization can now be created when using the Red Hat Quay v2 UI. For more information, see Using Red Hat Quay to proxy a remote registry . 1.3.2. Red Hat Quay 3.13.2 known issues When using the pull-through proxy feature in Red Hat Quay with quota management enabled, and the organization quota fills up, it is expected that Red Hat Quay removes the least recently used image to free up space for new cached entries. However, images pulled by digest are not evicted automatically when the quota is exceeded, which causes subsequent pull attempts to return a Quota has been exceeded on namespace error. As a temporary workaround, you can run a bash shell inside the Red Hat Quay database pod to make digest-pulled images visible for eviction with the following setting: update tag set hidden = 0; . For more information, see PROJQUAY-8071 . 1.3.3. Red Hat Quay 3.13.2 bug fixes PROJQUAY-8273 , PROJQUAY-6474 . When deploying Red Hat Quay with a custom HorizontalPodAutoscaler component and then setting the component to managed: false in the QuayRegistry custom resource definition (CRD), the Red Hat Quay Operator continuously terminates and resets the minReplicas value to 2 for mirror and clair components. To work around this issue, see Using unmanaged Horizontal Pod Autoscalers . PROJQUAY-8208 . Previously, Red Hat Quay would return a 501 error on repository or organization creation when the authorization type was set to OIDC and restricted users were set. This issue has been resolved. PROJQUAY-8269 . Previously, on the Red Hat Quay UI, the OAuth scopes page suggested that scopes could be applied to robot accounts. This was not the case. Wording on the OAuth scopes page of the UI has been fixed. 1.4. RHBA-2024:9478 - Red Hat Quay 3.13.1 release Issued 2024-11-18 Red Hat Quay release 3.13.1 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2024:9478 advisory . 1.5. Information about upgrading to 3.13.1 Previously, when attempting to upgrade to Red Hat Quay 3.13, if FIPS mode was enabled for your OpenShift Container Platform cluster with Clair enabled, Clair would not function in your cluster. This issue was resolved in version 3.13.1. Upgrading to Red Hat Quay 3.13 automatically upgrades users to version 3.13.1 so that this issue is avoided.
Additionally, if you are upgrading from 3.13 to 3.13.1 and FIPS was enabled, upgrading to 3.13.1 resolves the issue. ( PROJQUAY-8185 ) 1.5.1. Red Hat Quay 3.13.1 enhancements With the release of Red Hat Quay 3.13.1, Hitachi Content Platform (HCP) is now supported for use as a storage backend. This allows organizations to leverage HCP for scalable, secure, and reliable object storage within their Red Hat Quay registry deployments. For more information, see HCP Object Storage . 1.5.2. Red Hat Quay 3.13.1 known issues When using Hitachi Content Platform for your object storage, attempting to push an image with a large layer to a Red Hat Quay registry results in the following error: An error occurred (NoSuchUpload) when calling the CompleteMultipartUpload operation: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed. This is a known issue and will be fixed in a future version of Red Hat Quay. 1.5.3. Red Hat Quay 3.13.1 bug fixes PROJQUAY-8185 . Previously, when attempting to upgrade Red Hat Quay on OpenShift Container Platform to 3.13 with FIPS mode enabled, the upgrade would fail for deployments using Clair. This issue has been resolved. Upgrading to 3.13.1 does not fail for Red Hat Quay on OpenShift Container Platform using Clair with FIPS mode enabled. PROJQUAY-8024 . Previously, using Hitachi HCP v9.7 as your storage provider would return errors when attempting to pull images. This issue has been resolved. PROJQUAY-5086 . Previously, Red Hat Quay on OpenShift Container Platform would produce information about horizontal pod autoscalers (HPAs) for some components (for example, Clair , Redis , PostgreSQL , and ObjectStorage ) when they were unmanaged by the Operator. This issue has been resolved and information about HPAs is no longer reported for unmanaged components. 1.6. RHBA-2024:8408 - Red Hat Quay 3.13.0 release Issued 2024-10-30 Red Hat Quay release 3.13 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2024:8408 advisory. For the most recent compatibility matrix, see Quay Enterprise 3.x Tested Integrations . For information about the release cadence of Red Hat Quay, see the Red Hat Quay Life Cycle Policy . 1.7. Red Hat Quay documentation changes The following documentation changes have been made with the Red Hat Quay 3 release: The Red Hat Quay Builders feature that was originally documented in the Using Red Hat Quay guide has been moved into a new, dedicated book titled " Builders and image automation ". The Red Hat Quay Builders feature that was originally documented in the Red Hat Quay Operator features has been moved into a new, dedicated book titled " Builders and image automation ". A new book titled " Securing Red Hat Quay " has been created. This book covers SSL and TLS for Red Hat Quay, and adding additional certificate authorities (CAs) to your deployment. More content will be added to this book in the future. A new book titled " Managing access and permissions " has been created. This book covers topics related to access controls, repository visibility, and robot accounts by using the UI and the API. More content will be added to this book in the future. 1.8. Upgrading to Red Hat Quay 3.13 With Red Hat Quay 3.13, the volumeSize parameter has been implemented for use with the clairpostgres component of the QuayRegistry custom resource definition (CRD).
This replaces the volumeSize parameter that was previously used for the clair component of the same CRD. If your Red Hat Quay 3.12 QuayRegistry custom resource definition (CRD) implemented a volume override for the clair component, you must ensure that the volumeSize field is included under the clairpostgres component of the QuayRegistry CRD. Important Failure to move volumeSize from the clair component to the clairpostgres component will result in a failed upgrade to version 3.13. For example: spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size> For more information, see Upgrade Red Hat Quay . 1.9. Red Hat Quay new features and enhancements The following updates have been made to Red Hat Quay. 1.9.1. Red Hat Quay auto-pruning enhancements With the release of Red Hat Quay 3.10, a new auto-pruning feature was released. With that feature, Red Hat Quay administrators could set up auto-pruning policies on namespaces for both users and organizations so that image tags were automatically deleted based on specified criteria. In Red Hat Quay 3.11, this feature was enhanced so that auto-pruning policies could be set up on specified repositories. With Red Hat Quay 3.12, default auto-pruning policies could be set up at the registry level on new and existing configurations, which saved Red Hat Quay administrators time, effort, and storage by enforcing registry-wide rules. With the release of Red Hat Quay 3, the following enhancements have been made to the auto-pruning feature. 1.9.1.1. Tag specification patterns in auto-pruning policies Previously, the Red Hat Quay auto-pruning feature could not target or exclude specific image tags. With the release of Red Hat Quay 3, it is now possible to specify a regular expression, or regex, to match a subset of tags for both organization- and repository-level auto-pruning policies. This allows Red Hat Quay administrators to create more granular auto-pruning policies that target only certain image tags for removal. For more information, see Using regular expressions with auto-pruning . 1.9.1.2. Multiple auto-pruning policies Previously, Red Hat Quay only supported a single auto-pruning policy per organization and repository. With the release of Red Hat Quay 3, multiple auto-pruning policies can now be applied to an organization or a repository. These auto-pruning policies can be based on different tag naming (regex) patterns to cater for the different life cycles of images in the same repository or organization. This feature provides more flexibility when automating the image life cycle in your repository. Additional auto-pruning policies can be added on the Red Hat Quay v2 UI by clicking Add Policy on the Auto-Pruning Policies page. They can also be added by using the API. For more information about setting auto-prune policies, see Red Hat Quay auto-pruning overview . 1.9.2. Keyless authentication with robot accounts In earlier versions of Red Hat Quay, robot account tokens were valid for the lifetime of the token unless deleted or regenerated. Tokens that do not expire have security implications for users who do not want to store long-term passwords or manage the deletion or regeneration of authentication tokens. With Red Hat Quay 3, Red Hat Quay administrators are provided the ability to exchange Red Hat Quay robot account tokens for an external OIDC token. This allows robot accounts to leverage short-lived, or ephemeral tokens , that last one hour.
Ephemeral tokens are refreshed regularly and can be used to authenticate individual transactions. This feature greatly enhances the security of your Red Hat Quay registry by removing tokens after one hour, which mitigates the possibility of robot token exposure. For more information, see Keyless authentication with robot accounts . 1.10. Red Hat Quay on OpenShift Container Platform new features and enhancements The following updates have been made to Red Hat Quay on OpenShift Container Platform. 1.10.1. Support for certificate-based authentication between Red Hat Quay and PostgreSQL With this release, support for certificate-based authentication between Red Hat Quay and PostgreSQL has been added. This allows Red Hat Quay administrators to supply their own SSL/TLS certificates that can be used for client-side authentication with PostgreSQL or CloudSQL. This provides enhanced security and allows for easier automation of your Red Hat Quay registry. For more information, see Certificate-based authentication between Red Hat Quay and SQL . 1.10.2. Red Hat Quay v2 UI enhancements The following enhancements have been made to the Red Hat Quay v2 UI. 1.10.2.1. Robot federation selection A new configuration page, Set robot federation , has been added to the Red Hat Quay v2 UI. This can be found by navigating to your organization or repository's robot account, clicking the menu kebab, and then clicking Set robot federation . This page is used when configuring keyless authentication with robot accounts, and allows you to add multiple OIDC providers to a single robot account. For more information, see Keyless authentication with robot accounts . 1.11. New Red Hat Quay configuration fields The following configuration fields have been added to Red Hat Quay 3. 1.11.1. Disabling pushes to the Red Hat Quay registry configuration field In some cases, a read-only option for Red Hat Quay is not possible since it requires inserting a service key and other manual configuration changes. With the release of Red Hat Quay 3.13, a new configuration field has been added: DISABLE_PUSHES . When DISABLE_PUSHES is set to true , users are unable to push images or image tags to the registry when using the CLI. Most other registry operations continue as normal when this feature is enabled by using the Red Hat Quay UI. For example, changing tags, editing a repository, robot account creation and deletion, user creation, and so on are all possible by using the UI. When DISABLE_PUSHES is set to true , the Red Hat Quay garbage collector is disabled. As a result, when PERMANENTLY_DELETE_TAGS is enabled, using the Red Hat Quay UI to permanently delete a tag does not result in the immediate deletion of a tag. Instead, the tag stays in the repository until DISABLE_PUSHES is set to false , which re-enables the garbage collector. Red Hat Quay administrators should be aware of this caveat when using DISABLE_PUSHES and PERMANENTLY_DELETE_TAGS together. This field might be useful in some situations, such as when Red Hat Quay administrators want to calculate their registry's quota and disable image pushing until after calculation has completed. With this method, administrators can avoid putting the whole registry in read-only mode, which affects the database, so that most operations can still be done. Field Type Description DISABLE_PUSHES Boolean Disables pushes of new content to the registry while retaining all other functionality. Differs from read-only mode because the database is not set as read-only . Defaults to false .
Example DISABLE_PUSHES configuration field # ... DISABLE_PUSHES: true # ... 1.12. API endpoint enhancements 1.12.1. New autoPrunePolicy endpoints tagPattern and tagPatternMatches API parameters have been added to the following API endpoints: createOrganizationAutoPrunePolicy updateOrganizationAutoPrunePolicy createRepositoryAutoPrunePolicy updateRepositoryAutoPrunePolicy createUserAutoPrunePolicy updateUserAutoPrunePolicy These fields enhance the auto-pruning feature by allowing Red Hat Quay administrators more control over what images are pruned. The following table provides descriptions of these fields: Name Description Schema tagPattern optional Tags only matching this pattern (regex) will be pruned. string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern. boolean For example API commands, see Red Hat Quay auto-pruning overview . 1.12.2. New federated robot token API endpoints The following API endpoints have been added for the keyless authentication with robot accounts feature: GET oauth2/federation/robot/token . Use this API endpoint to return an expiring robot token using the robot identity federation mechanism. POST /api/v1/organization/{orgname}/robots/{robot_shortname}/federation . Use this API endpoint to create a federation configuration for the specified organization robot. 1.13. Red Hat Quay 3.13 notable technical changes Clair now requires its PostgreSQL database to be version 15. For standalone Red Hat Quay deployments, administrators must manually migrate their database over from PostgreSQL version 13 to version 15. For more information about this procedure, see Upgrading the Clair PostgreSQL database . For Red Hat Quay on OpenShift Container Platform deployments, this update is automatically handled by the Operator so long as your Clair PostgreSQL database is currently using version 13. 1.14. Red Hat Quay 3.13 known issues and limitations The following sections note known issues and limitations for Red Hat Quay 3. 1.14.1. Clair vulnerability report known issue When pushing Suse Enterprise Linux Images with HIGH image vulnerabilities, Clair 4.8.0 does not report these vulnerabilities. This is a known issue and will be fixed in a future version of Red Hat Quay. 1.14.2. FIPS mode known issue If FIPS mode is enabled for your OpenShift Container Platform cluster and you use Clair, you must not upgrade the Red Hat Quay Operator to version 3. If you upgrade, Clair will not function in your cluster. ( PROJQUAY-8185 ) 1.14.3. Registry auto-pruning known issues The following known issues apply to the auto-pruning feature. 1.14.3.1. Policy prioritization known issue Currently, the auto-pruning feature prioritizes the following order when configured: Method: creation_date + organization wide Method: creation_date + repository wide Method: number_of_tags + organization wide Method: number_of_tags + repository wide This means that the auto-pruner first prioritizes, for example, an organization-wide policy set to expire tags by their creation date before it prunes images by the number of tags that it has. There is a known issue when configuring a registry-wide auto-pruning policy. If Red Hat Quay administrators configure a number_of_tags policy before a creation_date policy, it is possible to prune more than the intended set for the number_of_tags policy. This might lead to situations where a repository removes certain image tags unexpectedly. This is not an issue for organization or repository-wide auto-prune policies. 
This known issue only exists at the registry level. It will be fixed in a future version of Red Hat Quay. 1.14.3.2. Unrecognizable auto-prune tag patterns When creating an auto-prune policy, the pruner cannot recognize \b and \B patterns. This is a common behavior with regular expression patterns, wherein \b and \B match empty strings. Red Hat Quay administrators should avoid using regex patterns that use \B and \b to avoid this issue. ( PROJQUAY-8089 ) 1.14.4. Red Hat Quay v2 UI known issues The Red Hat Quay team is aware of the following known issues on the v2 UI: PROJQUAY-6910 . The new UI can't group and stack the chart on usage logs PROJQUAY-6909 . The new UI can't toggle the visibility of the chart on usage log PROJQUAY-6904 . "Permanently delete" tag should not be restored on new UI PROJQUAY-6899 . The normal user can not delete organization in new UI when enable FEATURE_SUPERUSERS_FULL_ACCESS PROJQUAY-6892 . The new UI should not invoke not required stripe and status page PROJQUAY-6884 . The new UI should show the tip of slack Webhook URL when creating slack notification PROJQUAY-6882 . The new UI global readonly super user can't see all organizations and image repos PROJQUAY-6881 . The new UI can't show all operation types in the logs chart PROJQUAY-6861 . The new UI "Last Modified" of organization always show N/A after target organization's setting is updated PROJQUAY-6860 . The new UI update the time machine configuration of organization show NULL in usage logs PROJQUAY-6859 . The new UI remove image repo permission show "undefined" for organization name in audit logs PROJQUAY-6852 . "Tag manifest with the branch or tag name" option in build trigger setup wizard should be checked by default. PROJQUAY-6832 . The new UI should validate the OIDC group name when enable OIDC Directory Sync PROJQUAY-6830 . The new UI should show the sync icon when the team is configured sync team members from OIDC Group PROJQUAY-6829 . The new UI team member added to team sync from OIDC group should be audited in Organization logs page PROJQUAY-6825 . Build cancel operation log can not be displayed correctly in new UI PROJQUAY-6812 . The new UI the "performer by" is NULL of build image in logs page PROJQUAY-6810 . The new UI should highlight the tag name with tag icon in logs page PROJQUAY-6808 . The new UI can't click the robot account to show credentials in logs page PROJQUAY-6807 . The new UI can't see the operations types in log page when quay is in dark mode PROJQUAY-6770 . The new UI build image by uploading Docker file should support .tar.gz or .zip PROJQUAY-6769 . The new UI should not display message "Trigger setup has already been completed" after build trigger setup completed PROJQUAY-6768 . The new UI can't navigate back to current image repo from image build PROJQUAY-6767 . The new UI can't download build logs PROJQUAY-6758 . The new UI should display correct operation number when hover over different operation type PROJQUAY-6757 . The new UI usage log should display the tag expiration time as date format 1.15. Red Hat Quay bug fixes The following issues were fixed with Red Hat Quay 3: PROJQUAY-5681 . Previously, when configuring an image repository with Events and Notifications to receive a Slack notification for Push to Repository and Package Vulnerability Found , no notification was returned when a new critical image vulnerability was found. This issue has been resolved. PROJQUAY-7244 . Previously, it was not possible to filter for repositories under specific organizations.
This issue has been resolved, and you can now filter for repositories under specific organizations. PROJQUAY-7388 . Previously, when Red Hat Quay was configured with OIDC authentication using Microsoft Azure Entra ID and team sync was enabled, removing the team sync resulted in the usage logs chart displaying Undefined . This issue has been resolved. PROJQUAY-7430 . Some public container image registries, for example, Google Cloud Registry, generate longer passwords for the login. When this happens, Red Hat Quay could not mirror images from those registries because the password length exceeded the maximum allowed in the Red Hat Quay database. The actual length limit imposed by the encryption mechanism is lower than 9000 . This implies that while the database can hold up to 9000 characters, the effective limit during encryption is actually 6000 , and be calculated as follows: {Max Password Length} = {field\_max\_length} - {_RESERVED\_FIELD\_SPACE}. A password length of 6000 ensures compatibility with AWS ECR and most registries. PROJQUAY-7599 . Previously, attempting to delete a manifest using a tag name and the Red Hat Quay v2 API resulted in a 405 error code. This was because there was no delete_manifest_by_tagname operation in the API. This issue has been resolved. PROJQUAY-7606 . Users can now create a new team using the dashes ( - ) via the v2 UI. Previously, this could only be done using the API. PROJQUAY-7686 . Previously, the vulnerability page showed vertical scroll bars when provided URLs in the advisories were too big, which caused difficulties in reading information from the page. This issue has been resolved. PROJQUAY-7982 . There was a bug in the console service when using Quay.io for the first time. When attempting to create a user correlated with the console's user, clicking Confirm username refreshed the page and opened the same modal. This issue has been resolved. 1.16. Red Hat Quay feature tracker New features have been added to Red Hat Quay, some of which are currently in Technology Preview. Technology Preview features are experimental features and are not intended for production use. Some features available in releases have been deprecated or removed. Deprecated functionality is still included in Red Hat Quay, but is planned for removal in a future release and is not recommended for new deployments. For the most recent list of deprecated and removed functionality in Red Hat Quay, refer to Table 1.1. Additional details for more fine-grained functionality that has been deprecated and removed are listed after the table. Table 1.1. 
New features tracker Feature Quay 3.13 Quay 3.12 Quay 3.11 Keyless authentication with robot accounts General Availability - - Certificate-based authentication between Red Hat Quay and SQL General Availability - - Splunk HTTP Event Collector (HEC) support General Availability General Availability - Open Container Initiative 1.1 support General Availability General Availability - Reassigning an OAuth access token General Availability General Availability - Creating an image expiration notification General Availability General Availability - Team synchronization for Red Hat Quay OIDC deployments General Availability General Availability General Availability Configuring resources for managed components on OpenShift Container Platform General Availability General Availability General Availability Configuring AWS STS for Red Hat Quay , Configuring AWS STS for Red Hat Quay on OpenShift Container Platform General Availability General Availability General Availability Red Hat Quay repository auto-pruning General Availability General Availability General Availability FEATURE_UI_V2 Technology Preview Technology Preview Technology Preview 1.16.1. IBM Power, IBM Z, and IBM(R) LinuxONE support matrix Table 1.2. list of supported and unsupported features Feature IBM Power IBM Z and IBM(R) LinuxONE Allow team synchronization via OIDC on Azure Not Supported Not Supported Backing up and restoring on a standalone deployment Supported Supported Clair Disconnected Supported Supported Geo-Replication (Standalone) Supported Supported Geo-Replication (Operator) Supported Not Supported IPv6 Not Supported Not Supported Migrating a standalone to operator deployment Supported Supported Mirror registry Supported Supported PostgreSQL connection pooling via pgBouncer Supported Supported Quay config editor - mirror, OIDC Supported Supported Quay config editor - MAG, Kinesis, Keystone, GitHub Enterprise Not Supported Not Supported Quay config editor - Red Hat Quay V2 User Interface Supported Supported Quay Disconnected Supported Supported Repo Mirroring Supported Supported | [
"An error occurred (NoSuchUpload) when calling the CompleteMultipartUpload operation: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.",
"spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size>",
"DISABLE_PUSHES: true"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_release_notes/release-notes-313 |
Chapter 5. Deploying a virt-who Configuration | Chapter 5. Deploying a virt-who Configuration After you create a virt-who configuration, Satellite provides a script to automate the deployment process. The script installs virt-who and creates the individual and global virt-who configuration files. For Red Hat products, you must deploy each configuration file on the hypervisor specified in the file. For other products, you must deploy the configuration files on Satellite Server, Capsule Server, or a separate Red Hat Enterprise Linux server that is dedicated to running virt-who. To deploy the files on a hypervisor or Capsule Server, see Section 5.1, "Deploying a virt-who Configuration on a Hypervisor" . To deploy the files on Satellite Server, see Section 5.2, "Deploying a virt-who Configuration on Satellite Server" . To deploy the files on a separate Red Hat Enterprise Linux server, see Section 5.3, "Deploying a virt-who Configuration on a Separate Red Hat Enterprise Linux Server" . 5.1. Deploying a virt-who Configuration on a Hypervisor Use this procedure to deploy a virt-who configuration on the Red Hat hypervisor that you specified in the file. Global values apply only to this hypervisor. You can also use this procedure to deploy a vCenter or Hyper-V virt-who configuration on Capsule Server. Global configuration values apply to all virt-who configurations on the same Capsule Server, and are overwritten each time a new virt-who configuration is deployed. Prerequisites Register the hypervisor to Red Hat Satellite. If you are using Red Hat Virtualization Host (RHVH), update it to the latest version so that the minimum virt-who version is available. Virt-who is available by default on RHVH, but cannot be updated individually from the rhel-7-server-rhvh-4-rpms repository. Create a read-only virt-who user on the hypervisor. Create a virt-who configuration for your virtualization platform. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration. Click the Deploy tab. Under Configuration script , click Download the script . Copy the script to the hypervisor: Make the deployment script executable and run it: After the deployment is complete, delete the script: 5.2. Deploying a virt-who Configuration on Satellite Server Use this procedure to deploy a vCenter or Hyper-V virt-who configuration on Satellite Server. Global configuration values apply to all virt-who configurations on Satellite Server, and are overwritten each time a new virt-who configuration is deployed. Prerequisites Create a read-only virt-who user on the hypervisor or virtualization manager. If you are deploying a Hyper-V virt-who configuration, enable remote management on the Hyper-V hypervisor. Create a virt-who configuration for your virtualization platform. Procedure In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration. Under Hammer command , click Copy to clipboard . On Satellite Server, paste the Hammer command into your terminal. 5.3. Deploying a virt-who Configuration on a Separate Red Hat Enterprise Linux Server Use this procedure to deploy a vCenter or Hyper-V virt-who configuration on a dedicated Red Hat Enterprise Linux 7 server. The server can be physical or virtual. Global configuration values apply to all virt-who configurations on this server, and are overwritten each time a new virt-who configuration is deployed. 
Prerequisites Create a read-only virt-who user on the hypervisor or virtualization manager. If you are deploying a Hyper-V virt-who configuration, enable remote management on the Hyper-V hypervisor. Create a virt-who configuration for your virtualization platform. Procedure On the Red Hat Enterprise Linux server, install Satellite Server's CA certificate: Register the Red Hat Enterprise Linux server to Satellite Server: Open a network port for communication between virt-who and Satellite Server: Open a network port for communication between virt-who and each hypervisor or virtualization manager: VMware vCenter: TCP port 443 Microsoft Hyper-V: TCP port 5985 In the Satellite web UI, navigate to Infrastructure > Virt-who configurations . Click the name of the virt-who configuration file. Click the Deploy tab. Under Configuration script , click Download the script . Copy the script to the Red Hat Enterprise Linux server: Make the deployment script executable and run it: After the deployment is complete, delete the script: | [
"scp deploy_virt_who_config_1 .sh root@ hypervisor.example.com :",
"chmod +x deploy_virt_who_config_1 .sh sh deploy_virt_who_config_1 .sh",
"rm deploy_virt_who_config_1",
"rpm -ivh http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"subscription-manager register --org= organization_label --auto-attach",
"firewall-cmd --add-port=\"443/tcp\" firewall-cmd --add-port=\"443/tcp\" --permanent",
"scp deploy_virt_who_config_1 .sh root@ rhel.example.com :",
"chmod +x deploy_virt_who_config_1 .sh sh deploy_virt_who_config_1 .sh",
"rm deploy_virt_who_config_1"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_virtual_machine_subscriptions_in_red_hat_satellite/deploying-a-virt-who-configuration |
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Please include the Document URL , the section number and describe the issue . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/proc_providing-feedback-on-red-hat-documentation_dev-guide-messaging |
Vulnerability reporting with Clair on Red Hat Quay | Vulnerability reporting with Clair on Red Hat Quay Red Hat Quay 3 Vulnerability reporting with Clair on Red Hat Quay Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/vulnerability_reporting_with_clair_on_red_hat_quay/index |
Chapter 6. Designing the replication process | Chapter 6. Designing the replication process Replicating the directory information increases the availability and performance of the directory. Design the replication process to ensure the data is available when and where it is needed. 6.1. Introduction to replication Replication is the mechanism that automatically copies directory data from one Directory Server to another. Using replication, any directory tree or sub-tree stored in its own databases ( replicas ) can be copied between servers. The server holding the main copy of the information automatically copies any updates to all replicas. Replication provides a high-availability directory service and can distribute data geographically. The following is a list of replication benefits: Fault tolerance and failover Replicating directory trees to multiple servers ensures that your directory is available even if client applications cannot access a particular Directory Server because of hardware, software, or network problems. Clients are referred to another Directory Server for read and write operations. Note Failover for add , modify , and delete operations is possible only with multi-supplier replication . Load balancing Replicating the directory tree across servers reduces the access load on any given server resulting in improved server response times. Higher performance Replicating directory entries to a location close to users improves Directory Server performance. Local data management With replication, you can own and manage information locally while sharing it with other Directory Server across the enterprise. 6.1.1. Replication concepts When you consider implementing replication, answer the following fundamental questions: What information do you need to replicate? Which servers hold the main copy, or supplier replica, of that information? Which servers hold the read-only copy, or consumer replica, of that information. What happens when a consumer replica receives a modify request from a client application? To which server must the request be redirected? Learn about the concepts that provide understanding of how Directory Server implements replication: Replica Replication unit Suppliers and consumers Changelog Replication agreement 6.1.1.1. Replica A replica is a database that participates in replication. Directory Server supports the following types of replicas: Supplier replica (read-write) A read-write database that contains the main copy of the directory data. Only the supplier replica processes modify requests from directory clients. Consumer replica (read-only) A read-only database that contains another copy of the information held on the supplier replica. A consumer replica can process search requests from directory clients but refers modify requests to the supplier replica. Directory Server can manage several databases with different roles in replication. For example, you can have the dc=accounting,dc=example,dc=com suffix stored in a supplier replica, and the dc=sales,dc=example,dc=com suffix in a consumer replica. 6.1.1.2. Replication unit The smallest unit of replication is a suffix (namespace). The replication mechanism requires that one suffix corresponds to one database. Directory Server cannot replicate a suffix that is distributed over two or more databases using custom distribution logic. 6.1.1.3. Suppliers and consumers Supplier server A supplier server is a server that replicates updates to other servers. 
The supplier server maintains a changelog that contains records of each update operation. Consumer server A consumer server is a server that receives updates from other servers. A server can play the role of a supplier and consumer at the same time in the following situations: In cascading replication, when some servers play the role of a hub server . For more information, see Cascading replication . In multi-supplier replication, when several servers manage the supplier read-write replica. Each server sends and receives updates from other servers. For more information, see Multi-supplier replication . Note In Red Hat Directory Server, the supplier server always initiates replication, never the consumer. The supplier server must perform the following actions: Respond to read requests and update requests from directory clients. Maintain state information and a changelog for the replica. The supplier server is always responsible for recording changes made to the read-write replicas that it manages. This ensures that any changes are replicated to consumer servers. Initiate replication to consumer servers. The consumer server must perform the following actions: Respond to read requests. Refer update requests to a supplier server for the replica. When a consumer server receives a request to add, delete, or change an entry, the request is referred to a supplier server. The supplier server then performs the request and replicates these changes. In the special case of cascading replication, the hub server performs the following actions: Respond to read requests. Refer update requests to a supplier server. Initiate replication to consumer servers. 6.1.1.4. Changelog Every supplier server maintains a changelog . The changelog is a record of the modifications that have occurred on a supplier replica. The supplier server pushes these modifications to the replicas stored on other servers. When an entry is added, modified, or deleted, Directory Server records the performed LDAP operation in the changelog file. The changelog is intended only for internal use by the server. If you have applications that need to read the changelog, you need to use the Retro Changelog plug-in for backward compatibility. For details about changelog attributes, refer to Database attributes under cn=changelog,cn=database_name,cn=ldbm database,cn=plugins,cn=config . 6.1.1.5. Replication agreements Servers use replication agreements to define how replication is performed between two servers. A replication agreement describes replication between one supplier and one consumer. The agreement is configured on the supplier server and identifies the following information: The database to replicate. The consumer server to which the data is pushed. The time when replication can occur. The DN and credentials the supplier server must use to bind on the consumer, called the Replication Manager entry or supplier bind DN . How the connection is secured, for example, TLS, StartTLS, client authentication, SASL, or simple authentication. Attributes that you want to replicate. For more details about fractional replication, see Fractional replication . 6.1.2. Data consistency Data consistency refers to how closely the contents of replicated databases match each other at a given time. A supplier determines when consumers must be updated, and initiates replication. Replication can start only after consumers have been initialized. 
Directory Server can always keep replicas synchronized or schedule updates for a particular time of day or day in a week. Constantly synchronized replicas Constantly synchronized replicas provide better data consistency, however they increase network traffic because of frequent updates. Use constantly synchronized replicas when: You have a reliable, high-speed connection between servers. Your client applications mainly send search , read ,and compare to Directory Server and only a few update operations. Schedule updates of consumers Choose to schedule updates if your directory can have a lower level of data consistency and you want to lower the impact on network traffic. Use scheduled updates when: You have unreliable or periodically available network connections. Client applications mainly send add and modify operations to Directory Server. You need to reduce the connection costs. Data consistency in multi-supplier replication When you have multi-supplier replication, each supplier has loosely consistent replicas, because at any given time, suppliers can have differences in the stored data, even if the replicas are constantly synchronized. The main reasons for the loose consistency are the following: The propagation of modify operations between suppliers has latency. The supplier that serviced the modify operation does not wait for the second supplier to validate it before returning an "operation successful" message to the client. 6.2. Common replication scenarios You can use the following common scenarios to build the replication topology that best suits your needs: Single-supplier replication Multi-supplier replication Cascading replication Mixed environments 6.2.1. Single-supplier replication In the single-supplier replication scenario, a supplier server maintains the main copy of the directory data (read-write replica) and sends updates of this data to one or more consumer servers. All directory modifications occur on the read-write replica on the supplier server, and the consumer servers contain read-only replicas of the data. The supplier server maintains a changelog that records all the changes made to the supplier replica. The following diagram shows the single-supplier replication scenario: Figure 6.1. Single-supplier replication The total number of consumer servers that a single supplier server can manage depends on the speed of the networks and the total number of entries that are modified on a daily basis. 6.2.2. Multi-supplier replication In a multi-supplier replication environment, main copies of the same information can exist on multiple servers, and the directory data can be updated simultaneously in different locations. The changes that occur on each server are replicated to the other servers meaning that each server functions as both a supplier and a consumer. When the same data is modified on multiple servers, replication conflicts occur. Using a conflict resolution procedure, Directory Server uses the most recent change as the valid one. In a multi-supplier environment, each supplier needs to have replication agreements that point to consumers and other suppliers. For example, you configure replication with two suppliers, Supplier A and Supplier B , and two consumers, Consumer C and Consumer D . In addition, you decide that one supplier updates only one consumer. On Supplier A , you create a replication agreement that points to Supplier B and Consumer C . On Supplier B , you create a replication agreement that points to Supplier A and Consumer D . 
The following diagram illustrates the replication agreements: Figure 6.2. Multi-supplier replication with two suppliers Note Red Hat Directory Server supports a maximum of 20 supplier servers in any replication environment and an unlimited number of hub and consumer servers. Using many suppliers requires creating a range of replication agreements. In addition, each supplier can be configured in different topologies meaning that your Directory Server environment can have 20 different directory trees and even schema differences. Many other variables may have a direct impact on the topology selection. Suppliers can send updates to all other suppliers or to some subset of suppliers. When updates are sent to all suppliers, changes are propagated faster and the overall scenario has better failure tolerance. However, it increases the complexity of supplier configuration and introduces high network and high server demand. Sending updates to a subset of suppliers is much simpler to configure and reduces the network and server loads, but increases the risk of data loss if multiple server failures occur. Fully connected mesh topology The following diagram shows a fully connected mesh topology where four supplier servers replicate data to all other supplier servers. In total, twelve replication agreements exist between the four supplier servers because one replication agreement describes relations between only one supplier and one consumer. If you have 20 suppliers, then you need to create 380 replication agreements in total (20 suppliers with 19 agreements each). If the possibility of two or more servers failing at the same time is small or connection between certain suppliers is better, consider using a partly connected topology. Partly connected topology The following diagram shows a topology where each supplier server replicates data to two supplier servers. Only eight replication agreements exist between the four supplier servers compared to the example topology. 6.2.3. Cascading replication In a cascading replication scenario, a hub server receives updates from a supplier server and sends those updates to consumer servers. The hub server is a hybrid, because it holds a read-only replica, like a typical consumer server, and it also maintains a changelog like a typical supplier server. Hub server forward the supplier data to consumers and refer update requests from directory clients to suppliers. The following diagram shows the cascading replication scenario: Figure 6.3. Cascading replication scenario The following diagram shows how replicas are configured on each server (read-write or read-only) and which servers maintain the changelog. Figure 6.4. Replication traffic and changelogs in cascading replication Cascading replication is useful in the following cases: To balance heavy traffic loads. Because the suppliers in a replication topology manage all update traffic, it may put them under a heavy load to support replication traffic to consumers as well. You can redirect replication traffic to a hub that can service replication updates to a large number of consumers. To reduce connection costs by using a local hub supplier in geographically distributed environments. To increase the performance of your directory service. If you direct all read operations to the consumers, and all update operations to the supplier, you can remove all of the indexes (except system indexes) from your hub server. This will dramatically increase the speed of replication between the supplier and the hub server. 
Additional resources Using replication for load balancing Using replication for high availability Using replication for local availability 6.2.4. Mixed scenarios Any of the replication scenarios can be combined to suit the needs of the network and directory environment. One common combination is to use a multi-supplier configuration with a cascading configuration. The following diagram shows an example topology for a mixed scenario: Figure 6.5. Combined multi-supplier and cascading replication 6.3. Defining a replication strategy You can determine your replication strategy based on the services you want to provide. The following are common replication strategies that you can implement: If high availability is the primary concern, create a data center with multiple Directory Servers on a single site. Single-supplier replication provides read-failover, while multi-supplier replication provides write-failover. For more details, see Using replication for high availability . If local availability is the primary concern, use replication to distribute data geographically to Directory Servers in local offices around the world. You can maintain the main copy of all information in a single location, such as the company headquarters, or each local site can manage the parts of the directory that are relevant to them. For more details, see Using replication for local availability . To balance the load of requests that Directory Server manages and avoid network congestion, use replication configuration for load balancing. For more details, see Using replication for load balancing . If you use multiple consumers for different locations or sections of the company or if some servers are insecure, then use fractional replication to exclude sensitive or rarely modified information to maintain data integrity without compromising sensitive information. For more details, see Fractional replication . If the network is stretched across a wide geographical area with multiple Directory Servers at multiple sites and local data suppliers connected by multi-supplier replication, use the replication configuration for a wide-area network. For more details, see Replication across a wide-area network . To determine the replication strategy, start by performing a survey of the network, users, applications, and how they use the directory service. 6.3.1. Performing a replication survey Gather information about the network quality and usage to help define the replication strategy: The quality of the LANs and WANs that connect different buildings or remote sites and the amount of available bandwidth. The physical location of users, how many users are at each site, and their usage patterns for understanding how they intend to use the directory service. A site that manages human resource databases or financial information usually creates a heavier load on the directory than a site containing engineering staff that uses the directory only for telephone book purposes. The number of applications that access the directory and the relative percentage of read , search , and compare operations to write operations. If the messaging server uses the directory, find out how many operations it performs for each email message it handles. Other products that use the directory are typically products such as authentication applications or meta-directory applications. For each application, determine the type and frequency of operations that are performed in the directory. The number and size of the entries stored in the directory. 6.3.2. 
Replication resource requirements Replication requires resources. Consider the following resource requirements when defining the replication strategy: Disk usage On supplier servers, Directory Server writes a changelog after each update operation. Therefore, supplier servers that receive many update operations have higher disk usage. Server threads Each replication agreement creates a dedicated threads, and the CPU load depends on the replication throughput. File descriptors A server uses one file descriptor for a changelog and one file descriptor for each replication agreement. 6.3.3. Managing disk space required for multi-supplier replication In multi-supplier topologies, suppliers maintain additional logs required for replication, including the changelog of the directory edits, state information for updated entries, and tombstone entries for deleted entries. Because these log files can become very large, you must periodically clean up these files to avoid unnecessary usage of the disk space. On each server, you can use the following attributes to configure the replication logs maintenance in a replicated environment: The nsslapd-changelogmaxage attribute sets the maximum age of entries in the changelog. Once an entry is older than the maximum age value, Directory Server deletes the entry. Setting the maximum age of entries keeps the changelog from growing indefinitely. The nsslapd-changelogmaxentries attribute sets the maximum number of entries that the changelog can contain. Note that the nsslapd-changelogmaxentries value must be large enough to contain a complete set of directory information. Otherwise, multi-supplier replication may function with issues. The nsDS5ReplicaPurgeDelay sets the maximum age of tombstone (deleted) entries and state information in the changelog. Once a tombstone or state information entry is older than that age, Directory Server deletes the entry. The nsDS5ReplicaPurgeDelay value applies only to tombstone and state information entries, while nsslapd-changelogmaxage applies to every entry in the changelog, including directory modifications. The nsDS5ReplicaTombstonePurgeInterval attribute sets how often the server runs a purge operation to clean the tombstone and state entries out of the changelog. Ensure that the maximum age is longer than the longest replication update schedule. Otherwise, multi-supplier replication can have issues when updating replicas. 6.3.4. Using replication for high availability Use replication to prevent directory unavailability when a single server fails. At a minimum, replicate the local directory tree to at least one backup server. How often you replicate for fault tolerance depends on your requirements. However, base this decision on the quality of the hardware and networks used by your directory. Unreliable hardware requires more backup servers. Important Do not use replication as a replacement for a regular data backup policy because replication and backups have different purposes. For information on backing up the directory data, see Backing up and restoring Red Hat Directory Server . You can choose the following strategies to prevent directory unavailability: To guarantee write-failover for all directory clients, use a multi-supplier replication . To guarantee read-failover, use single-supplier replication . LDAP client applications are usually configured to search only one LDAP server. 
If you do not have a custom client application to rotate through LDAP servers located at different DNS hostnames, you can only configure your LDAP client application to look at a single DNS hostname for Directory Server. Therefore, you may need to use either DNS round-robins or network sorts to provide failover to your backup Directory Server. 6.3.5. Using replication for local availability You may need to use replication for local availability depending on the quality of your network and if your data is mission-critical. Use replication for local availability for the following reasons: You require a local main copy of the data. Large, multinational enterprises may need to maintain directory information of interest only to the employees in a certain country. In addition, having a local main copy of the data is important to any enterprise where interoffice politics dictate to control the data at a divisional or organizational level. You have unreliable or intermittently available network connections. International networks have unreliable WANs that cause intermittent network connections. You have periodic, extremely heavy network loads that impact Directory Server performance. Enterprises with aging networks may experience heavy network loads during normal business hours. You want to reduce the network load and the workload on the suppliers. Even if the network is reliable and available, you may want to reduce network costs. 6.3.6. Using replication for load balancing One of the main reasons to replicate directory data is to balance the workload of your network and to improve the directory performance. As directory entries are usually 1 KB in size, every directory search adds approximately 1 KB to your network load. If your directory users perform ten directory searches per day, then the increased network load for every directory user is around 10 KB per day. If you have a slow, heavily loaded, or unreliable WAN, you may need to replicate your directory tree to a local server. However, determine if locally available data is worth the cost of the increased network load caused by replication. If you replicate an entire directory tree to a remote site, you potentially add a larger load on your network in comparison to the traffic that results from users' searches. This is especially true if your directory changes frequently, yet you have only a few users at the remote site who perform a few directory searches per day. The following table compares the load impact of replicating a directory with one million entries, where 100,000 of the entries undergo daily change, with the load impact of having a small remote site of 100 employees that perform 10 searches per day each. Table 6.1. Impact of replication and remote searches on the network Load type Access/day Average entry size Load Replication 100,000 1KB 100MB/day Remote searches 1,000 1KB 1MB/day A compromise between making data available to local sites without overloading the network is to use scheduled replication. For more information on data consistency and replication schedules, see Data consistency . Additional resources Example of network load balancing Example of load balancing for improved performance Example replication strategy for a small site Example replication strategy for a large site 6.3.6.1. Example of network load balancing This example describes an enterprise that has offices in New York (NY) and Los Angeles (LA), and each office manages a separate sub-tree. 
The following diagram show how the enterprise manages the sub-trees: Figure 6.6. The enterprise NY and LA sub-trees Each office contains a high-speed network, but the connection between the two cities is unreliable. To balance the network load, use the following strategy: Select one server in each office to be the supplier server for the locally managed data. Replicate locally managed data from that supplier to the corresponding supplier server in the remote office. When you have main copy of the data in each location, users do not perform update and search operations over the unreliable connection. As a result, performance is optimized. Replicate the directory tree on each supplier server (including data supplied from the remote office) to at least one local Directory Server to ensure availability of the directory data. Configure cascading replication in each location with an increased number of consumers dedicated to search on the local data to provide further load balancing. The NY office generates more NY specific searches than LA specific searches. The example shows the NY office with three NY data consumers and one LA consumer. The LA office has three LA data consumers and one NY data consumer. Figure 6.7. Example of load balancing for the enterprise Additional resources About suffixes Cascading-replication Multi-supplier replication 6.3.6.2. Example of load balancing for improved performance This example describes an enterprise that has the following characteristics: The directory includes 1,500,000 entries in support of 1,000,000 users. Each user performs ten directory searches per day. A messaging server handles 25,000,000 mail messages per day and performs five directory searches for every mail message. Users are spread across four time zones. This equates to 135,000,000 directory searches per day in total: 1,000,000 users x 10 searches = 10,000,000 user searches per day 25,000,000 mails x 5 searches = 125,000,000 mail searches per day 10,000,000 + 125,000,000 = 135,000,000 all searches per day With an eight-hour business day and users spread across four time zone, the peak usage across four time zones extends to 12 hours. Therefore, the Directory Server must support 135,000,000 directory searches in a 12-hour day. This equates to 3,125 searches per second (135,000,000 / (60*60*12)). If the hardware that runs Directory Server supports 500 reads per second, you must use at least six or seven Directory Servers to support this load. For enterprises with a million directory users, add more Directory Servers for local availability purposes. In such a scenario, you can use the following replication strategy: Place two Directory Servers in a multi-supplier configuration in one city to handle all write traffic. This configuration assumes that you want a single point of control for all directory data. Use supplier servers to replicate to one or more hubs. Point the read , search , and compare requests at the consumers freeing the suppliers to handle only write requests. For more information about hubs, see Cascading-replication . Use the hub to replicate to local sites throughout the enterprise. Replicating to local sites helps balance the load on your servers and your network, and ensures high availability of directory data. At each site, replicate at least once to ensure high availability, at a minimum for read operations. Use DNS sort to ensure that local users always find a local Directory Server they can use for directory searches. 6.3.6.3. 
Example replication strategy for a small site The example enterprise has the following characteristics: The entire enterprise is contained within a single building. The building has a very fast (100 Mb per second) and lightly used network. The network is very stable, and the server hardware and OS platforms are reliable. A single server can handle the load easily. With such conditions, you need to replicate at least one time to ensure availability when you shut down the primary server for maintenance or hardware upgrades. Also, set up a DNS round-robin to improve LDAP connection performance when one of the Directory Servers becomes unavailable. 6.3.6.4. Example replication strategy for a large site The example enterprise from Example replication strategy for a small site has grown to a larger one and now has the following characteristics: The company is contained within two separate buildings, Building A and Building B. The connection between buildings is slow and very busy during normal business hours. Each building has a very fast (100 Mb per second) and lightly used network. The network within each building is very stable, and the server hardware and OS platforms are reliable. A single server can handle the load within one building easily. With such conditions, your replication strategy contains the following steps: Choose a single server in one of the two buildings to contain the main copy of the directory data. Place the server in the building that contains the largest number of people responsible for the main copy of the directory data, for example, Building A. Replicate at least once within Building A for high availability of directory data. Use a multi-supplier replication configuration to ensure write-failover. Create two replicas in the second Building B. If you do not need close consistency between the supplier and consumer servers, schedule replication to occur only during off-peak hours. 6.3.7. Fractional replication With fractional replication, you can choose a set of attributes that Directory Server does not replicate from a supplier to a consumer or another supplier. Therefore, a database can be replicated without replicating all the information that the database contains. Fractional replication is enabled and configured per replication agreement. Directory Server applies the exclusion of attributes equally to all entries. The excluded attributes always have no value on consumers. Therefore, a client performing a search against the consumer server never sees the excluded attributes, even if search filters explicitly specify these attributes. Use fractional replication in the following situations: A consumer server is connected using a slow network. Excluding rarely changed attributes or larger attributes, such as jpegPhoto , decreases network traffic. A consumer server is placed on an untrusted network such as the public Internet. Excluding sensitive attributes, such as telephone numbers, provides an extra level of protection that ensures no access to the sensitive attributes even if the server access control measures are defeated or the machine is compromised by an attacker. 6.3.8. Replication across a wide area network Wide area networks (WAN) typically have higher latency, a higher bandwidth-delay product, and lower speeds than local area networks. Directory Server supports efficient replication when a supplier and consumer are connected using a wide-area network. 
Previously, the replication protocols that Directory Server used were highly latency-sensitive, because the supplier sent only one update operation and then waited for a response from the consumer. This led to reduced throughput with higher latencies. Currently, a supplier sends many updates and entries to the consumer without waiting for a response, and the replication throughput is similar to throughput of a local area network. Consider the following performance and security issues when using a WAN: Use the Transport Layer Security (TLS) protocol to secure replication performed across a public network, such as the Internet. Use a T1 or faster internet connection for the network. Avoid constant synchronization between the servers when creating agreements for replication over a WAN. Replication traffic can consume a large portion of the bandwidth and slow down the overall network and internet connections. Additional resources Improving the latency in a multi-supplier replication environment 6.4. Using replication with other Directory Server features To design the replication strategy better, learn about the interaction between replication and other Directory Server features. 6.4.1. Replication and access control The directory stores access control instructions (ACIs) as attributes of entries and Directory Server replicates these ACIs together with other directory content. For example, to restrict access to the directory from a certain host, use only host-specific settings in the ACI. Otherwise, when the ACI is replicated to other servers, the access to the directory will be denied on all servers because Directory Server evaluates ACIs locally. For more information about designing access control for the directory, see Designing access control . 6.4.2. Replication and Directory Server plug-ins Replication works with most of the plug-ins delivered with Directory Server. However, the following plug-ins have limitations and exceptions in multi-supplier environments: Attribute Uniqueness plug-in The Attribute Uniqueness plug-in validates the uniqueness of attribute values added to entries only on a local server. For example, a company requires the mail attribute to be unique for user entries. When two different users are added with the same value for the mail attribute on two different supplier servers at the same time, Directory Server adds these users to the directory because no naming conflict and, as a result, no replication conflict occurs. The Attribute Uniqueness plug-in does not check replicated changes and, as a result, the mail attribute value becomes non-unique in the directory. Referential Integrity plug-in Referential integrity works with multi-supplier replication when it is enabled on only one supplier in the multi-supplier set. This ensures that referential integrity updates occur on only one of the supplier servers and are propagated to the other servers. Auto Membership and MemberOf plug-ins For these two plug-ins to work correctly in a replication environment, configure the plug-ins to perform updates locally on each server. Note By default, plug-ins are disabled, and you must enable them manually. Additional resources Server plug-in functionality reference Listing group membership in user entries 6.4.3. Replication and database links When you use chaining to distribute entries across the directory, the server containing the database link refers to a remote server that contains the actual data. In this environment, you cannot replicate the database link. 
However, you might replicate the database that contains the actual data on the remote server. 6.4.4. Schema replication In a replicated environment, the schema must be consistent across all of the servers that participate in replication. To ensure schema consistency, make schema modifications only on a single supplier server. If you configured replication between servers, schema replication occurs by default. Standard schema Directory Server uses the following scenario for the standard schema replication: Before pushing data to consumer servers, the supplier server checks if its version of the schema is the same as the version of the schema held on the consumer servers. If the schema entries on both the supplier and the consumers are the same, the replication operation proceeds. If the supplier schema version is more recent than the consumer schema version, the supplier server replicates its schema to the consumer before proceeding with the data replication. If the supplier schema version is older than the consumer schema version, the replication may fail or the server may return errors during replication because the schema on the consumer cannot support the new data. Therefore, never update the schema on a consumer server. You must maintain the schema only on a supplier server in a replicated topology . Directory Server replicates changes to the schema made by using dsconf command, the web console, LDAP modify operations, or made directly to the 99user.ldif file. If you make schema modifications on two supplier servers, consumers receive the data from the two suppliers, each with a different schema. The consumer applies the modifications of the supplier that has the more recent schema version. In this situation, the schema of the consumers always differs from one of the suppliers. To avoid this, always make sure you make schema modifications on one supplier only. You do not need to create special replication agreements to replicate a schema. However, the same Directory Server can hold supplier and consumer replicas. Therefore, always identify the server that functions as a supplier for the schema, and then set up replication agreements between this supplier and all other servers in the replication environment that will function as consumers for the schema information. For more information on standard schema files, see Standard schema . Custom schema Directory Server replicates only a custom schema to all consumers if you use the standard 99user.ldif file as your custom schema. Directory Server does not replicate other custom schema files, or changes to these files, even if you made changes through the web console or the dsconf command. If you use other custom files, you must copy these files to all servers in your topology manually after making changes on the supplier. Additional resources Customization of schema How Directory Server manages schema updates in a replication environment . | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/planning_and_designing_directory_server/assembly_designing-the-replication-process_designing-rhds |
Chapter 2. Ceph Dashboard installation and access | Chapter 2. Ceph Dashboard installation and access As a system administrator, you can install dashboard and access it for the first time. Red Hat Ceph Storage is installed graphically using the Cockpit web interface, or on the command line using the Ansible playbooks provided by the ceph-ansible RPM. Cockpit uses the same Ansible playbooks to install Ceph. Those playbooks install dashboard by default. Therefore, whether you directly use the Ansible playbooks, or use Cockpit to install Ceph, dashboard will be installed. Important Change the default dashboard password. By default, the password for dashboard is p@ssw0rd , which is insecure. You can change the default password before installing Ceph by updating dashboard_admin_password in the all.yml Ansible playbook before using the playbooks to install Ceph, or after install using the same playbook, or dashboard itself. For more information, see the Install Guide , Changing the dashboard password using the dashboard , or Changing the dashboard password using Ansible . 2.1. Installing dashboard using Cockpit Dashboard is installed by default when using the Cockpit web interface to install Red Hat Ceph Storage. You must set a host with the Metrics role for Grafana to be installed on. Prerequisites Consult the Installation Guide for full prerequisites. This procedure only highlights the steps relevant to the dashboard install. Procedure On the Hosts page, add a host and set the Metrics role. Click Add. Complete the remaining Cockpit Ceph Installer prompts. After the deploy process finishes, click the Complete button at the bottom right corner of the page. This opens a window which displays the output of the command ceph status , as well as dashboard access information. At the bottom of the Ceph Cluster Status window, the dashboard access information is displayed, including the URL, user name, and password. Take note of this information. For more information, see Installing Red Hat Ceph Storage using the Cockpit Web User Interface in the Installation Guide . 2.2. Installing dashboard using Ansible Dashboard is installed by default when installing Red Hat Ceph Storage using the Ansible playbooks provided by the ceph-ansible RPM. Prerequisites Consult the Installation Guide for full prerequisites. This procedure only highlights the steps relevant to the dashboard install. Procedure Ensure a [grafana-server] group with a node defined under it exists in the Ansible inventory file. Grafana and Prometheus are installed on this node. In the all.yml Ansible playbook, ensure dashboard_enabled: has not been set to False . There should be a comment indicating the default setting of True . Complete the rest of the steps necessary to install Ceph as outlined in the Installation Guide . After running ansible-playbook site.yml for bare metal installs, or ansible-playbook site-docker.yml for container installs, Ansible will print the dashboard access information. Find the dashboard URL, username, and password towards the end of the playbook output: Take note of the output You can access your dashboard web UI at http://jb-ceph4-mon:8443/ as an 'admin' user with 'p@ssw0rd' password. Note The Ansible playbook does the following: Enables the Prometheus module in ceph-mgr . Enables the dashboard module in ceph-mgr and opens TCP port 8443. Deploys the Prometheus node_exporter daemon to each node in the storage cluster. Opens TCP port 9100. Starts the node_exporter daemon. 
Deploys Grafana and Prometheus containers under Docker/systemd on the node under [grafana-server] in the Ansible inventory file. Configures Prometheus to gather data from the ceph-mgr nodes and from the node-exporters running on each Ceph host. Opens TCP port 3000. Creates the dashboard, theme, and user accounts in Grafana. Displays the Ceph Dashboard login page URL. For more information, see Installing a Red Hat Ceph Storage cluster in the Red Hat Ceph Storage Installation Guide. To remove the dashboard, see Purging the Ceph Dashboard using Ansible in the Red Hat Ceph Storage Installation Guide. 2.3. Network port requirements The Ceph dashboard components use certain TCP network ports, which must be accessible. By default, the network ports are opened automatically in firewalld during installation of Red Hat Ceph Storage. Table 2.1. TCP Port Requirements
Port 8443 (the dashboard web interface). Originating node: IP addresses that need access to the Ceph Dashboard UI, and the node under [grafana-server] in the Ansible inventory file, because the Alertmanager service can also initiate connections to the dashboard to report alerts. Destination node: the Ceph Manager nodes.
Port 3000 (Grafana). Originating node: IP addresses that need access to the Grafana Dashboard UI, all Ceph Manager hosts, and [grafana-server]. Destination node: the node under [grafana-server] in the Ansible inventory file.
Port 9090 (default Prometheus server for basic Prometheus graphs). Originating node: IP addresses that need access to the Prometheus UI, all Ceph Manager hosts, and [grafana-server] or hosts running Prometheus. Destination node: the node under [grafana-server] in the Ansible inventory file.
Port 9092 (Prometheus server for basic Prometheus graphs). Originating node: IP addresses that need access to the Prometheus UI, all Ceph Manager hosts, and [grafana-server] or hosts running Prometheus. Destination node: the node under [grafana-server] in the Ansible inventory file.
Port 9093 (Prometheus Alertmanager). Originating node: IP addresses that need access to the Alertmanager web UI, all Ceph Manager hosts, and [grafana-server] or hosts running Prometheus. Destination node: all Ceph Manager nodes and the node under [grafana-server] in the Ansible inventory file.
Port 9094 (Prometheus Alertmanager, for configuring a highly available cluster made from multiple instances). Originating node: all Ceph Manager nodes and the node under [grafana-server] in the Ansible inventory file. Destination node: Prometheus Alertmanager high availability uses peer daemon sync, so both the source and the destination should be nodes running Prometheus Alertmanager.
Port 9100 (the Prometheus node-exporter daemon). Originating node: hosts running Prometheus that need to view the Node Exporter metrics web UI, all Ceph Manager nodes, and [grafana-server] or hosts running Prometheus. Destination node: all storage cluster nodes, including MONs, OSDs, and the [grafana-server] host.
Port 9283 (Ceph Manager Prometheus exporter module). Originating node: hosts running Prometheus that need access to the Ceph exporter metrics web UI, and [grafana-server]. Destination node: all Ceph Manager nodes.
Port 9287 (Ceph iSCSI gateway data). Originating node: all Ceph Manager hosts and [grafana-server]. Destination node: all Ceph iSCSI gateway nodes.
Additional Resources For more information, see the Red Hat Ceph Storage Installation Guide. For more information, see Using and configuring firewalls in Configuring and managing networking. 2.4. Configuring dashboard ports By default, the Red Hat Ceph Storage Dashboard binds to a TCP/IP address and TCP port: the ceph-mgr daemon hosting the dashboard binds to TCP port 8443, or to port 8080 when SSL is disabled. If no specific address is configured, the web application binds to ::, which corresponds to all available IPv4 and IPv6 addresses.
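As a quick reference before the detailed procedure that follows, a typical session for inspecting and changing the dashboard bind address and port with the configuration key facility might look like the sketch below. The IP address (192.168.0.120), port (8443), and manager instance name (mgr-node1) are placeholder assumptions rather than values from this document; adjust them for your environment.
# Show the dashboard URL currently published by the active Ceph Manager
ceph mgr services
# Inspect the current cluster-wide bind address and port
ceph config get mgr mgr/dashboard/server_addr
ceph config get mgr mgr/dashboard/server_port
# Set a new cluster-wide bind address and port
ceph config set mgr mgr/dashboard/server_addr 192.168.0.120
ceph config set mgr mgr/dashboard/server_port 8443
# Optionally override the settings for a single manager instance only
ceph config set mgr mgr/dashboard/mgr-node1/server_addr 192.168.0.120
ceph config set mgr mgr/dashboard/mgr-node1/server_port 8443
# The dashboard module typically needs to be restarted to pick up the change
ceph mgr module disable dashboard
ceph mgr module enable dashboard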
You can change the IP address and the port using the configuration key facility on a cluster-wide level. Prerequisites A Red Hat Ceph Storage cluster. Installation of the Red Hat Ceph Storage Dashboard. Root-level access to all the nodes. Procedure Get the URL for accessing the dashboard: Example Get the current IP and port configuration of the ceph-mgr daemon: Example Set the IP address and the port: Syntax Example Optional: Since the ceph-mgr hosts its own instance of the dashboard, you can configure them separately. Change the IP address and port for a specific manager instance: Syntax Replace : NAME with the ID of the ceph-mgr instance hosting the dashboard. Example Additional Resources See the Knowledgebase article How to update the IP address or Port of the Ceph-dashboard for more details. 2.5. Accessing dashboard Accessing the dashboard allows you to administer and monitor your Red Hat Ceph Storage cluster. Prerequisites Successful installation of Red Hat Ceph Storage Dashboard. NTP is synchronizing clocks properly. Note A time lag can occur between the dashboard node, cluster nodes, and a browser, when the nodes are not properly synced. Ensure all nodes and the system where the browser runs have time synced by NTP. By default, when Red Hat Ceph Storage is deployed, Ansible configures NTP on all nodes. To verify, for Red Hat Enterprise Linux 7, see Configuring NTP Using ntpd , for Red Hat Enterprise Linux 8, see Using the Chrony suite to configure NTP . If you run your browser on another operating system, consult the vendor of that operating system for NTP configuration information. Note When using OpenStack Platform (OSP) with Red Hat Ceph Storage, to enable OSP Safe Mode, use one of the following methods. With Ansible, edit the group_vars/all.yml Ansible playbook, set dashboard_admin_user_ro: true and re-run ansible-playbook against site.yml , or site-container.yml , for bare-metal, or container deployments, respectively. To enable OSP Safe Mode using the ceph command, run ceph dashboard ac-user-set-roles admin read-only . To ensure the changes persist if you run the ceph-ansible Ansible playbook, edit group_vars/all.yml and set dashboard_admin_user_ro: true . Procedure Enter the following URL in a web browser: Replace: HOST_NAME with the host name of the dashboard node. PORT with port 8443 For example: On the login page, enter the username admin and the default password p@ssw0rd if you did not change the password during installation. Figure 2.1. Ceph Dashboard Login Page After logging in, the dashboard default landing page is displayed, which provides a high-level overview of status, performance, and capacity metrics of the Red Hat Ceph Storage cluster. Figure 2.2. Ceph Dashboard Default Landing Page Additional Resources For more information, see Changing the dashboard password using the dashboard in the Dashboard guide . For more information, see Changing the dashboard password using Ansible in the Dashboard guide . 2.6. Changing the dashboard password using Ansible By default, the password for accessing dashboard is set to p@ssw0rd . Important For security reasons, change the password after installation. You can change the dashboard password using Ansible. Prerequisites A running Red Hat Ceph Storage cluster. Access to the Ansible administration node. Procedure Open the Ansible playbook file /usr/share/ceph-ansible/group_vars/all.yml for editing. Uncomment and update the password on this line: to: Replace NEW_PASSWORD with your preferred password. 
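A minimal sketch of that edit, assuming the variable in group_vars/all.yml is named dashboard_admin_password as in current ceph-ansible releases; NEW_PASSWORD stands for the value you choose:
# Before: the commented-out, insecure default
#dashboard_admin_password: p@ssw0rd
# After: uncommented and set to your own value
dashboard_admin_password: NEW_PASSWORD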
Rerun the Ansible playbook file which deploys or updates the Ceph cluster. For bare metal installs, use the site.yml playbook: For container installs, use the site-docker.yml playbook: Log in using the new password. Additional Resources For more information, see Changing the dashboard password using the dashboard in the Dashboard guide . 2.7. Changing the dashboard password using the dashboard By default, the password for accessing dashboard is set to p@ssw0rd . Important For security reasons, change the password after installation. To change the password using the dashboard, also change the dashboard password setting in Ansible to ensure the password does not revert to the default password if Ansible is used to reconfigure the Red Hat Ceph Storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Update the password in the group_vars/all.yml file to prevent the password from being reset to p@ssw0rd when Ansible is used to reconfigure the Ceph cluster. Open the Ansible playbook file /usr/share/ceph-ansible/group_vars/all.yml for editing. Uncomment and update the password on this line: to: Replace NEW_PASSWORD with your preferred password. Change the password in the dashboard web user-interface. Log in to the dashboard: At the top right hand side toolbar, click the dashboard settings icon and then click User management . Locate the admin user in the Username table and click on admin . Above the table title Username , click on the Edit button. Enter the new password and confirm it by reentering it and click Edit User . You will be logged out and taken to the log in screen. A notification will appear confirming the password change. Log back in using the new password. Additional Resources For more information, see Changing the dashboard password using Ansible in the Dashboard guide . 2.8. Changing the Grafana password using Ansible By default, the password for Grafana, used by dashboard, is set to admin . Use this procedure to change the password. Important For security reasons, change the password from the default. Prerequisites A running Red Hat Ceph Storage cluster. Root access to all nodes in the cluster. Procedure Optional: If you do not know which node the Grafana container is running on, find the node listed under [grafana-server] in the Ansible hosts file, usually located at /etc/ansible/hosts : Example On the node where the Grafana container is running, change the password: Syntax Change CONTAINER_ID to the ID of the Grafana container. Change NEW_PASSWORD to the desired Grafana password. Example On the Ansible administration node, use ansible-vault to encrypt the Grafana password, and then add the encrypted password to group_vars/all.yml . Change to the /usr/share/ceph-ansible/ directory: Run ansible-vault and create a new vault password: Example Re-enter the password to confirm it: Example Enter the Grafana password, press enter, and then enter CTRL+D to complete the entry: Syntax Replace NEW_PASSWORD with the Grafana password that was set earlier. Example Take note of the output that begins with grafana_admin_password_vault: !vault | and ends with a few lines of numbers, as it will be used in the step: Example Open for editing group_vars/all.yml and paste the output from above into the file: Example Add a line below the encrypted password with the following: Example Note Using two variables as seen above is required due to a bug in Ansible that breaks the string type when assigning the vault value directly to the Ansible variable. Save and close the file. 
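Because the grafana-cli command above already changed the live Grafana password, you can optionally confirm the new credentials before re-running the playbook. This is only a sketch that uses the Grafana HTTP API; adjust the protocol, host name, and password to match your deployment:

curl -s -u admin:NewSecurePassword http://GRAFANA_NODE:3000/api/org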
Re-run ansible-playbook . For container-based deployments: Example Note that -i hosts is only necessary if you are not using the default Ansible hosts file location of /etc/ansible/hosts . For bare-metal, RPM-based deployments: Example Note that -i hosts is only necessary if you are not using the default Ansible hosts file location of /etc/ansible/hosts . 2.9. Syncing users using Red Hat Single Sign-On for the dashboard Administrators can provide access to users on Red Hat Ceph Storage Dashboard using Red Hat Single Sign-On (SSO) with Lightweight Directory Access Protocol (LDAP) integration. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level access to the dashboard. Users are added to the dashboard. Root-level access on all the nodes. Red Hat Single Sign-On installed from a ZIP file. See the Installing Red Hat Single Sign-On from a zip file section for additional information. Procedure Download the Red Hat Single Sign-On 7.4.0 Server to the system where Red Hat Ceph Storage is installed. Unzip the folder: Navigate to the standalone/configuration directory and open the standalone.xml for editing: Replace three instances of localhost and two instances of 127.0.0.1 with the IP address of the machine where Red Hat Single Sign-On is installed. Optional: For Red Hat Enterprise Linux 8, users might get Certificate Authority (CA) issues. Import the custom certificates from the CA and move them into the keystore that matches the exact Java version. Example To start the server from the bin directory of the rh-sso-7.4 folder, run the standalone boot script: Create the admin account in http://IP_ADDRESS:8080/auth with a username and password: Note The admin account has to be created only the first time you log into the console. Log into the admin console with the credentials created: To create a realm, click the Master drop-down. In this realm, administrators provide access to users and applications. In the Add Realm window, enter a name for the realm, set the parameter Enabled to ON, and click Create: Note The realm name is case-sensitive. In the Realm Settings tab, set the following parameters and click Save: Enabled - ON User-Managed Access - ON Copy the link address of SAML 2.0 Identity Provider Metadata. In the Clients tab, click Create: In the Add Client window, set the following parameters and click Save: Client ID - BASE_URL:8443/auth/saml2/metadata Example https://magna082.ceph.redhat.com:8443/auth/saml2/metadata Client Protocol - saml In the Clients window, under the Settings tab, set the following parameters and click Save: Client ID - BASE_URL:8443/auth/saml2/metadata Example https://magna082.ceph.redhat.com:8443/auth/saml2/metadata Enabled - ON Client Protocol - saml Include AuthnStatement - ON Sign Documents - ON Signature Algorithm - RSA_SHA1 SAML Signature Key Name - KEY_ID Valid Redirect URLs - BASE_URL:8443/* Example https://magna082.ceph.redhat.com:8443/* Base URL - BASE_URL:8443 Example https://magna082.ceph.redhat.com:8443/ Master SAML Processing URL - http://localhost:8080/auth/realms/REALM_NAME/protocol/saml/descriptor Example http://localhost:8080/auth/realms/Ceph_LDAP/protocol/saml/descriptor Note Paste the link of the SAML 2.0 Identity Provider Metadata from the Realm Settings tab.
Under Fine Grain SAML Endpoint Configuration, set the parameters: Assertion Consumer Service POST Binding URL - BASE_URL:8443/#/dashboard Example https://magna082.ceph.redhat.com:8443/#/dashboard Assertion Consumer Service Redirect Binding URL - BASE_URL:8443/#/dashboard Example https://magna082.ceph.redhat.com:8443/#/dashboard Logout Service Redirect Binding URL - BASE_URL:8443/ Example https://magna082.ceph.redhat.com:8443/ In the Clients window, under the Mappers tab, set the following parameters and click Save: Protocol - saml Name - username Mapper Property - User Property Property - username SAML Attribute name - username In the Client Scopes tab, select role_list : In the Mappers tab, select role list and set Single Role Attribute to ON. Select the User Federation tab: In the User Federation window, select ldap from the drop-down: In the User Federation window, under the Settings tab, set the following parameters and click Save: Console Display Name - rh-ldap Import Users - ON Edit_Mode - READ_ONLY Username LDAP attribute - username RDN LDAP attribute - username UUID LDAP attribute - nsuniqueid User Object Classes - inetOrgPerson, organizationalPerson, rhatPerson Connection URL - ldap://myldap.example.com Example ldap://ldap.corp.redhat.com Click Test Connection . You will get a notification that the LDAP connection is successful. Users DN - ou=users, dc=example, dc=com Example ou=users,dc=redhat,dc=com Bind Type - simple Click Test authentication . You will get a notification that the LDAP authentication is successful. In the Mappers tab, select the first name row, edit the following parameter, and click Save: LDAP Attribute - givenName In the User Federation tab, under the Settings tab, click Synchronize all users : You will get a notification that the user synchronization has completed successfully. In the Users tab, search for the user added to the dashboard and click the Search icon: To view the user, click its row. You should see the federation link as the name provided for the User Federation . Important Do not add users manually. If added manually, delete the user by clicking Delete . Users added to the realm and the dashboard can access the Ceph dashboard with their email address and password. Example https://magna082.ceph.redhat.com:8443 Additional Resources For adding users to the dashboard, see the Creating users on dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. For adding roles for users on the dashboard, see the Creating roles on dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. 2.10. Enabling Single Sign-On for the Ceph Dashboard The Ceph Dashboard supports external authentication of users with the Security Assertion Markup Language (SAML) protocol. Before using single sign-on (SSO) with the Ceph dashboard, create the dashboard user accounts and assign the desired roles. The Ceph Dashboard performs authorization of the users, and the authentication process is performed by an existing Identity Provider (IdP). Red Hat uses Keycloak to test the dashboard SSO feature. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard software. Launch the Dashboard. Root-level access to the Ceph Manager nodes. Installation of the following library packages on the Ceph Manager nodes: python3-saml python3-defusedxml python3-isodate python3-xmlsec Procedure To configure SSO on Ceph Dashboard, run the following command: Bare-metal deployments: Syntax Example Container deployments: Syntax Example Replace CEPH_MGR_NODE with the Ceph Manager node.
For example, ceph-mgr-hostname. CEPH_DASHBOARD_BASE_URL with the base URL where Ceph Dashboard is accessible. IDP_METADATA with the URL to a remote or local path, or the content, of the IdP metadata XML. The supported URL types are http, https, and file. Optional: IDP_USERNAME_ATTRIBUTE with the attribute used to get the username from the authentication response. Defaults to uid . Optional: IDP_ENTITY_ID with the IdP entity ID when more than one entity ID exists on the IdP metadata. Optional: SP_X_509_CERT with the file path of the certificate used by Ceph Dashboard for signing and encryption. Optional: SP_PRIVATE_KEY with the file path of the private key used by Ceph Dashboard for signing and encryption. Verify the current SAML 2.0 configuration: Bare-metal deployments: Syntax Container deployments: Syntax To enable SSO, run the following command: Bare-metal deployments: Syntax Container deployments: Syntax Open your dashboard URL. For example: On the SSO page, enter the login credentials. SSO redirects to the dashboard web interface. Additional Resources To disable single sign-on, see Disabling Single Sign-on for the Ceph Dashboard in the Red Hat Ceph Storage Dashboard Guide . 2.11. Disabling Single Sign-On for the Ceph Dashboard You can disable single sign-on for the Ceph Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard software. Launch the Dashboard. Root-level access to the Ceph Manager nodes. Single sign-on enabled for the Ceph Dashboard. Installation of the following library packages on the Ceph Manager nodes: python3-saml python3-defusedxml python3-isodate python3-xmlsec Procedure To view the status of SSO, run the following command: Bare-metal deployments: Syntax Container deployments: Syntax Replace CEPH_MGR_NODE with the Ceph Manager node. For example, ceph-mgr-hostname. To disable SSO, run the following command: Bare-metal deployments: Syntax Container deployments: Syntax Replace CEPH_MGR_NODE with the Ceph Manager node. For example, ceph-mgr-hostname. Additional Resources To enable single sign-on, see Enabling Single Sign-on for the Ceph Dashboard in the Red Hat Ceph Storage Dashboard Guide . | [
"grep grafana-server -A 1 /etc/ansible/hosts [grafana-server] jb-ceph4-mon",
"grep \"dashboard_enabled\" /usr/share/ceph-ansible/group_vars/all.yml #dashboard_enabled: True",
"2019-12-13 15:31:17,871 p=11421 u=admin | TASK [ceph-dashboard : print dashboard URL] ************************************************************ 2019-12-13 15:31:17,871 p=11421 u=admin | task path: /usr/share/ceph-ansible/roles/ceph-dashboard/tasks/main.yml:5 2019-12-13 15:31:17,871 p=11421 u=admin | Friday 13 December 2019 15:31:17 -0500 (0:00:02.189) 0:04:25.380 ******* 2019-12-13 15:31:17,934 p=11421 u=admin | ok: [jb-ceph4-mon] => msg: The dashboard has been deployed! You can access your dashboard web UI at http://jb-ceph4-mon:8443/ as an 'admin' user with 'p@ssw0rd' password.",
"ceph mgr services",
"netstat -ntlp",
"ceph config set mgr mgr/dashboard/server_addr IP_ADDRESS ceph config set mgr mgr/dashboard/server_port PORT ceph config set mgr mgr/dashboard/ssl_server_port PORT",
"ceph config set mgr mgr/dashboard/server_addr 192.168.0.120 ceph config set mgr mgr/dashboard/server_port 8443 ceph config set mgr mgr/dashboard/ssl_server_port 8443",
"ceph config set mgr mgr/dashboard/ NAME /server_addr IP_ADDRESS ceph config set mgr mgr/dashboard/ NAME /server_port PORT ceph config set mgr mgr/dashboard/ NAME /ssl_server_port PORT",
"ceph config set mgr mgr/dashboard/mgrs-0/server_addr 192.168.0.120 ceph config set mgr mgr/dashboard/mgrs-0/server_port 8443 ceph config set mgr mgr/dashboard/mgrs-0/ssl_server_port 8443",
"http:// HOST_NAME : PORT",
"http://dashboard:8443",
"#dashboard_admin_password: p@ssw0rd",
"dashboard_admin_password: NEW_PASSWORD",
"[admin@admin ceph-ansible]USD ansible-playbook -v site.yml",
"[admin@admin ceph-ansible]USD ansible-playbook -v site-docker.yml",
"#dashboard_admin_password: p@ssw0rd",
"dashboard_admin_password: NEW_PASSWORD",
"http:// HOST_NAME :8443",
"[grafana-server] grafana",
"exec CONTAINER_ID grafana-cli admin reset-admin-password --homepath \"/usr/share/grafana\" NEW_PASSWORD",
"podman exec 3f28b0309aee grafana-cli admin reset-admin-password --homepath \"/usr/share/grafana\" NewSecurePassword t=2020-10-29T17:45:58+0000 lvl=info msg=\"Connecting to DB\" logger=sqlstore dbtype=sqlite3 t=2020-10-29T17:45:58+0000 lvl=info msg=\"Starting DB migration\" logger=migrator Admin password changed successfully ✔",
"[admin@admin ~]USD cd /usr/share/ceph-ansible/",
"[admin@admin ceph-ansible]USD ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault' New Vault password:",
"[admin@admin ceph-ansible]USD ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault' New Vault password: Confirm New Vault password:",
"ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault' New Vault password: Confirm New Vault password: Reading plaintext input from stdin. (ctrl-d to end input) NEW_PASSWORD",
"[admin@admin ceph-ansible]USD ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault' New Vault password: Confirm New Vault password: Reading plaintext input from stdin. (ctrl-d to end input) NewSecurePassword",
"[admin@admin ceph-ansible]USD ansible-vault encrypt_string --stdin-name 'grafana_admin_password_vault' New Vault password: Confirm New Vault password: Reading plaintext input from stdin. (ctrl-d to end input) NewSecurePassword grafana_admin_password_vault: !vault | USDANSIBLE_VAULT;1.1;AES256 38383639646166656130326666633262643836343930373836376331326437353032376165306234 3161386334616632653530383231316631636462363761660a373338373334663434363865356633 66383963323033303662333765383938353630623433346565363534636434643634336430643438 6134306662646365370a343135316633303830653565633736303466636261326361333766613462 39353365343137323163343937636464663534383234326531666139376561663532 Encryption successful",
"grafana_admin_password_vault: !vault | USDANSIBLE_VAULT;1.1;AES256 38383639646166656130326666633262643836343930373836376331326437353032376165306234 3161386334616632653530383231316631636462363761660a373338373334663434363865356633 66383963323033303662333765383938353630623433346565363534636434643634336430643438 6134306662646365370a343135316633303830653565633736303466636261326361333766613462 39353365343137323163343937636464663534383234326531666139376561663532",
"grafana_admin_password: \"{{ grafana_admin_password_vault }}\"",
"[admin@node1 ceph-ansible]USD ansible-playbook --ask-vault-pass -v site-container.yml -i hosts",
"[admin@node1 ceph-ansible]USD ansible-playbook --ask-vault-pass -v site.yml -i hosts",
"unzip rhsso-7.4.0.zip",
"cd standalone/configuration vi standalone.xml",
"keytool -import -noprompt -trustcacerts -alias ca -file ../ca.cer -keystore /etc/java/java-1.8.0-openjdk/java-1.8.0-openjdk-1.8.0.272.b10-3.el8_3.x86_64/lib/security/cacert",
"./standalone.sh",
"ceph dashboard sso setup saml2 CEPH_DASHBOARD_BASE_URL IDP_METADATA IDP_USERNAME_ATTRIBUTE IDP_ENTITY_ID SP_X_509_CERT SP_PRIVATE_KEY",
"ceph dashboard sso setup saml2 http://dashboard_hostname.ceph.redhat.com:8443 idp-metadata.xml username http://10.70.59.125:8080/auth/realms/realm_name /home/certificate.txt /home/private-key.txt",
"exec CEPH_MGR_NODE ceph dashboard sso setup saml2 CEPH_DASHBOARD_BASE_URL IDP_METADATA IDP_USERNAME_ATTRIBUTE IDP_ENTITY_ID SP_X_509_CERT SP_PRIVATE_KEY",
"podman exec ceph-mgr-hostname ceph dashboard sso setup saml2 http://dashboard_hostname.ceph.redhat.com:8443 idp-metadata.xml username http://10.70.59.125:8080/auth/realms/realm_name /home/certificate.txt /home/private-key.txt",
"ceph dashboard sso show saml2",
"exec CEPH_MGR_NODE ceph dashboard sso show saml2",
"ceph dashboard sso enable saml2 SSO is \"enabled\" with \"SAML2\" protocol.",
"exec CEPH_MGR_NODE ceph dashboard sso enable saml2 SSO is \"enabled\" with \"SAML2\" protocol.",
"http://dashboard_hostname.ceph.redhat.com:8443",
"ceph dashboard sso status SSO is \"enabled\" with \"SAML2\" protocol.",
"exec CEPH_MGR_NODE ceph dashboard sso status SSO is \"enabled\" with \"SAML2\" protocol.",
"ceph dashboard sso disable SSO is \"disabled\".",
"exec CEPH_MGR_NODE ceph dashboard sso disable SSO is \"disabled\"."
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/dashboard_guide/ceph-dashboard-installation-and-access |
Chapter 6. Documentation Tools | Chapter 6. Documentation Tools Red Hat Enterprise Linux 6 offers the Doxygen tool for generating documentation from source code and for writing standalone documentation. 6.1. Doxygen Doxygen is a documentation tool that creates reference material both online in HTML and offline in Latex. It does this from a set of documented source files which makes it easy to keep the documentation consistent and correct with the source code. 6.1.1. Doxygen Supported Output and Languages Doxygen has support for output in: RTF (MS Word) PostScript Hyperlinked PDF Compressed HTML Unix man pages Doxygen supports the following programming languages: C C++ C# Objective-C IDL Java VHDL PHP Python Fortran D 6.1.2. Getting Started Doxygen uses a configuration file to determine its settings, therefore it is paramount that this be created correctly. Each project requires its own configuration file. The most painless way to create the configuration file is with the command doxygen -g config-file . This creates a template configuration file that can be easily edited. The variable config-file is the name of the configuration file. If it is omitted from the command, the file is called Doxyfile by default. Another useful option while creating the configuration file is the use of a minus sign ( - ) as the file name. This is useful for scripting as it will cause Doxygen to attempt to read the configuration file from standard input ( stdin ). The configuration file consists of a number of variables and tags, similar to a simple Makefile. For example: TAGNAME = VALUE1 VALUE2... For the most part these can be left alone but should it be required to edit them see the configuration page of the Doxygen documentation website for an extensive explanation of all the tags available. There is also a GUI interface called doxywizard . If this is the preferred method of editing then documentation for this function can be found on the Doxywizard usage page of the Doxygen documentation website. There are eight tags that are useful to become familiar with. INPUT For small projects consisting mainly of C or C++ source and header files it is not required to change anything. However, if the project is large and consists of a source directory or tree, then assign the root directory or directories to the INPUT tag. FILE_PATTERNS File patterns (for example, *.cpp or *.h ) can be added to this tag allowing only files that match one of the patterns to be parsed. RECURSIVE Setting this to yes will allow recursive parsing of a source tree. EXCLUDE and EXCLUDE_PATTERNS These are used to further fine-tune the files that are parsed by adding file patterns to avoid. For example, to omit all test directories from a source tree, use EXCLUDE_PATTERNS = */test/* . EXTRACT_ALL When this is set to yes , doxygen will pretend that everything in the source files is documented to give an idea of how a fully documented project would look. However, warnings regarding undocumented members will not be generated in this mode; set it back to no when finished to correct this. SOURCE_BROWSER and INLINE_SOURCES By setting the SOURCE_BROWSER tag to yes doxygen will generate a cross-reference to analyze a piece of software's definition in its source files with the documentation existing about it. These sources can also be included in the documentation by setting INLINE_SOURCES to yes .
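As a sketch, a minimal configuration file touching the tags described above might look like the following; the paths and patterns are illustrative only:

INPUT            = src include
FILE_PATTERNS    = *.cpp *.h
RECURSIVE        = YES
EXCLUDE_PATTERNS = */test/*
EXTRACT_ALL      = NO
SOURCE_BROWSER   = YES
INLINE_SOURCES   = YES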
6.1.3. Running Doxygen Running doxygen config-file creates html , rtf , latex , xml , and/or man directories in whichever directory doxygen is started in, containing the documentation for the corresponding filetype. HTML OUTPUT This documentation can be viewed with an HTML browser that supports cascading style sheets (CSS), as well as DHTML and Javascript for some sections. Point the browser (for example, Mozilla, Safari, Konqueror, or Internet Explorer 6) to the index.html in the html directory. LaTeX OUTPUT Doxygen writes a Makefile into the latex directory in order to make it easy to compile the Latex documentation. To do this, use a recent teTeX distribution. What is contained in this directory depends on whether USE_PDFLATEX is set to no . Where this is true, typing make while in the latex directory generates refman.dvi . This can then be viewed with xdvi or converted to refman.ps by typing make ps . Note that this requires dvips . There are a number of commands that may be useful. The command make ps_2on1 prints two pages on one physical page. It is also possible to convert to a PDF if a ghostscript interpreter is installed by using the command make pdf . Another valid command is make pdf_2on1 . When doing this, set the PDF_HYPERLINKS and USE_PDFLATEX tags to yes ; the generated Makefile will then contain only a target to build refman.pdf directly. RTF OUTPUT The RTF output is combined into a single file, refman.rtf , which is designed to be imported into Microsoft Word. Some information is encoded using fields, but this can be shown by selecting all ( CTRL+A or Edit -> select all) and then right-clicking and selecting the toggle fields option from the drop-down menu. XML OUTPUT The output into the xml directory consists of a number of files, one for each compound gathered by doxygen, as well as an index.xml . An XSLT script, combine.xslt , is also created that is used to combine all the XML files into a single file. Along with this, two XML schema files are created, index.xsd for the index file, and compound.xsd for the compound files, which describe the possible elements, their attributes, and how they are structured. MAN PAGE OUTPUT The documentation from the man directory can be viewed with the man program after ensuring the man path includes the generated man directory. Be aware that due to limitations with the man page format, information such as diagrams, cross-references and formulas will be lost. 6.1.4. Documenting the Sources There are three main steps to document the sources. First, ensure that EXTRACT_ALL is set to no so warnings are correctly generated and documentation is built properly. This allows doxygen to create documentation for documented members, files, classes and namespaces. There are two ways this documentation can be created: A special documentation block This comment block, containing additional marking so Doxygen knows it is part of the documentation, is in either C or C++. It consists of a brief description, or a detailed description. Both of these are optional. What is not optional, however, is the in-body description. This then links together all the comment blocks found in the body of the method or function. Note While more than one brief or detailed description is allowed, this is not recommended as the order is not specified. The following will detail the ways in which a comment block can be marked as a detailed description: C-style comment block, starting with two asterisks (*) in the JavaDoc style.
C-style comment block using the Qt style, consisting of an exclamation mark (!) instead of an extra asterisk. The beginning asterisks on the documentation lines can be left out in both cases if that is preferred. A blank beginning and end line in C++ is also acceptable, with either three forward slashes or two forward slashes and an exclamation mark. or Alternatively, in order to make the comment blocks more visible, a line of asterisks or forward slashes can be used. or Note that the two forward slashes at the end of the normal comment block start a special comment block. There are three ways to add a brief description to documentation. To add a brief description, use \brief above one of the comment blocks. This brief section ends at the end of the paragraph and any further paragraphs are the detailed descriptions. By setting JAVADOC_AUTOBRIEF to yes , the brief description will only last until the first dot followed by a space or new line, consequently limiting the brief description to a single sentence. This can also be used with the above mentioned three-slash comment blocks (///). The third option is to use a special C++ style comment, ensuring this does not span more than one line. or The blank line in the above example is required to separate the brief description and the detailed description, and JAVADOC_AUTOBRIEF must be set to no . Examples of how to document a piece of C++ code using the Qt style can be found on the Doxygen documentation website. It is also possible to have the documentation after members of a file, struct, union, class, or enum. To do this, add a < marker in the comment block. Or, in the Qt style: or or For brief descriptions after a member use: or Examples of these and how the HTML is produced can be viewed on the Doxygen documentation website. Documentation at other places While it is preferable to place documentation in front of the code it is documenting, at times it is only possible to put it in a different location, especially if a file is to be documented; after all it is impossible to place the documentation in front of a file. This is best avoided unless it is absolutely necessary as it can lead to some duplication of information. To do this it is important to have a structural command inside the documentation block. Structural commands start with a backslash (\) or an at-sign (@) for JavaDoc and are followed by one or more parameters. In the above example the command \class is used. This indicates that the comment block contains documentation for the class 'Test'. Others are: \struct : document a C-struct \union : document a union \enum : document an enumeration type \fn : document a function \var : document a variable, typedef, or enum value \def : document a #define \typedef : document a type definition \file : document a file \namespace : document a namespace \package : document a Java package \interface : document an IDL interface Next, the contents of a special documentation block are parsed before being written to the HTML and/or Latex output directories. This includes: Special commands are executed. Any white space and asterisks (*) are removed. Blank lines are taken as new paragraphs. Words are linked to their corresponding documentation. Where the word is preceded by a percent sign (%) the percent sign is removed and the word remains. Where certain patterns are found in the text, links to members are created. Examples of this can be found on the automatic link generation page on the Doxygen documentation website. When the documentation is for Latex, HTML tags are interpreted and converted to Latex equivalents. A list of supported HTML tags can be found on the HTML commands page on the Doxygen documentation website.
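As a short, hypothetical illustration of the constructs described in this section (a JavaDoc-style block with \brief, a structural \file command, and an after-member comment), a header fragment might be documented like this; the names are invented for the example:

/*! \file widget.h
    \brief Declarations for the example Widget interface. */

/**
 * \brief Computes a checksum of a buffer.
 *
 * The detailed description starts here and continues until
 * the end of this comment block.
 */
int checksum(const char *buf, int len);

int retries; /*!< Number of retry attempts, documented after the member. */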
6.1.5. Resources More information can be found on the Doxygen website. Doxygen homepage Doxygen introduction Doxygen documentation Output formats | [
"/** * ... documentation */",
"/*! * ... documentation */",
"/// /// ... documentation ///",
"//! //! ... documentation //!",
"///////////////////////////////////////////////// /// ... documentation /////////////////////////////////////////////////",
"/********************************************//** * ... documentation ***********************************************/",
"/*! \\brief brief documentation . * brief documentation . * * detailed documentation . */",
"/** Brief documentation . Detailed documentation continues * from here . */",
"/// Brief documentation . /** Detailed documentation . */",
"//! Brief documentation. //! Detailed documentation //! starts here",
"int var; /*!< detailed description after the member */",
"int var; /**< detailed description after the member */",
"int var; //!< detailed description after the member //!<",
"int var; ///< detailed description after the member ///<",
"int var; //!< brief description after the member",
"int var; ///< brief description after the member",
"/*! \\class Test \\brief A test class . A more detailed description of class . */"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/chap-Documentation_Tools |
Chapter 16. Scaling overcloud nodes | Chapter 16. Scaling overcloud nodes If you want to add or remove nodes after the creation of the overcloud, you must update the overcloud. Warning Do not use openstack server delete to remove nodes from the overcloud. Follow the procedures in this section to remove and replace nodes correctly. Note Ensure that your bare metal nodes are not in maintenance mode before you begin scaling out or removing an overcloud node. Use the following table to determine support for scaling each node type: Table 16.1. Scale support for each node type Node type Scale up? Scale down? Notes Controller N N You can replace Controller nodes using the procedures in Chapter 17, Replacing Controller nodes . Compute Y Y Ceph Storage nodes Y N You must have at least 1 Ceph Storage node from the initial overcloud creation. Object Storage nodes Y Y Important Ensure that you have at least 10 GB free space before you scale the overcloud. This free space accommodates image conversion and caching during the node provisioning process. 16.1. Adding nodes to the overcloud Complete the following steps to add more nodes to the director node pool. Note A fresh installation of Red Hat OpenStack Platform does not include certain updates, such as security errata and bug fixes. As a result, if you are scaling up a connected environment that uses the Red Hat Customer Portal or Red Hat Satellite Server, RPM updates are not applied to new nodes. To apply the latest updates to the overcloud nodes, you must do one of the following: Complete an overcloud update of the nodes after the scale-out operation. Use the virt-customize tool to modify the packages to the base overcloud image before the scale-out operation. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize . Procedure Create a new JSON file called newnodes.json that contains details of the new node that you want to register: Register the new nodes: After you register the new nodes, launch the introspection process for each new node: This process detects and benchmarks the hardware properties of the nodes. Configure the image properties for the node: 16.2. Increasing node counts for roles Complete the following steps to scale overcloud nodes for a specific role, such as a Compute node. Procedure Tag each new node with the role you want. For example, to tag a node with the Compute role, run the following command: To scale the overcloud, you must edit the environment file that contains your node counts and re-deploy the overcloud. For example, to scale your overcloud to 5 Compute nodes, edit the ComputeCount parameter: Rerun the deployment command with the updated file, which in this example is called node-info.yaml : Ensure that you include all environment files and options from your initial overcloud creation. This includes the same scale parameters for non-Compute nodes. Wait until the deployment operation completes. 16.3. Removing or replacing a Compute node In some situations you need to remove a Compute node from the overcloud. For example, you might need to replace a problematic Compute node. When you delete a Compute node the node's index is added by default to the denylist to prevent the index being reused during scale out operations. You can replace the removed Compute node after you have removed the node from your overcloud deployment. 
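Before you remove or replace nodes, it can also help to re-check the note at the start of this chapter that none of the bare metal nodes are in maintenance mode. One way to check, shown here as a sketch:

(undercloud) $ openstack baremetal node list -c UUID -c Name -c Maintenance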
Prerequisites The Compute service is disabled on the nodes that you want to remove to prevent the nodes from scheduling new instances. To confirm that the Compute service is disabled, use the following command: If the Compute service is not disabled then disable it: Tip Use the --disable-reason option to add a short explanation on why the service is being disabled. This is useful if you intend to redeploy the Compute service. The workloads on the Compute nodes have been migrated to other Compute nodes. For more information, see Migrating virtual machine instances between Compute nodes . If Instance HA is enabled, choose one of the following options: If the Compute node is accessible, log in to the Compute node as the root user and perform a clean shutdown with the shutdown -h now command. If the Compute node is not accessible, log in to a Controller node as the root user, disable the STONITH device for the Compute node, and shut down the bare metal node: Procedure Source the undercloud configuration: Identify the UUID of the overcloud stack: Identify the UUIDs or hostnames of the Compute nodes that you want to delete: Optional: Run the overcloud deploy command with the --update-plan-only option to update the plans with the most recent configurations from the templates. This ensures that the overcloud configuration is up-to-date before you delete any Compute nodes: Note This step is required if you updated the overcloud node denylist. For more information about adding overcloud nodes to the denylist, see Blacklisting nodes . Delete the Compute nodes from the stack: Replace <overcloud> with the name or UUID of the overcloud stack. Replace <node_1> , and optionally all nodes up to [node_n] , with the Compute service hostname or UUID of the Compute nodes you want to delete. Do not use a mix of UUIDs and hostnames. Use either only UUIDs or only hostnames. Note If the node has already been powered off, this command returns a WARNING message: You can ignore this message. Wait for the Compute nodes to delete. Delete the network agents for each node that you deleted: Check the status of the overcloud stack when the node deletion is complete: Table 16.2. Result Status Description UPDATE_COMPLETE The delete operation completed successfully. UPDATE_FAILED The delete operation failed. A common reason for a failed delete operation is an unreachable IPMI interface on the node that you want to remove. When the delete operation fails, you must manually remove the Compute node. For more information, see Removing a Compute node manually from the overcloud . If Instance HA is enabled, perform the following actions: Clean up the Pacemaker resources for the node: Delete the STONITH device for the node: If you are not replacing the removed Compute nodes on the overcloud, then decrease the ComputeCount parameter in the environment file that contains your node counts. This file is usually named node-info.yaml . For example, decrease the node count from four nodes to three nodes if you removed one node: Decreasing the node count ensures that director does not provision any new nodes when you run openstack overcloud deploy . If you are replacing the removed Compute node on your overcloud deployment, see Replacing a removed Compute node . 16.3.1. Removing a Compute node manually If the openstack overcloud node delete command failed due to an unreachable node, then you must manually complete the removal of the Compute node from the overcloud. 
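Before starting the manual cleanup, it is often worth confirming why the automated delete failed by reviewing the Ansible log referenced in the warning output, for example:

less /var/lib/mistral/overcloud/ansible.log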
Prerequisites Performing the Removing or replacing a Compute node procedure returned a status of UPDATE_FAILED . Procedure Identify the UUID of the overcloud stack: Identify the UUID of the node that you want to manually remove: Move the node that you want to remove to maintenance mode: Note Wait for the Compute service to synchronize its state with the Bare Metal service. This can take up to four minutes. Source the overcloud configuration: Stop all the podman containers on the Compute node that you want to manually remove: Delete the network agents for the node that you removed: Replace <scaled_down_node> with the name of the node to remove. Confirm that the Compute service is disabled on the removed node on the overcloud, to prevent the node from scheduling new instances: If the Compute service is not disabled, disable it: Tip Use the --disable-reason option to add a short explanation on why the service is being disabled. This is useful if you intend to redeploy the Compute service. Delete the Compute service from the node: Remove the deleted Compute service as a resource provider from the Placement service: Source the undercloud configuration: Delete the Compute node from the stack: Replace <overcloud> with the name or UUID of the overcloud stack. Replace <node> with the Compute service host name or UUID of the Compute node that you want to delete. Note If the node has already been powered off, this command returns a WARNING message: You can ignore this message. Wait for the overcloud node to delete. Check the status of the overcloud stack when the node deletion is complete: Table 16.3. Result Status Description UPDATE_COMPLETE The delete operation completed successfully. UPDATE_FAILED The delete operation failed. If the overcloud node fails to delete while in maintenance mode, then the problem might be with the hardware. If Instance HA is enabled, perform the following actions: Clean up the Pacemaker resources for the node: Delete the STONITH device for the node: If you are not replacing the removed Compute node on the overcloud, then decrease the ComputeCount parameter in the environment file that contains your node counts. This file is usually named node-info.yaml . For example, decrease the node count from four nodes to three nodes if you removed one node: Decreasing the node count ensures that director does not provision any new nodes when you run openstack overcloud deploy . If you are replacing the removed Compute node on your overcloud deployment, see Replacing a removed Compute node . 16.3.2. Replacing a removed Compute node To replace a removed Compute node on your overcloud deployment, you can register and inspect a new Compute node or re-add the removed Compute node. You must also configure your overcloud to provision the node. Procedure Optional: To reuse the index of the removed Compute node, configure the RemovalPoliciesMode and the RemovalPolicies parameters for the role to replace the denylist when a Compute node is removed: Replace the removed Compute node: To add a new Compute node, register, inspect, and tag the new node to prepare it for provisioning. For more information, see Configuring a basic overcloud . To re-add a Compute node that you removed manually, remove the node from maintenance mode: Rerun the openstack overcloud deploy command that you used to deploy the existing overcloud. Wait until the deployment process completes. 
Confirm that director has successfully registered the new Compute node: If you performed step 1 to set the RemovalPoliciesMode for the role to update , then you must reset the RemovalPoliciesMode for the role to the default value, append , to add the Compute node index to the current denylist when a Compute node is removed: Rerun the openstack overcloud deploy command that you used to deploy the existing overcloud. 16.4. Preserving hostnames when replacing nodes that use predictable IP addresses and HostNameMap If you configured your overcloud to use predictable IP addresses, and HostNameMap to map heat-based hostnames to the hostnames of pre-provisioned nodes, then you must configure your overcloud to map the new replacement node index to an IP address and hostname. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Retrieve the physical_resource_id and the removed_rsrc_list for the resource you want to replace: Replace <stack> with the name of the stack the resource belongs to, for example, overcloud . Replace <role> with the name of the role that you want to replace the node for, for example, Compute . Example output: +------------------------+-----------------------------------------------------------+ | Field | Value | +------------------------+-----------------------------------------------------------+ | attributes | {u'attributes': None, u'refs': None, u'refs_map': None, | | | u'removed_rsrc_list': [u'2', u'3']} | 1 | creation_time | 2017-09-05T09:10:42Z | | description | | | links | [{u'href': u'http://192.168.24.1:8004/v1/bd9e6da805594de9 | | | 8d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503/resources/Compute', u'rel': | | | u'self'}, {u'href': u'http://192.168.24.1:8004/v1/bd9e6da | | | 805594de98d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503', u'rel': u'stack'}, {u'href': u'h | | | ttp://192.168.24.1:8004/v1/bd9e6da805594de98d4a1d3a3ee874 | | | dd/stacks/overcloud-Compute-zkjccox63svg/7632fb0b- | | | 80b1-42b3-9ea7-6114c89adc29', u'rel': u'nested'}] | | logical_resource_id | Compute | | physical_resource_id | 7632fb0b-80b1-42b3-9ea7-6114c89adc29 | | required_by | [u'AllNodesDeploySteps', | | | u'ComputeAllNodesValidationDeployment', | | | u'AllNodesExtraConfig', u'ComputeIpListMap', | | | u'ComputeHostsDeployment', u'UpdateWorkflow', | | | u'ComputeSshKnownHostsDeployment', u'hostsConfig', | | | u'SshKnownHostsConfig', u'ComputeAllNodesDeployment'] | | resource_name | Compute | | resource_status | CREATE_COMPLETE | | resource_status_reason | state changed | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2017-09-05T09:10:42Z | +------------------------+-----------------------------------------------------------+ 1 The removed_rsrc_list lists the indexes of nodes that have already been removed for the resource. Retrieve the resource_name to determine the maximum index that heat has applied to a node for this resource: Replace <physical_resource_id> with the ID you retrieved in step 3. Use the resource_name and the removed_rsrc_list to determine the index that heat will apply to a new node: If removed_rsrc_list is empty, then the index will be (current_maximum_index) + 1. If removed_rsrc_list includes the value (current_maximum_index) + 1, then the index will be the available index. 
Retrieve the ID of the replacement bare-metal node: Update the capability of the replacement node with the new index: Replace <role> with the name of the role that you want to replace the node for, for example, compute . Replace <index> with the index calculated in step 5. Replace <node> with the ID of the bare metal node. The Compute scheduler uses the node capability to match the node on deployment. Assign a hostname to the new node by adding the index to the HostnameMap configuration, for example: 1 Node that you are removing and replacing with the new node. 2 New node. 3 Node that you are removing and replacing with the new node. 4 New node. Note Do not delete the mapping for the removed node from HostnameMap . Add the IP address for the replacement node to the end of each network IP address list in your network IP address mapping file, ips-from-pool-all.yaml . In the following example, the IP address for the new index, overcloud-controller-3 , is added to the end of the IP address list for each ControllerIPs network, and is assigned the same IP address as overcloud-controller-1 because it replaces overcloud-controller-1 . The IP address for the new index, overcloud-compute-8 , is also added to the end of the IP address list for each ComputeIPs network, and is assigned the same IP address as the index it replaces, overcloud-compute-3 : 1 IP address assigned to index 0, host name overcloud-controller-prod-123-0 . 2 IP address assigned to index 1, host name overcloud-controller-prod-456-0 . This node is replaced by index 3. Do not remove this entry. 3 IP address assigned to index 2, host name overcloud-controller-prod-789-0 . 4 IP address assigned to index 3, host name overcloud-controller-prod-456-0 . This is the new node that replaces index 1. 5 IP address assigned to index 0, host name overcloud-compute-0 . 6 IP address assigned to index 1, host name overcloud-compute-3 . This node is replaced by index 2. Do not remove this entry. 7 IP address assigned to index 2, host name overcloud-compute-8 . This is the new node that replaces index 1. 16.5. Replacing Ceph Storage nodes You can use director to replace Ceph Storage nodes in a director-created cluster. For more information, see the Deploying an Overcloud with Containerized Red Hat Ceph guide. 16.6. Replacing Object Storage nodes Follow the instructions in this section to understand how to replace Object Storage nodes without impact to the integrity of the cluster. This example involves a three-node Object Storage cluster in which you want to replace the node overcloud-objectstorage-1 node. The goal of the procedure is to add one more node and then remove the overcloud-objectstorage-1 node. The new node replaces the overcloud-objectstorage-1 node. Procedure Increase the Object Storage count using the ObjectStorageCount parameter. This parameter is usually located in node-info.yaml , which is the environment file that contains your node counts: The ObjectStorageCount parameter defines the quantity of Object Storage nodes in your environment. In this example, scale the quantity of Object Storage nodes from 3 to 4 . Run the deployment command with the updated ObjectStorageCount parameter: After the deployment command completes, the overcloud contains an additional Object Storage node. Replicate data to the new node. Before you remove a node, in this case, overcloud-objectstorage-1 , wait for a replication pass to finish on the new node. Check the replication pass progress in the /var/log/swift/swift.log file. 
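For example, you can watch for the completion messages with a simple grep on the new node. This is only a sketch; the exact log destination can differ depending on how rsyslog is configured:

grep -E "Object replication complete|Replication run OVER" /var/log/swift/swift.log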
When the pass finishes, the Object Storage service should log entries similar to the following example: To remove the old node from the ring, reduce the ObjectStorageCount parameter to omit the old node. In this example, reduce the ObjectStorageCount parameter to 3 : Create a new environment file named remove-object-node.yaml . This file identifies and removes the specified Object Storage node. The following content specifies the removal of overcloud-objectstorage-1 : Include both the node-info.yaml and remove-object-node.yaml files in the deployment command: Director deletes the Object Storage node from the overcloud and updates the rest of the nodes on the overcloud to accommodate the node removal. Important Include all environment files and options from your initial overcloud creation. This includes the same scale parameters for non-Compute nodes. 16.7. Using skip deploy identifier During a stack update operation puppet, by default, reapplies all manifests. This can result in a time consuming operation, which may not be required. To override the default operation, use the skip-deploy-identifier option. Use this option if you do not want the deployment command to generate a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps only trigger if there is an actual change to the configuration. Use this option with caution and only if you are confident that you do not need to run the software configuration, such as scaling out certain roles. Note If there is a change to the puppet manifest or hierdata, puppet will reapply all manifests even when --skip-deploy-identifier is specified. 16.8. Blacklisting nodes You can exclude overcloud nodes from receiving an updated deployment. This is useful in scenarios where you want to scale new nodes and exclude existing nodes from receiving an updated set of parameters and resources from the core heat template collection. This means that the blacklisted nodes are isolated from the effects of the stack operation. Use the DeploymentServerBlacklist parameter in an environment file to create a blacklist. Setting the blacklist The DeploymentServerBlacklist parameter is a list of server names. Write a new environment file, or add the parameter value to an existing custom environment file and pass the file to the deployment command: Note The server names in the parameter value are the names according to OpenStack Orchestration (heat), not the actual server hostnames. Include this environment file with your openstack overcloud deploy command: Heat blacklists any servers in the list from receiving updated heat deployments. After the stack operation completes, any blacklisted servers remain unchanged. You can also power off or stop the os-collect-config agents during the operation. Warning Exercise caution when you blacklist nodes. Only use a blacklist if you fully understand how to apply the requested change with a blacklist in effect. It is possible to create a hung stack or configure the overcloud incorrectly when you use the blacklist feature. For example, if cluster configuration changes apply to all members of a Pacemaker cluster, blacklisting a Pacemaker cluster member during this change can cause the cluster to fail. Do not use the blacklist during update or upgrade procedures. Those procedures have their own methods for isolating changes to particular servers. When you add servers to the blacklist, further changes to those nodes are not supported until you remove the server from the blacklist. 
This includes updates, upgrades, scale up, scale down, and node replacement. For example, when you blacklist existing Compute nodes while scaling out the overcloud with new Compute nodes, the blacklisted nodes miss the information added to /etc/hosts and /etc/ssh/ssh_known_hosts . This can cause live migration to fail, depending on the destination host. The Compute nodes are updated with the information added to /etc/hosts and /etc/ssh/ssh_known_hosts during the overcloud deployment where they are no longer blacklisted. Do not modify the /etc/hosts and /etc/ssh/ssh_known_hosts files manually. To modify the /etc/hosts and /etc/ssh/ssh_known_hosts files, run the overcloud deploy command as described in the Clearing the Blacklist section. Clearing the blacklist To clear the blacklist for subsequent stack operations, edit the DeploymentServerBlacklist to use an empty array: Warning Do not omit the DeploymentServerBlacklist parameter. If you omit the parameter, the overcloud deployment uses the previously saved value. | [
"{ \"nodes\":[ { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.168.24.207\" }, { \"mac\":[ \"ee:ee:ee:ee:ee:ee\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.168.24.208\" } ] }",
"source ~/stackrc (undercloud) USD openstack overcloud node import newnodes.json",
"(undercloud) USD openstack overcloud node introspect [NODE UUID] --provide",
"(undercloud) USD openstack overcloud node configure [NODE UUID]",
"(undercloud) USD openstack baremetal node set --property capabilities='profile:compute,boot_option:local' [NODE UUID]",
"parameter_defaults: ComputeCount: 5",
"(undercloud) USD openstack overcloud deploy --templates -e /home/stack/templates/node-info.yaml [OTHER_OPTIONS]",
"(overcloud)USD openstack compute service list",
"(overcloud)USD openstack compute service set <hostname> nova-compute --disable",
"pcs stonith disable <stonith_resource_name> [stack@undercloud ~]USD source stackrc [stack@undercloud ~]USD openstack baremetal node power off <UUID>",
"(overcloud)USD source ~/stackrc",
"(undercloud)USD openstack stack list",
"(undercloud)USD openstack server list",
"openstack overcloud deploy --update-plan-only --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/storage-environment.yaml -e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml [-e |...]",
"openstack overcloud node delete --stack <overcloud> <node_1> ... [node_n]",
"Ansible failed, check log at /var/lib/mistral/overcloud/ansible.log WARNING: Scale-down configuration error. Manual cleanup of some actions may be necessary. Continuing with node removal.",
"(overcloud)USD for AGENT in USD(openstack network agent list --host <scaled_down_node> -c ID -f value) ; do openstack network agent delete USDAGENT ; done",
"(undercloud)USD openstack stack list",
"sudo pcs resource delete <scaled_down_node> sudo cibadmin -o nodes --delete --xml-text '<node id=\"<scaled_down_node>\"/>' sudo cibadmin -o fencing-topology --delete --xml-text '<fencing-level target=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete --xml-text '<node_state id=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete-all --xml-text '<node id=\"<scaled_down_node>\"/>' --force",
"sudo pcs stonith delete <device-name>",
"parameter_defaults: ComputeCount: 3",
"(undercloud)USD openstack stack list",
"(undercloud)USD openstack baremetal node list",
"(undercloud)USD openstack baremetal node maintenance set <node_uuid>",
"(undercloud)USD source ~/overcloudrc",
"sudo systemctl stop tripleo_*",
"(overcloud)USD for AGENT in USD(openstack network agent list --host <scaled_down_node> -c ID -f value) ; do openstack network agent delete USDAGENT ; done",
"(overcloud)USD openstack compute service list",
"(overcloud)USD openstack compute service set <hostname> nova-compute --disable",
"(overcloud)USD openstack compute service delete <service_id>",
"(overcloud)USD openstack resource provider list (overcloud)USD openstack resource provider delete <uuid>",
"(overcloud)USD source ~/stackrc",
"(undercloud)USD openstack overcloud node delete --stack <overcloud> <node>",
"Ansible failed, check log at `/var/lib/mistral/overcloud/ansible.log` WARNING: Scale-down configuration error. Manual cleanup of some actions may be necessary. Continuing with node removal.",
"(undercloud)USD openstack stack list",
"sudo pcs resource delete <scaled_down_node> sudo cibadmin -o nodes --delete --xml-text '<node id=\"<scaled_down_node>\"/>' sudo cibadmin -o fencing-topology --delete --xml-text '<fencing-level target=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete --xml-text '<node_state id=\"<scaled_down_node>\"/>' sudo cibadmin -o status --delete-all --xml-text '<node id=\"<scaled_down_node>\"/>' --force",
"sudo pcs stonith delete <device-name>",
"parameter_defaults: ComputeCount: 3",
"parameter_defaults: <RoleName>RemovalPoliciesMode: update <RoleName>RemovalPolicies: [{'resource_list': []}]",
"(undercloud)USD openstack baremetal node maintenance unset <node_uuid>",
"(undercloud)USD openstack baremetal node list",
"parameter_defaults: <RoleName>RemovalPoliciesMode: append",
"source ~/stackrc",
"(undercloud)USD openstack stack resource show <stack> <role>",
"+------------------------+-----------------------------------------------------------+ | Field | Value | +------------------------+-----------------------------------------------------------+ | attributes | {u'attributes': None, u'refs': None, u'refs_map': None, | | | u'removed_rsrc_list': [u'2', u'3']} | 1 | creation_time | 2017-09-05T09:10:42Z | | description | | | links | [{u'href': u'http://192.168.24.1:8004/v1/bd9e6da805594de9 | | | 8d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503/resources/Compute', u'rel': | | | u'self'}, {u'href': u'http://192.168.24.1:8004/v1/bd9e6da | | | 805594de98d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503', u'rel': u'stack'}, {u'href': u'h | | | ttp://192.168.24.1:8004/v1/bd9e6da805594de98d4a1d3a3ee874 | | | dd/stacks/overcloud-Compute-zkjccox63svg/7632fb0b- | | | 80b1-42b3-9ea7-6114c89adc29', u'rel': u'nested'}] | | logical_resource_id | Compute | | physical_resource_id | 7632fb0b-80b1-42b3-9ea7-6114c89adc29 | | required_by | [u'AllNodesDeploySteps', | | | u'ComputeAllNodesValidationDeployment', | | | u'AllNodesExtraConfig', u'ComputeIpListMap', | | | u'ComputeHostsDeployment', u'UpdateWorkflow', | | | u'ComputeSshKnownHostsDeployment', u'hostsConfig', | | | u'SshKnownHostsConfig', u'ComputeAllNodesDeployment'] | | resource_name | Compute | | resource_status | CREATE_COMPLETE | | resource_status_reason | state changed | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2017-09-05T09:10:42Z | +------------------------+-----------------------------------------------------------+",
"(undercloud)USD openstack stack resource list <physical_resource_id>",
"(undercloud)USD openstack baremetal node list",
"openstack baremetal node set --property capabilities='node:<role>-<index>,boot_option:local' <node>",
"parameter_defaults: ControllerSchedulerHints: 'capabilities:node': 'controller-%index%' ComputeSchedulerHints: 'capabilities:node': 'compute-%index%' HostnameMap: overcloud-controller-0: overcloud-controller-prod-123-0 overcloud-controller-1: overcloud-controller-prod-456-0 1 overcloud-controller-2: overcloud-controller-prod-789-0 overcloud-controller-3: overcloud-controller-prod-456-0 2 overcloud-compute-0: overcloud-compute-prod-abc-0 overcloud-compute-3: overcloud-compute-prod-abc-3 3 overcloud-compute-8: overcloud-compute-prod-abc-3 4 .",
"parameter_defaults: ControllerIPs: internal_api: - 192.168.1.10 1 - 192.168.1.11 2 - 192.168.1.12 3 - 192.168.1.11 4 storage: - 192.168.2.10 - 192.168.2.11 - 192.168.2.12 - 192.168.2.11 ComputeIPs: internal_api: - 172.17.0.10 5 - 172.17.0.11 6 - 172.17.0.11 7 storage: - 172.17.0.10 - 172.17.0.11 - 172.17.0.11",
"parameter_defaults: ObjectStorageCount: 4",
"source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e node-info.yaml <environment_files>",
"Mar 29 08:49:05 localhost *object-server: Object replication complete.* Mar 29 08:49:11 localhost *container-server: Replication run OVER* Mar 29 08:49:13 localhost *account-server: Replication run OVER*",
"parameter_defaults: ObjectStorageCount: 3",
"parameter_defaults: ObjectStorageRemovalPolicies: [{'resource_list': ['1']}]",
"(undercloud) USD openstack overcloud deploy --templates -e node-info.yaml <environment_files> -e remove-object-node.yaml",
"openstack overcloud deploy --skip-deploy-identifier",
"parameter_defaults: DeploymentServerBlacklist: - overcloud-compute-0 - overcloud-compute-1 - overcloud-compute-2",
"source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e server-blacklist.yaml [OTHER OPTIONS]",
"parameter_defaults: DeploymentServerBlacklist: []"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_scaling-overcloud-nodes |
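A minimal verification sketch to run after the scale-down steps above. It only reuses commands that already appear in this section; the credential files and node names are the usual placeholders, so adapt them before running anything.
# From the undercloud, confirm the stack update finished and the node is gone.
source ~/stackrc
openstack stack list            # the overcloud stack should report UPDATE_COMPLETE
openstack baremetal node list   # the removed node should show no instance UUID
openstack server list           # the deleted overcloud server should no longer be listed
# From the overcloud, confirm no stale compute services or network agents remain.
source ~/overcloudrc
openstack compute service list
openstack network agent list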
3.4. Winbind | 3.4. Winbind Samba must be configured before Winbind can be configured as an identity store for a system. A Samba server must be set up and used for user accounts, or Samba must be configured to use Active Directory as a back end identity store. Configuring Samba is covered in the Samba project documentation . Specifically configuring Samba as an integration point with Active Directory is also covered in the Using Samba for Active Directory Integration section in the Windows Integration Guide . 3.4.1. Enabling Winbind in the authconfig GUI Install the samba-winbind package. This is required for Windows integration features in Samba services, but is not installed by default. Open the authconfig UI. In the Identity & Authentication tab, select Winbind in the User Account Database drop-down menu. Set the information that is required to connect to the Microsoft Active Directory domain controller. Winbind Domain gives the Windows domain to connect to. This should be in the Windows 2000 format, such as DOMAIN . Security Model sets the security model to use for Samba clients. authconfig supports four types of security models: ads configures Samba to act as a domain member in an Active Directory Server realm. To operate in this mode, the krb5-server package must be installed and Kerberos must be configured properly. domain has Samba validate the user name and password by authenticating it through a Windows primary or backup domain controller, much like a Windows server. server has a local Samba server validate the user name and password by authenticating it through another server, such as a Windows server. If the server authentication attempt fails, the system then attempts to authenticate using user mode. user requires a client to log in with a valid user name and password. This mode does support encrypted passwords. The user name format must be domain\user , such as EXAMPLE\jsmith . Note When verifying that a given user exists in the Windows domain, always use the domain\user_name format and escape the backslash (\) character. For example: This is the default option. Winbind ADS Realm gives the Active Directory realm that the Samba server will join. This is only used with the ads security model. Winbind Domain Controllers gives the host name or IP address of the domain controller to use to enroll the system. Template Shell sets which login shell to use for Windows user account settings. Allow offline login allows authentication information to be stored in a local cache. The cache is referenced when a user attempts to authenticate to system resources while the system is offline. 3.4.2. Enabling Winbind in the Command Line Windows domains have several different security models, and the security model used in the domain determines the authentication configuration for the local system. For user and server security models, the Winbind configuration requires only the domain (or workgroup) name and the domain controller host names. The --winbindjoin parameter sets the user to use to connect to the Active Directory domain, and --enablelocalauthorize sets local authorization operations to check the /etc/passwd file. After running the authconfig command, join the Active Directory domain. Note The user name format must be domain\user , such as EXAMPLE\jsmith . When verifying that a given user exists in the Windows domain, always use the domain\user formats and escape the backslash (\) character. 
For example: For ads and domain security models, the Winbind configuration allows additional configuration for the template shell and realm (ads only). For example: There are a lot of other options for configuring Windows-based authentication and the information for Windows user accounts, such as name formats, whether to require the domain name with the user name, and UID ranges. These options are listed in the authconfig help. | [
"yum install samba-winbind",
"authconfig-gtk",
"getent passwd domain\\\\user DOMAIN\\user:*:16777216:16777216:Name Surname:/home/DOMAIN/user:/bin/bash",
"authconfig --enablewinbind --enablewinbindauth --smbsecurity=user|server --enablewinbindoffline --smbservers=ad.example.com --smbworkgroup=EXAMPLE --update --enablelocauthorize --winbindjoin=admin net join ads",
"getent passwd domain\\\\user DOMAIN\\user:*:16777216:16777216:Name Surname:/home/DOMAIN/user:/bin/bash",
"authconfig --enablewinbind --enablewinbindauth --smbsecurity ads --enablewinbindoffline --smbservers=ad.example.com --smbworkgroup=EXAMPLE --smbrealm EXAMPLE.COM --winbindtemplateshell=/bin/sh --update"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/winbind-auth |
Getting Started with Debezium | Getting Started with Debezium Red Hat Integration 2023.q4 For use with Red Hat Integration 2.3.4 Red Hat Integration Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/getting_started_with_debezium/index |
Appendix A. Tips for Developers | Appendix A. Tips for Developers Every good programming textbook covers problems with memory allocation and the performance of specific functions. As you develop your software, be aware of issues that might increase power consumption on the systems on which the software runs. Although these considerations do not affect every line of code, you can optimize your code in areas which are frequent bottlenecks for performance. Some techniques that are often problematic include: using threads. unnecessary CPU wake-ups and not using wake-ups efficiently. If you must wake up, do everything at once (race to idle) and as quickly as possible. using [f]sync() unnecessarily. unnecessary active polling or using short, regular timeouts. (React to events instead). inefficient disk access. Use large buffers to avoid frequent disk access. Write one large block at a time. inefficient use of timers. Group timers across applications (or even across systems) if possible. excessive I/O, power consumption, or memory usage (including memory leaks). performing unnecessary computation. The following sections examine some of these areas in greater detail. A.1. Using Threads It is widely believed that using threads makes applications perform better and faster, but this is not true in every case. Python Python uses the Global Interpreter Lock [1] , so threading is profitable only for larger I/O operations. Unladen-swallow [2] is a faster implementation of Python with which you might be able to optimize your code. Perl Perl threads were originally created for applications running on systems without forking (such as systems with 32-bit Windows operating systems). In Perl threads, the data is copied for every single thread (Copy On Write). Data is not shared by default, because users should be able to define the level of data sharing. For data sharing, the threads::shared module must be included. However, data is then not only copied (Copy On Write), but the module also creates tied variables for the data, which takes even more time and is even slower. [3] C C threads share the same memory, each thread has its own stack, and the kernel does not have to create new file descriptors and allocate new memory space. C threads can genuinely take advantage of multiple CPUs. Therefore, to maximize the performance of your threads, use a low-level language like C or C++. If you use a scripting language, consider writing a C binding. Use profilers to identify poorly performing parts of your code. [4] [1] http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock [2] http://code.google.com/p/unladen-swallow/ [3] http://www.perlmonks.org/?node_id=288022 [4] http://people.redhat.com/drepper/lt2009.pdf | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/developer_tips |
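The appendix recommends profiling to find the problem areas it lists, but does not show how to start. The sketch below is one hedged way to look for excessive wake-ups and active polling from the shell; the tool options and the application name my_app are assumptions, not part of the original text.
# Show which processes cause the most wakeups from idle (run as root, interactive).
powertop
# Count polling-related system calls made by an application for 30 seconds; many
# short poll/select/nanosleep calls suggest active polling or short regular
# timeouts instead of reacting to events.
timeout 30 strace -c -f -e trace=poll,select,epoll_wait,nanosleep ./my_app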
2.6. Devices | 2.6. Devices SR-IOV on the be2net driver, BZ# 602451 The SR-IOV functionality of the Emulex be2net driver is considered a Technology Preview in Red Hat Enterprise Linux 6.3. You must meet the following requirements to use the latest version of SR-IOV support: You must run the latest Emulex firmware (revision 4.1.417.0 or later). The server system BIOS must support the SR-IOV functionality and have virtualization support for Direct I/O VT-d. You must use the GA version of Red Hat Enterprise Linux 6.3. SR-IOV runs on all Emulex-branded and OEM variants of BE3-based hardware, which all require the be2net driver software. Package: kernel-2.6.32-279 iSCSI and FCoE boot iSCSI and FCoE boot support on Broadcom devices is not included in Red Hat Enterprise Linux 6.3. These two features, which are provided by the bnx2i and bnx2fc Broadcom drivers, remain a Technology Preview until further notice. Package: kernel-2.6.32-279 mpt2sas lockless mode The mpt2sas driver is fully supported. However, when used in the lockless mode, the driver is a Technology Preview. Package: kernel-2.6.32-279 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/devices_tp |
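A hedged sketch for checking the SR-IOV prerequisites listed above on a running system. The interface name eth0 is a placeholder and the grep patterns may need adjusting for your hardware; this is not part of the original release note.
# Confirm the Emulex be2net firmware revision (must be 4.1.417.0 or later).
ethtool -i eth0 | grep firmware-version
# Confirm the kernel found an IOMMU, which indicates VT-d support is enabled in the BIOS.
dmesg | grep -i -e dmar -e iommu
# Confirm the be2net driver is loaded.
lsmod | grep be2net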
Chapter 24. ipaddr6 | Chapter 24. ipaddr6 The IPv6 address of the source server, if available. Can be an array. Data type ip | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/ipaddr6 |
Chapter 18. Requesting CRI-O and Kubelet profiling data by using the Node Observability Operator | Chapter 18. Requesting CRI-O and Kubelet profiling data by using the Node Observability Operator The Node Observability Operator collects and stores the CRI-O and Kubelet profiling data of worker nodes. You can query the profiling data to analyze the CRI-O and Kubelet performance trends and debug the performance-related issues. Important The Node Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 18.1. Workflow of the Node Observability Operator The following workflow outlines on how to query the profiling data using the Node Observability Operator: Install the Node Observability Operator in the OpenShift Container Platform cluster. Create a NodeObservability custom resource to enable the CRI-O profiling on the worker nodes of your choice. Run the profiling query to generate the profiling data. 18.2. Installing the Node Observability Operator The Node Observability Operator is not installed in OpenShift Container Platform by default. You can install the Node Observability Operator by using the OpenShift Container Platform CLI or the web console. 18.2.1. Installing the Node Observability Operator using the CLI You can install the Node Observability Operator by using the OpenShift CLI (oc). Prerequisites You have installed the OpenShift CLI (oc). You have access to the cluster with cluster-admin privileges. Procedure Confirm that the Node Observability Operator is available by running the following command: USD oc get packagemanifests -n openshift-marketplace node-observability-operator Example output NAME CATALOG AGE node-observability-operator Red Hat Operators 9h Create the node-observability-operator namespace by running the following command: USD oc new-project node-observability-operator Create an OperatorGroup object YAML file: cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF Create a Subscription object YAML file to subscribe a namespace to an Operator: cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Verification View the install plan name by running the following command: USD oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name' Example output install-dt54w Verify the install plan status by running the following command: USD oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase' <install_plan_name> is the install plan name that you obtained from the output of the command. 
Example output COMPLETE Verify that the Node Observability Operator is up and running: USD oc get deploy -n node-observability-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h 18.2.2. Installing the Node Observability Operator using the web console You can install the Node Observability Operator from the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. In the Administrator's navigation panel, expand Operators OperatorHub . In the All items field, enter Node Observability Operator and select the Node Observability Operator tile. Click Install . On the Install Operator page, configure the following settings: In the Update channel area, click alpha . In the Installation mode area, click A specific namespace on the cluster . From the Installed Namespace list, select node-observability-operator from the list. In the Update approval area, select Automatic . Click Install . Verification In the Administrator's navigation panel, expand Operators Installed Operators . Verify that the Node Observability Operator is listed in the Operators list. 18.3. Creating the Node Observability custom resource You must create and run the NodeObservability custom resource (CR) before you run the profiling query. When you run the NodeObservability CR, it creates the necessary machine config and machine config pool CRs to enable the CRI-O profiling on the worker nodes matching the nodeSelector . Important If CRI-O profiling is not enabled on the worker nodes, the NodeObservabilityMachineConfig resource gets created. Worker nodes matching the nodeSelector specified in NodeObservability CR restarts. This might take 10 or more minutes to complete. Note Kubelet profiling is enabled by default. The CRI-O unix socket of the node is mounted on the agent pod, which allows the agent to communicate with CRI-O to run the pprof request. Similarly, the kubelet-serving-ca certificate chain is mounted on the agent pod, which allows secure communication between the agent and node's kubelet endpoint. Prerequisites You have installed the Node Observability Operator. You have installed the OpenShift CLI (oc). You have access to the cluster with cluster-admin privileges. Procedure Log in to the OpenShift Container Platform CLI by running the following command: USD oc login -u kubeadmin https://<HOSTNAME>:6443 Switch back to the node-observability-operator namespace by running the following command: USD oc project node-observability-operator Create a CR file named nodeobservability.yaml that contains the following text: apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: crio-kubelet 1 You must specify the name as cluster because there should be only one NodeObservability CR per cluster. 2 Specify the nodes on which the Node Observability agent must be deployed. 
Run the NodeObservability CR: oc apply -f nodeobservability.yaml Example output nodeobservability.olm.openshift.io/cluster created Review the status of the NodeObservability CR by running the following command: USD oc get nob/cluster -o yaml | yq '.status.conditions' Example output conditions: conditions: - lastTransitionTime: "2022-07-05T07:33:54Z" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: "True" type: Ready NodeObservability CR run is completed when the reason is Ready and the status is True . 18.4. Running the profiling query To run the profiling query, you must create a NodeObservabilityRun resource. The profiling query is a blocking operation that fetches CRI-O and Kubelet profiling data for a duration of 30 seconds. After the profiling query is complete, you must retrieve the profiling data inside the container file system /run/node-observability directory. The lifetime of data is bound to the agent pod through the emptyDir volume, so you can access the profiling data while the agent pod is in the running status. Important You can request only one profiling query at any point of time. Prerequisites You have installed the Node Observability Operator. You have created the NodeObservability custom resource (CR). You have access to the cluster with cluster-admin privileges. Procedure Create a NodeObservabilityRun resource file named nodeobservabilityrun.yaml that contains the following text: apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster Trigger the profiling query by running the NodeObservabilityRun resource: USD oc apply -f nodeobservabilityrun.yaml Review the status of the NodeObservabilityRun by running the following command: USD oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq '.status.conditions' Example output conditions: - lastTransitionTime: "2022-07-07T14:57:34Z" message: Ready to start profiling reason: Ready status: "True" type: Ready - lastTransitionTime: "2022-07-07T14:58:10Z" message: Profiling query done reason: Finished status: "True" type: Finished The profiling query is complete once the status is True and type is Finished . Retrieve the profiling data from the container's /run/node-observability path by running the following bash script: for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo "agent USD{a}" mkdir -p "/tmp/USD{a}" for p in USD(oc exec "USD{a}" -c node-observability-agent -- bash -c "ls /run/node-observability/*.pprof"); do f="USD(basename USD{p})" echo "copying USD{f} to /tmp/USD{a}/USD{f}" oc exec "USD{a}" -c node-observability-agent -- cat "USD{p}" > "/tmp/USD{a}/USD{f}" done done | [
"oc get packagemanifests -n openshift-marketplace node-observability-operator",
"NAME CATALOG AGE node-observability-operator Red Hat Operators 9h",
"oc new-project node-observability-operator",
"cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF",
"cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name'",
"install-dt54w",
"oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase'",
"COMPLETE",
"oc get deploy -n node-observability-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h",
"oc login -u kubeadmin https://<HOSTNAME>:6443",
"oc project node-observability-operator",
"apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: crio-kubelet",
"apply -f nodeobservability.yaml",
"nodeobservability.olm.openshift.io/cluster created",
"oc get nob/cluster -o yaml | yq '.status.conditions'",
"conditions: conditions: - lastTransitionTime: \"2022-07-05T07:33:54Z\" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: \"True\" type: Ready",
"apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster",
"oc apply -f nodeobservabilityrun.yaml",
"oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq '.status.conditions'",
"conditions: - lastTransitionTime: \"2022-07-07T14:57:34Z\" message: Ready to start profiling reason: Ready status: \"True\" type: Ready - lastTransitionTime: \"2022-07-07T14:58:10Z\" message: Profiling query done reason: Finished status: \"True\" type: Finished",
"for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo \"agent USD{a}\" mkdir -p \"/tmp/USD{a}\" for p in USD(oc exec \"USD{a}\" -c node-observability-agent -- bash -c \"ls /run/node-observability/*.pprof\"); do f=\"USD(basename USD{p})\" echo \"copying USD{f} to /tmp/USD{a}/USD{f}\" oc exec \"USD{a}\" -c node-observability-agent -- cat \"USD{p}\" > \"/tmp/USD{a}/USD{f}\" done done"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/using-node-observability-operator |
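Once the script above has copied the profiles to /tmp/<agent_pod_name>, they can be inspected with any pprof-compatible viewer. The sketch below assumes a reasonably recent Go toolchain on the workstation and uses an illustrative file name; it is not part of the documented procedure.
# Print the top CPU consumers recorded in one of the retrieved profiles.
go tool pprof -top /tmp/<agent_pod_name>/<profile_name>.pprof
# Or browse the same profile interactively in a local web UI.
go tool pprof -http=:8080 /tmp/<agent_pod_name>/<profile_name>.pprof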
3.6. Deploy a VDB via AdminShell | 3.6. Deploy a VDB via AdminShell Prerequisites Red Hat JBoss Data Virtualization must be installed. The JBoss Enterprise Application Platform (EAP) server must be running. Procedure 3.4. Deploy a VDB via AdminShell Open the interactive AdminShell interface Run the ./adminshell.sh command. Open a connection Within the interactive AdminShell, run the connectAsAdmin() command. Deploy your virtual database Run the deploy(" PATH / DATABASE .vdb") command. Close the connection Run the disconnect() command. Exit the interactive shell Enter the exit command to leave the interactive shell. Note In domain mode, when deploying using AdminShell, the VDB is deployed to all available servers. A VDB can also be deployed via the AdminShell console or using a script via the non-interactive AdminShell. For more information on using these, refer to topics on the Administration Shell. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/deploy_a_vdb_via_adminshell1 |
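A sketch of the complete interactive session implied by the procedure above. Only the commands named in the procedure are used; the VDB path is a placeholder that you must replace with the actual location of your .vdb file.
# Start the interactive AdminShell from the JBoss Data Virtualization bin directory.
./adminshell.sh
# Inside the interactive shell:
connectAsAdmin()
deploy("/path/to/DATABASE.vdb")
disconnect()
exit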
Chapter 2. Administering hosts | Chapter 2. Administering hosts This chapter describes creating, registering, administering, and removing hosts. 2.1. Creating a host in Red Hat Satellite Use this procedure to create a host in Red Hat Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . On the Host tab, enter the required details. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove. On the Puppet Classes tab, select the Puppet classes you want to include. On the Interfaces tab: For each interface, click Edit in the Actions column and configure the following settings as required: Type - For a Bond or BMC interface, use the Type list and select the interface type. MAC address - Enter the MAC address. DNS name - Enter the DNS name that is known to the DNS server. This is used for the host part of the FQDN. Domain - Select the domain name of the provisioning network. This automatically updates the Subnet list with a selection of suitable subnets. IPv4 Subnet - Select an IPv4 subnet for the host from the list. IPv6 Subnet - Select an IPv6 subnet for the host from the list. IPv4 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. The address can be omitted if provisioning tokens are enabled, if the domain does not manage DNS, if the subnet does not manage reverse DNS, or if the subnet does not manage DHCP reservations. IPv6 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. Managed - Select this checkbox to configure the interface during provisioning to use the Capsule provided DHCP and DNS services. Primary - Select this checkbox to use the DNS name from this interface as the host portion of the FQDN. Provision - Select this checkbox to use this interface for provisioning. This means TFTP boot will take place using this interface, or in case of image based provisioning, the script to complete the provisioning will be executed through this interface. Note that many provisioning tasks, such as downloading packages by anaconda or Puppet setup in a %post script, will use the primary interface. Virtual NIC - Select this checkbox if this interface is not a physical device. This setting has two options: Tag - Optionally set a VLAN tag. If unset, the tag will be the VLAN ID of the subnet. Attached to - Enter the device name of the interface this virtual interface is attached to. Click OK to save the interface configuration. Optionally, click Add Interface to include an additional network interface. For more information, see Chapter 5, Adding network interfaces . Click Submit to apply the changes and exit. On the Operating System tab, enter the required details. For Red Hat operating systems, select Synced Content for Media Selection . If you want to use non Red Hat operating systems, select All Media , then select the installation media from the Media Selection list. You can select a partition table from the list or enter a custom partition table in the Custom partition table field. You cannot specify both. On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. 
This includes all Puppet Class, Ansible Playbook parameters and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter . When you create a host, you can set system purpose attributes. System purpose attributes help determine which repositories are available on the host. System purpose attributes also help with reporting in the Subscriptions service of the Red Hat Hybrid Cloud Console. In the Host Parameters area, enter the following parameter names with the corresponding values. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . syspurpose_role syspurpose_sla syspurpose_usage syspurpose_addons If you want to create a host with pull mode for remote job execution, add the enable-remote-execution-pull parameter with type boolean set to true . For more information, see Section 13.4, "Transport modes for remote execution" . On the Additional Information tab, enter additional information about the host. Click Submit to complete your provisioning request. CLI procedure To create a host associated to a host group, enter the following command: This command prompts you to specify the root password. It is required to specify the host's IP and MAC address. Other properties of the primary network interface can be inherited from the host group or set using the --subnet , and --domain parameters. You can set additional interfaces using the --interface option, which accepts a list of key-value pairs. For the list of available interface settings, enter the hammer host create --help command. 2.2. Cloning hosts You can clone existing hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . In the Actions menu, click Clone . On the Host tab, ensure to provide a Name different from the original host. On the Interfaces tab, ensure to provide a different IP address. Click Submit to clone the host. For more information, see Section 2.1, "Creating a host in Red Hat Satellite" . 2.3. Associating a virtual machine with Satellite from a hypervisor Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select a compute resource. On the Virtual Machines tab, click Associate VMs to associate all VMs or select Associate VM from the Actions menu to associate a single VM. 2.4. Editing the system purpose of a host You can edit the system purpose attributes for a Red Hat Enterprise Linux host. System purpose allows you to set the intended use of a system on your network and improves reporting accuracy in the Subscriptions service of the Red Hat Hybrid Cloud Console. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . Prerequisites The host that you want to edit must be registered with the subscription-manager. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Overview tab, click Edit on the System purpose card. Select the system purpose attributes for your host. Click Save . CLI procedure Log in to the host and edit the required system purpose attributes. For example, set the usage type to Production , the role to Red Hat Enterprise Linux Server , and add the addon add on. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . 
Verify the system purpose attributes for this host: 2.5. Editing the system purpose of multiple hosts You can edit the system purpose attributes of Red Hat Enterprise Linux hosts. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . Prerequisites The hosts that you want to edit must be registered with the subscription-manager. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select Red Hat Enterprise Linux 8 hosts that you want to edit. Click the Select Action list and select Manage System Purpose . Select the system purpose attributes that you want to assign to the selected hosts. You can select one of the following values: A specific attribute to set an all selected hosts. No Change to keep the attribute set on the selected hosts. None (Clear) to clear the attribute on the selected hosts. Click Assign . 2.6. Changing a module stream for a host If you have a host running Red Hat Enterprise Linux 8, you can modify the module stream for the repositories you install. You can enable, disable, install, update, and remove module streams from your host in the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click the Content tab, then click the Module streams tab. Click the vertical ellipsis to the module and select the action you want to perform. You get a REX job notification once the remote execution job is complete. 2.7. Enabling custom repositories on content hosts You can enable all custom repositories on content hosts using the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select a host. Select the Content tab, then select Repository sets . From the dropdown, you can filter the Repository type column to Custom . Select the desired number of repositories or click the Select All checkbox to select all repositories, then click the vertical ellipsis, and select Override to Enabled . 2.8. Changing the content source of a host A content source is a Capsule that a host consumes content from. Use this procedure to change the content source for a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click the vertical ellipsis icon to the Edit button and select Change content source . Select Content Source , Lifecycle Content View , and Content Source from the lists. Click Change content source . Note Some lifecycle environments can be unavailable for selection if they are not synced on the selected content source. For more information, see Adding lifecycle environments to Capsule Servers in Managing content . You can either complete the content source change using remote execution or manually. To update configuration on host using remote execution, click Run job invocation . For more information about running remote execution jobs, see Configuring and setting up remote jobs in Managing hosts . To update the content source manually, execute the autogenerated commands from Change content source on the host. 2.9. Changing the environment of a host Use this procedure to change the environment of a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Content view environment card, click the options icon and select Edit content view environments . Select the environment. Select the content view. Click Save . 2.10. 
Changing the managed status of a host Hosts provisioned by Satellite are Managed by default. When a host is set to Managed, you can configure additional host parameters from Satellite Server. These additional parameters are listed on the Operating System tab. If you change any settings on the Operating System tab, they will not take effect until you set the host to build and reboot it. If you need to obtain reports about configuration management on systems using an operating system not supported by Satellite, set the host to Unmanaged. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click Edit . Click Manage host or Unmanage host to change the host's status. Click Submit . 2.11. Enabling Tracer on a host Use this procedure to enable Tracer on Satellite and access Traces. Tracer displays a list of services and applications that need to be restarted. Traces is the output generated by Tracer in the Satellite web UI. Prerequisites Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Remote execution is enabled. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Traces tab, click Enable Traces . Select the provider to install katello-host-tools-tracer from the list. Click Enable Tracer . You get a REX job notification after the remote execution job is complete. 2.12. Restarting applications on a host Use this procedure to restart applications from the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the hosts you want to modify. Select the Traces tab. Select applications that you want to restart. Select Restart via remote execution from the Restart app list. You will get a REX job notification once the remote execution job is complete. 2.13. Assigning a host to a specific organization Use this procedure to assign a host to a specific organization. For general information about organizations and how to configure them, see Managing Organizations in Administering Red Hat Satellite . Note If your host is already registered with a different organization, you must first unregister the host before assigning it to a new organization. To unregister the host, run subscription-manager unregister on the host. After you assign the host to a new organization, you can re-register the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host you want to change. From the Select Action list, select Assign Organization . A new option window opens. From the Select Organization list, select the organization that you want to assign your host to. Select the checkbox Fix Organization on Mismatch . Note A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the organization you want to assign the host to. The option Fix Organization on Mismatch will add such a resource to the organization, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one organization to another will fail, even if there is no actual mismatch in settings. Click Submit . 2.14. 
Assigning a host to a specific location Use this procedure to assign a host to a specific location. For general information about locations and how to configure them, see Creating a Location in Managing content . Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host you want to change. From the Select Action list, select Assign Location . A new option window opens. Navigate to the Select Location list and choose the location that you want for your host. Select the checkbox Fix Location on Mismatch . Note A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the location you want to assign the host to. The option Fix Location on Mismatch will add such a resource to the location, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one location to another will fail, even if there is no actual mismatch in settings. Click Submit . 2.15. Switching between hosts When you are on a particular host in the Satellite web UI, you can navigate between hosts without leaving the page by using the host switcher. Click ⇄ to the hostname. This displays a list of hosts in alphabetical order with a pagination arrow and a search bar to find the host you are looking for. 2.16. Viewing host details from a content host Use this procedure to view the host details page from a content host. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts Click the content host you want to view. Select the Details tab to see the host details page. The cards in the Details tab show details for the System properties , BIOS , Networking interfaces , Operating system , Provisioning templates , and Provisioning . Registered content hosts show additional cards for Registration details , Installed products , and HW properties providing information about Model , Number of CPU(s) , Sockets , Cores per socket , and RAM . 2.17. Selecting host columns You can select what columns you want to see in the host table on the Hosts > All Hosts page. Note It is not possible to deselect the Name column. The Name column serves as a primary identification method of the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Manage columns . Select columns that you want to display. You can select individual columns or column categories. Selecting or deselecting a category selects or deselects all columns in that category. Note Some columns are included in more than one category, but you can display a column of a specific type only once. By selecting or deselecting a specific column, you select or deselect all instances of that column. Verification You can now see the selected columns in the host table. 2.18. Removing a host from Satellite Use this procedure to remove a host from Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All Hosts or Hosts > Content Hosts . Note that there is no difference from what page you remove a host, from All Hosts or Content Hosts . In both cases, Satellite removes a host completely. Select the hosts that you want to remove. From the Select Action list, select Delete Hosts . Click Submit to remove the host from Satellite permanently. Warning By default, the Destroy associated VM on host delete setting is set to no . 
If a host record that is associated with a virtual machine is deleted, the virtual machine will remain on the compute resource. To delete a virtual machine on the compute resource, navigate to Administer > Settings and select the Provisioning tab. Setting Destroy associated VM on host delete to yes deletes the virtual machine if the host record that is associated with the virtual machine is deleted. To avoid deleting the virtual machine in this situation, disassociate the virtual machine from Satellite without removing it from the compute resource or change the setting. CLI procedure Delete your host from Satellite: Alternatively, you can use --name My_Host_Name instead of --id My_Host_ID . 2.18.1. Disassociating a virtual machine from Satellite without removing it from a hypervisor Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox to the left of the hosts that you want to disassociate. From the Select Action list, click Disassociate Hosts . Optional: Select the checkbox to keep the hosts for future action. Click Submit . 2.19. Lifecycle status of RHEL hosts Satellite provides multiple mechanisms to display information about upcoming End of Support (EOS) events for your Red Hat Enterprise Linux hosts: Notification banner A column on the Hosts index page Alert on the Hosts index page for each host that runs Red Hat Enterprise Linux with an upcoming EOS event in a year as well as when support has ended Ability to Search for hosts by EOS on the Hosts index page Host status card on the host details page For any hosts that are not running Red Hat Enterprise Linux, Satellite displays Unknown in the RHEL Lifecycle status and Last report columns. EOS notification banner When either the end of maintenance support or the end of extended lifecycle support approaches in a year, you will see a notification banner in the Satellite web UI if you have hosts with that Red Hat Enterprise Linux version. The notification provides information about the Red Hat Enterprise Linux version, the number of hosts running that version in your environment, the lifecycle support, and the expiration date. Along with other information, the Red Hat Enterprise Linux lifecycle column is visible in the notification. 2.19.1. Displaying RHEL lifecycle status You can display the status of the end of support (EOS) for your Red Hat Enterprise Linux hosts in the table on the Hosts index page. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Manage columns . Select the Content column to expand it. Select RHEL Lifecycle status . Click Save to generate a new column that displays the Red Hat Enterprise Linux lifecycle status. 2.19.2. Host search by RHEL lifecycle status You can use the Search field to search hosts by rhel_lifecycle_status . It can have one of the following values: full_support maintenance_support approaching_end_of_maintenance extended_support approaching_end_of_support support_ended | [
"hammer host create --ask-root-password yes --hostgroup \" My_Host_Group \" --interface=\"primary=true, provision=true, mac= My_MAC_Address , ip= My_IP_Address \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \"",
"subscription-manager syspurpose set usage ' Production ' subscription-manager syspurpose set role ' Red Hat Enterprise Linux Server ' subscription-manager syspurpose add addons ' your_addon '",
"subscription-manager syspurpose",
"hammer host delete --id My_Host_ID --location-id My_Location_ID --organization-id My_Organization_ID"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/administering_hosts_managing-hosts |
function::ns_tid | function::ns_tid Name function::ns_tid - Returns the thread ID of a target process as seen in a pid namespace Synopsis Arguments None Description This function returns the thread ID of a target process as seen in the target pid namespace if provided, or the stap process namespace. | [
"ns_tid:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ns-tid |
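A minimal usage sketch that is not part of the tapset reference itself: it prints the namespace-visible thread ID next to the host-visible one for every getpid() call on the system. The probe point and output format are illustrative.
# Run as root; press Ctrl-C to stop.
stap -e 'probe syscall.getpid { printf("%s host_tid=%d ns_tid=%d\n", execname(), tid(), ns_tid()) }'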
Chapter 28. Ref | Chapter 28. Ref Overview The Ref expression language is really just a way to look up a custom Expression from the Registry . This is particular convenient to use in the XML DSL. The Ref language is part of camel-core . Static import To use the Ref language in your Java application code, include the following import statement in your Java source files: XML example For example, the splitter pattern can reference a custom expression using the Ref language, as follows: Java example The preceding route can also be implemented in the Java DSL, as follows: | [
"import static org.apache.camel.language.ref.RefLanguage.ref;",
"<beans ...> <bean id=\" myExpression \" class=\"com.mycompany.MyCustomExpression\"/> <camelContext> <route> <from uri=\"seda:a\"/> <split> <ref> myExpression </ref> <to uri=\"mock:b\"/> </split> </route> </camelContext> </beans>",
"from(\"seda:a\") .split().ref(\"myExpression\") .to(\"seda:b\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/Ref |
Chapter 15. ReplicationController [v1] | Chapter 15. ReplicationController [v1] Description ReplicationController represents the configuration of a replication controller. Type object 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ReplicationControllerSpec is the specification of a replication controller. status object ReplicationControllerStatus represents the current status of a replication controller. 15.1.1. .spec Description ReplicationControllerSpec is the specification of a replication controller. Type object Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller selector object (string) Selector is a label query over pods that should match the Replicas count. If Selector is empty, it is defaulted to the labels present on the Pod template. Label keys and values that must match in order to be controlled by this replication controller, if empty defaulted to labels on Pod template. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template object PodTemplateSpec describes the data a pod should have when created from a template 15.1.2. .spec.template Description PodTemplateSpec describes the data a pod should have when created from a template Type object Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PodSpec is a description of a pod. 15.1.3. .spec.template.spec Description PodSpec is a description of a pod. Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object Affinity is a group of affinity scheduling rules. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. 
There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. Possible enum values: - "ClusterFirst" indicates that the pod should use cluster DNS first unless hostNetwork is true, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "ClusterFirstWithHostNet" indicates that the pod should use cluster DNS first, if it is available, then fall back on the default (as determined by kubelet) DNS settings. - "Default" indicates that the pod should use the default (as determined by kubelet) DNS settings. - "None" indicates that the pod should use empty DNS settings. DNS parameters such as nameservers and search paths should be defined via DNSConfig. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. 
Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object PodOS defines the OS parameters of a pod. overhead object (Quantity) Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. priority integer The priority value. 
Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True". More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition resourceClaims array ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. resourceClaims[] object PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy Possible enum values: - "Always" - "Never" - "OnFailure" runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by the specified scheduler. If not specified, the pod will be dispatched by the default scheduler. schedulingGates array SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. schedulingGates[] object PodSchedulingGate is associated to a Pod to guard its scheduling. securityContext object PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. serviceAccount string DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true, the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set, containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod.
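For illustration only, the following minimal sketch shows how several of the pod-level fields in this table fit together in a pod template. The concrete values (the pull secret registry-credentials, the priority class high-priority, the toleration, and the image) are hypothetical and are not defined by this API reference:

    spec:
      template:
        spec:
          dnsPolicy: ClusterFirst
          imagePullSecrets:
          - name: registry-credentials      # hypothetical pull secret in the same namespace
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: high-priority  # hypothetical PriorityClass
          restartPolicy: Never
          terminationGracePeriodSeconds: 30
          tolerations:
          - key: "dedicated"                # hypothetical taint key
            operator: "Equal"
            value: "batch"
            effect: "NoSchedule"
          containers:
          - name: main
            image: registry.example.com/app:latest   # hypothetical image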
15.1.4. .spec.template.spec.affinity Description Affinity is a group of affinity scheduling rules. Type object Property Type Description nodeAffinity object Node affinity is a group of node affinity scheduling rules. podAffinity object Pod affinity is a group of inter pod affinity scheduling rules. podAntiAffinity object Pod anti affinity is a group of inter pod anti affinity scheduling rules. 15.1.5. .spec.template.spec.affinity.nodeAffinity Description Node affinity is a group of node affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions.
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 15.1.6. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 15.1.7. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required weight preference Property Type Description preference object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 15.1.8. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 15.1.9. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 15.1.10. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.11. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 15.1.12. .spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.13. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 15.1.14. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 15.1.15. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
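The node affinity fields described in the preceding sections can be combined as shown in the following illustrative sketch. The label key disktype, its value ssd, and the zone value are hypothetical examples, not values required by the API:

    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: disktype              # hypothetical node label
                    operator: In
                    values:
                    - ssd
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 50
                preference:
                  matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - us-east-1a               # hypothetical zone value

In this sketch the required term must match for the pod to be scheduled at all, while the preferred term only adds a weight of 50 to nodes in the example zone.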
15.1.16. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 15.1.17. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.18. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 15.1.19. .spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 15.1.20. .spec.template.spec.affinity.podAffinity Description Pod affinity is a group of inter pod affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node.
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 15.1.21. .spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 15.1.22. .spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 15.1.23. .spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 15.1.24. .spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 15.1.25. .spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 15.1.26. .spec.template.spec.affinity.podAntiAffinity Description Pod anti affinity is a group of inter pod anti affinity scheduling rules. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 15.1.27. .spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 15.1.28. .spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required weight podAffinityTerm Property Type Description podAffinityTerm object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 15.1.29. .spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. 
null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 15.1.30. .spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 15.1.31. .spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector LabelSelector A label query over a set of resources, in this case pods. namespaceSelector LabelSelector A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
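As an illustrative sketch of the pod affinity and pod anti-affinity terms described above, the following fragment co-locates the pod on the same node as pods labeled app: cache and prefers zones that do not already run pods labeled app: web. Both label values and the topology keys chosen here are examples, not values mandated by this API:

    spec:
      template:
        spec:
          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: cache                  # hypothetical label on the pods to co-locate with
                topologyKey: kubernetes.io/hostname
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app: web                  # hypothetical label on this workload's own pods
                  topologyKey: topology.kubernetes.io/zone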
15.1.32. .spec.template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 15.1.33. .spec.template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container.
Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
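To show how the container fields in this table fit together, here is a minimal sketch of a single container entry. The image, environment variable, port number, probe endpoint, and resource values are hypothetical:

    spec:
      template:
        spec:
          containers:
          - name: web
            image: registry.example.com/web:1.0   # hypothetical image
            imagePullPolicy: IfNotPresent
            env:
            - name: LOG_LEVEL                     # hypothetical variable
              value: info
            ports:
            - containerPort: 8080
              protocol: TCP
            readinessProbe:
              httpGet:
                path: /healthz                    # hypothetical endpoint
                port: 8080
              periodSeconds: 10
              failureThreshold: 3
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 500m
                memory: 256Mi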
15.1.34. .spec.template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 15.1.35. .spec.template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 15.1.36. .spec.template.spec.containers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 15.1.37. .spec.template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 15.1.38. .spec.template.spec.containers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.39. .spec.template.spec.containers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.40. .spec.template.spec.containers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 15.1.41. .spec.template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 15.1.42. .spec.template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 15.1.43. .spec.template.spec.containers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 15.1.44. .spec.template.spec.containers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 15.1.45. .spec.template.spec.containers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 15.1.46. .spec.template.spec.containers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.47. 
.spec.template.spec.containers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.48. .spec.template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.49. .spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.50. .spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.51. .spec.template.spec.containers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.52. .spec.template.spec.containers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.53. .spec.template.spec.containers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.54. 
.spec.template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.55. .spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.56. .spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.57. .spec.template.spec.containers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.58. .spec.template.spec.containers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. 
Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.59. .spec.template.spec.containers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.60. .spec.template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.61. .spec.template.spec.containers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.62. .spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.63. .spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.64. .spec.template.spec.containers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.65. .spec.template.spec.containers[].ports Description List of ports to expose from the container. 
Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 15.1.66. .spec.template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 15.1.67. .spec.template.spec.containers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.68. 
.spec.template.spec.containers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.69. .spec.template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.70. .spec.template.spec.containers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.71. .spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.72. .spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.73. .spec.template.spec.containers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.74. .spec.template.spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 15.1.75. .spec.template.spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 15.1.76. 
.spec.template.spec.containers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.77. .spec.template.spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 15.1.78. .spec.template.spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 15.1.79. .spec.template.spec.containers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc for the container stays intact with no modifications.
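As a hedged sketch only, the resources and securityContext fields described in these sections might be combined as follows; every name and value is a placeholder, not a recommended default:

spec:
  template:
    spec:
      containers:
      - name: app                              # placeholder container name
        image: registry.example.com/app:1.0    # placeholder image
        resources:
          requests:                            # minimum compute reserved for the container
            cpu: 100m
            memory: 128Mi
          limits:                              # maximum compute the container may consume
            cpu: 500m
            memory: 256Mi
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL                              # drop all POSIX capabilities
          seccompProfile:
            type: RuntimeDefault               # use the container runtime's default seccomp profile

Because requests cannot exceed limits, each request above is kept at or below its corresponding limit.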
readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 15.1.80. .spec.template.spec.containers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 15.1.81. .spec.template.spec.containers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 15.1.82. .spec.template.spec.containers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.83. 
.spec.template.spec.containers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.84. .spec.template.spec.containers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.85. .spec.template.spec.containers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.86. .spec.template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.87. .spec.template.spec.containers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.88. .spec.template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.89. .spec.template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.90. .spec.template.spec.containers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.91. .spec.template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 15.1.92. .spec.template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 15.1.93. 
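For illustration only, a hypothetical pairing of the volumeDevices fields above with the volumeMounts fields described next; the volume, claim, and path names are placeholders:

spec:
  template:
    spec:
      volumes:
      - name: data                         # placeholder filesystem-backed volume
        persistentVolumeClaim:
          claimName: data-pvc              # placeholder claim
      - name: raw-device                   # placeholder claim created with volumeMode: Block
        persistentVolumeClaim:
          claimName: raw-pvc
      containers:
      - name: app
        image: registry.example.com/app:1.0
        volumeMounts:
        - name: data                       # must match a volume name in the pod
          mountPath: /var/lib/app          # path inside the container; must not contain ':'
          readOnly: false
        volumeDevices:
        - name: raw-device                 # must match the block-mode volume above
          devicePath: /dev/xvda            # device node exposed inside the container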
.spec.template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 15.1.94. .spec.template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 15.1.95. .spec.template.spec.dnsConfig Description PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 15.1.96. .spec.template.spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 15.1.97. .spec.template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 15.1.98. 
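As an illustrative sketch of the dnsConfig fields described above, with a placeholder nameserver address and search domain:

spec:
  template:
    spec:
      dnsPolicy: None                      # dnsConfig is typically combined with dnsPolicy: None
      dnsConfig:
        nameservers:
        - 192.0.2.10                       # placeholder DNS server address
        searches:
        - example.internal                 # placeholder search domain
        options:
        - name: ndots
          value: "2"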
.spec.template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 15.1.99. .spec.template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. A double USD is reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. A double USD is reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image.
Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.
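For illustration only, an ephemeral container is normally added with a command such as kubectl debug -it <pod> --image=<image> --target=<container> rather than by editing the workload template; the entry written through the pod's ephemeralcontainers subresource might then look roughly like this sketch, where the container, target, and image names are placeholders:

spec:
  ephemeralContainers:                     # managed through the pod's ephemeralcontainers subresource
  - name: debugger                         # placeholder name; must be unique among all containers
    image: registry.example.com/tools:latest   # placeholder debugging image
    targetContainerName: app               # placeholder target; shares that container's namespaces
    stdin: true
    tty: true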
terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 15.1.100. .spec.template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 15.1.101. .spec.template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. A double USD is reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 15.1.102. .spec.template.spec.ephemeralContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 15.1.103. .spec.template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 15.1.104.
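As a sketch of the EnvVar sources described in these sections, a hypothetical env list combining a literal value with the configMapKeyRef, secretKeyRef, and fieldRef selectors; all ConfigMap, Secret, and key names are placeholders:

env:
- name: LOG_LEVEL
  value: debug                             # literal value
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: app-secrets                    # placeholder Secret name
      key: db-password
      optional: false                      # fail if the Secret or key is missing
- name: FEATURE_FLAGS
  valueFrom:
    configMapKeyRef:
      name: app-config                     # placeholder ConfigMap name
      key: feature-flags
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name             # downward API field of the pod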
.spec.template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.105. .spec.template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.106. .spec.template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 15.1.107. .spec.template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 15.1.108. .spec.template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 15.1.109. .spec.template.spec.ephemeralContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 15.1.110. .spec.template.spec.ephemeralContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 15.1.111. 
.spec.template.spec.ephemeralContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 15.1.112. .spec.template.spec.ephemeralContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.113. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.114. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.115. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.116. .spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.117. 
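The LifecycleHandler schema documented here is the same one used for regular containers; as a sketch with placeholder command and endpoint values, a container-level lifecycle block might look like:

lifecycle:
  postStart:
    exec:
      command:                             # exec is not run in a shell, so the shell is invoked explicitly
      - /bin/sh
      - -c
      - echo started > /tmp/started
  preStop:
    httpGet:
      path: /shutdown                      # placeholder endpoint
      port: 8080
      scheme: HTTP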
.spec.template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.118. .spec.template.spec.ephemeralContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.119. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.120. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.121. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.122. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.123. .spec.template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.124. 
.spec.template.spec.ephemeralContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.125. .spec.template.spec.ephemeralContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.126. .spec.template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.127. .spec.template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.128. .spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.129. .spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.130. .spec.template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.131. .spec.template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 15.1.132. .spec.template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 15.1.133. .spec.template.spec.ephemeralContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.134. .spec.template.spec.ephemeralContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.135. .spec.template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.136. .spec.template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.137. 
.spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.138. .spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.139. .spec.template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.140. .spec.template.spec.ephemeralContainers[].resizePolicy Description Resources resize policy for the container. Type array 15.1.141. .spec.template.spec.ephemeralContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 15.1.142. .spec.template.spec.ephemeralContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.143. .spec.template.spec.ephemeralContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 15.1.144. .spec.template.spec.ephemeralContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 15.1.145. .spec.template.spec.ephemeralContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. 
Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 15.1.146. .spec.template.spec.ephemeralContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 15.1.147. 
.spec.template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 15.1.148. .spec.template.spec.ephemeralContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.149. .spec.template.spec.ephemeralContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.150. .spec.template.spec.ephemeralContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. 
httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.151. .spec.template.spec.ephemeralContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.152. .spec.template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.153. .spec.template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 
Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.154. .spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.155. .spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.156. .spec.template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.157. .spec.template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 15.1.158. .spec.template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 15.1.159. .spec.template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 15.1.160. .spec.template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. 
subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 15.1.161. .spec.template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 15.1.162. .spec.template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 15.1.163. .spec.template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 15.1.164. .spec.template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.165. .spec.template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 15.1.166. .spec.template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array.
Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail if the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present. lifecycle object Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. livenessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. resizePolicy array Resources resize policy for the container.
resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object ResourceRequirements describes the compute resource requirements. securityContext object SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. startupProbe object Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 15.1.167. .spec.template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array
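To make the initContainers fields above concrete, the fragment below sketches a pod template with one init container that waits for a dependency before the application container starts. The names, image, command, and resource figures are illustrative assumptions only.

spec:
  template:
    spec:
      initContainers:
      - name: wait-for-db                               # must be a DNS_LABEL, unique within the pod
        image: registry.example.com/busybox:latest      # placeholder image
        command: ["sh", "-c"]                           # overrides the image ENTRYPOINT
        args: ["until nc -z db 5432; do sleep 2; done"] # script passed to sh -c; retries until the db service answers
        resources:
          requests:
            cpu: 50m                                    # the highest init request/limit counts toward scheduling
            memory: 64Mi
      containers:
      - name: app                                       # the regular application container
        image: registry.example.com/app:latest          # placeholder image

15.1.168.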
.spec.template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object EnvVarSource represents a source for the value of an EnvVar. 15.1.169. .spec.template.spec.initContainers[].env[].valueFrom Description EnvVarSource represents a source for the value of an EnvVar. Type object Property Type Description configMapKeyRef object Selects a key from a ConfigMap. fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format secretKeyRef object SecretKeySelector selects a key of a Secret. 15.1.170. .spec.template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key from a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 15.1.171. .spec.template.spec.initContainers[].env[].valueFrom.fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.172. .spec.template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.173. .spec.template.spec.initContainers[].env[].valueFrom.secretKeyRef Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 15.1.174. .spec.template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array
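The env and valueFrom sources documented above are easiest to compare side by side in a manifest. The sketch below assumes placeholder names (container, image, Secret, and keys) and shows a literal value, a secretKeyRef, a downward-API fieldRef, and a resourceFieldRef.

spec:
  template:
    spec:
      initContainers:
      - name: init-env                               # placeholder init container
        image: registry.example.com/tools:latest     # placeholder image
        command: ["sh", "-c", "env"]                 # simply prints the resolved environment
        env:
        - name: LOG_LEVEL                            # must be a C_IDENTIFIER
          value: debug                               # plain literal value
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:                            # selects a key of a Secret in the same namespace
              name: app-secrets                      # placeholder Secret name
              key: password
              optional: false                        # the Secret and key must exist
        - name: POD_NAME
          valueFrom:
            fieldRef:                                # selects a field of the Pod object
              fieldPath: metadata.name
        - name: CPU_REQUEST
          valueFrom:
            resourceFieldRef:                        # exposes a container resource value
              resource: requests.cpu
              divisor: "1"                           # output format of the exposed resource

15.1.175.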
.spec.template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. 15.1.176. .spec.template.spec.initContainers[].envFrom[].configMapRef Description ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 15.1.177. .spec.template.spec.initContainers[].envFrom[].secretRef Description SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 15.1.178. .spec.template.spec.initContainers[].lifecycle Description Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted. Type object Property Type Description postStart object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. preStop object LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. 15.1.179. .spec.template.spec.initContainers[].lifecycle.postStart Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.180. .spec.template.spec.initContainers[].lifecycle.postStart.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.181. 
.spec.template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.182. .spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.183. .spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.184. .spec.template.spec.initContainers[].lifecycle.postStart.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.185. .spec.template.spec.initContainers[].lifecycle.preStop Description LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket must be specified. Type object Property Type Description exec object ExecAction describes a "run in container" action. httpGet object HTTPGetAction describes an action based on HTTP Get requests. tcpSocket object TCPSocketAction describes an action based on opening a socket 15.1.186. .spec.template.spec.initContainers[].lifecycle.preStop.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.187. .spec.template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. 
Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.188. .spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.189. .spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.190. .spec.template.spec.initContainers[].lifecycle.preStop.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.191. .spec.template.spec.initContainers[].livenessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
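The Probe object documented in the preceding table is the same one used by regular containers, where liveness probes are most commonly set (as noted earlier, plain init containers may not carry probes). A minimal httpGet sketch follows; the container name, image, path, port, and header are placeholder assumptions rather than required values.

spec:
  template:
    spec:
      containers:
      - name: app                              # placeholder container
        image: registry.example.com/app:latest # placeholder image
        livenessProbe:
          httpGet:
            path: /healthz                     # path on the HTTP server
            port: 8080                         # 1-65535, or an IANA_SVC_NAME
            scheme: HTTP                       # default scheme
            httpHeaders:
            - name: X-Probe                    # custom header; repeated headers are allowed
              value: liveness
          initialDelaySeconds: 10              # wait before the first probe
          periodSeconds: 10                    # default probe interval
          failureThreshold: 3                  # default: 3 consecutive failures mark the container unhealthy
          timeoutSeconds: 1                    # default per-probe timeout

15.1.192.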
.spec.template.spec.initContainers[].livenessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.193. .spec.template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.194. .spec.template.spec.initContainers[].livenessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.195. .spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.196. .spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.197. .spec.template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.198. .spec.template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 15.1.199. .spec.template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. 
Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 15.1.200. .spec.template.spec.initContainers[].readinessProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.201. .spec.template.spec.initContainers[].readinessProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. 
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.202. .spec.template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.203. .spec.template.spec.initContainers[].readinessProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.204. .spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.205. .spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.206. .spec.template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.207. .spec.template.spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 15.1.208. .spec.template.spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 15.1.209. .spec.template.spec.initContainers[].resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. 
limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.210. .spec.template.spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 15.1.211. .spec.template.spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 15.1.212. .spec.template.spec.initContainers[].securityContext Description SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object Adds and removes POSIX capabilities from running containers. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Default" uses the container runtime defaults for readonly and masked paths for /proc. Most container runtimes mask certain paths in /proc to avoid accidental security exposure of special devices or information. - "Unmasked" bypasses the default masking behavior of the container runtime and ensures the newly created /proc the container stays in tact with no modifications. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. 
runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 15.1.213. .spec.template.spec.initContainers[].securityContext.capabilities Description Adds and removes POSIX capabilities from running containers. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 15.1.214. .spec.template.spec.initContainers[].securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 15.1.215. .spec.template.spec.initContainers[].securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.216. .spec.template.spec.initContainers[].securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. 
hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.217. .spec.template.spec.initContainers[].startupProbe Description Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. Type object Property Type Description exec object ExecAction describes a "run in container" action. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGetAction describes an action based on HTTP Get requests. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocketAction describes an action based on opening a socket terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 15.1.218. .spec.template.spec.initContainers[].startupProbe.exec Description ExecAction describes a "run in container" action. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. 
To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 15.1.219. .spec.template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 15.1.220. .spec.template.spec.initContainers[].startupProbe.httpGet Description HTTPGetAction describes an action based on HTTP Get requests. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port IntOrString Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https:// 15.1.221. .spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 15.1.222. .spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 15.1.223. .spec.template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocketAction describes an action based on opening a socket Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port IntOrString Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 15.1.224. .spec.template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 15.1.225. .spec.template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required name devicePath Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 15.1.226. .spec.template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 15.1.227. .spec.template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required name mountPath Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. 
mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. Possible enum values: - "Bidirectional" means that the volume in a container will receive new mounts from the host or other containers, and its own mounts will be propagated from the container to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rshared" in Linux terminology). - "HostToContainer" means that the volume in a container will receive new mounts from the host or other containers, but filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode is recursively applied to all mounts in the volume ("rslave" in Linux terminology). - "None" means that the volume in a container will not receive new mounts from the host or other containers, and filesystems mounted inside the container won't be propagated to the host or other containers. Note that this mode corresponds to "private" in Linux terminology. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 15.1.228. .spec.template.spec.os Description PodOS defines the OS parameters of a pod. Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 15.1.229. .spec.template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 15.1.230. .spec.template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 15.1.231. .spec.template.spec.resourceClaims Description ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 15.1.232. .spec.template.spec.resourceClaims[] Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. 
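For illustration only, a minimal sketch of how a pod-level resource claim is declared and then consumed by a container; the claim name shared-claim, the template name shared-claim-template, and the image are hypothetical, and the cluster must have the alpha DynamicResourceAllocation feature gate enabled for these fields to be honored:

spec:
  template:
    spec:
      resourceClaims:
      - name: shared-claim                                    # must be a DNS_LABEL, unique within the pod
        source:
          resourceClaimTemplateName: shared-claim-template    # hypothetical ResourceClaimTemplate in the same namespace
      containers:
      - name: app
        image: registry.example.com/app:latest                # hypothetical image
        resources:
          claims:
          - name: shared-claim                                # must match an entry in spec.resourceClaims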
Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source object ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. 15.1.233. .spec.template.spec.resourceClaims[].source Description ClaimSource describes a reference to a ResourceClaim. Exactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value. Type object Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long). An existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 15.1.234. .spec.template.spec.schedulingGates Description SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. Type array 15.1.235. .spec.template.spec.schedulingGates[] Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Property Type Description name string Name of the scheduling gate. Each scheduling gate must have a unique name field. 15.1.236. .spec.template.spec.securityContext Description PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. 
Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. Possible enum values: - "Always" indicates that volume's ownership and permissions should always be changed whenever volume is mounted inside a Pod. This the default behavior. - "OnRootMismatch" indicates that volume's ownership and permissions will be changed only when permission and ownership of root directory does not match with expected permissions on the volume. This can help shorten the time it takes to change ownership and permissions of a volume. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object SELinuxOptions are the labels to be applied to the container seccompProfile object SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object WindowsSecurityContextOptions contain Windows-specific options and credentials. 15.1.237. .spec.template.spec.securityContext.seLinuxOptions Description SELinuxOptions are the labels to be applied to the container Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 15.1.238. .spec.template.spec.securityContext.seccompProfile Description SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set. 
Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to <kubelet-root-dir>/seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). 15.1.239. .spec.template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 15.1.240. .spec.template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 15.1.241. .spec.template.spec.securityContext.windowsOptions Description WindowsSecurityContextOptions contain Windows-specific options and credentials. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 15.1.242. .spec.template.spec.tolerations Description If specified, the pod's tolerations. Type array 15.1.243. .spec.template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. 
- "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 15.1.244. .spec.template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 15.1.245. .spec.template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector LabelSelector LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. 
| zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. Possible enum values: - "Honor" means use this scheduling directive when calculating pod topology spread skew. - "Ignore" means ignore this scheduling directive when calculating pod topology spread skew. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. 
Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. Possible enum values: - "DoNotSchedule" instructs the scheduler not to schedule the pod when constraints are not satisfied. - "ScheduleAnyway" instructs the scheduler to schedule the pod even if constraints are not satisfied. 15.1.246. .spec.template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 15.1.247. .spec.template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. configMap object Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. csi object Represents a source location of a volume to mount, managed by an external CSI driver downwardAPI object DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. emptyDir object Represents an empty directory for a pod. 
Empty directory volumes support ownership management and SELinux relabeling. ephemeral object Represents an ephemeral volume that is handled by a normal storage driver. fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. gitRepo object Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. persistentVolumeClaim object PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. projected object Represents a projected volume source quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOVolumeSource represents a persistent ScaleIO volume secret object Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. storageos object Represents a StorageOS persistent volume resource. vsphereVolume object Represents a vSphere volume resource. 15.1.248. 
.spec.template.spec.volumes[].awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 15.1.249. .spec.template.spec.volumes[].azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 15.1.250. .spec.template.spec.volumes[].azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 15.1.251. .spec.template.spec.volumes[].cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 15.1.252. .spec.template.spec.volumes[].cephfs.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.253. .spec.template.spec.volumes[].cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 15.1.254. .spec.template.spec.volumes[].cinder.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.255. .spec.template.spec.volumes[].configMap Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. 
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 15.1.256. .spec.template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.257. .spec.template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 15.1.258. .spec.template.spec.volumes[].csi Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 15.1.259. .spec.template.spec.volumes[].csi.nodePublishSecretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.260. .spec.template.spec.volumes[].downwardAPI Description DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling. 
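For illustration only, a minimal sketch of a downward API volume that exposes pod labels and a container CPU limit as files; the volume name podinfo, the container name app, and the image are hypothetical:

spec:
  template:
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 420                  # decimal equivalent of octal 0644
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
          - path: cpu_limit
            resourceFieldRef:
              containerName: app
              resource: limits.cpu
              divisor: 1m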
Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 15.1.261. .spec.template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 15.1.262. .spec.template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 15.1.263. .spec.template.spec.volumes[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.264. .spec.template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.265. .spec.template.spec.volumes[].emptyDir Description Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling. Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit Quantity sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod.
The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 15.1.266. .spec.template.spec.volumes[].ephemeral Description Represents an ephemeral volume that is handled by a normal storage driver. Type object Property Type Description volumeClaimTemplate object PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. 15.1.267. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate Description PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource. Type object Required spec Property Type Description metadata ObjectMeta May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes 15.1.268. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 15.1.269. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 15.1.270. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 15.1.271. 
.spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 15.1.272. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 15.1.273. .spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 15.1.274. .spec.template.spec.volumes[].fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 15.1.275. .spec.template.spec.volumes[].flexVolume Description FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 15.1.276. .spec.template.spec.volumes[].flexVolume.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.277. .spec.template.spec.volumes[].flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 15.1.278. .spec.template.spec.volumes[].gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 15.1.279. .spec.template.spec.volumes[].gitRepo Description Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 15.1.280. .spec.template.spec.volumes[].glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. 
Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 15.1.281. .spec.template.spec.volumes[].hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 15.1.282. .spec.template.spec.volumes[].iscsi Description Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. 
targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 15.1.283. .spec.template.spec.volumes[].iscsi.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.284. .spec.template.spec.volumes[].nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 15.1.285. .spec.template.spec.volumes[].persistentVolumeClaim Description PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system). Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 15.1.286. .spec.template.spec.volumes[].photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 15.1.287. .spec.template.spec.volumes[].portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 15.1.288. .spec.template.spec.volumes[].projected Description Represents a projected volume source Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. 
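The nfs (15.1.284) and persistentVolumeClaim (15.1.285) sources described above translate into very small volume entries; in the hedged sketch below the NFS server, export path, and claim name are placeholders that must already exist.

oc apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-storage-rc
spec:
  replicas: 1
  selector:
    app: example-storage
  template:
    metadata:
      labels:
        app: example-storage
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0
        volumeMounts:
        - name: shared
          mountPath: /mnt/shared
        - name: data
          mountPath: /var/lib/data
      volumes:
      - name: shared
        nfs:
          server: nfs.example.com      # hostname or IP address of the NFS server
          path: /exports/shared        # path exported by the NFS server
          readOnly: true
      - name: data
        persistentVolumeClaim:
          claimName: example-pvc       # PVC in the same namespace as the pod
EOF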
sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 15.1.289. .spec.template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 15.1.290. .spec.template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. downwardAPI object Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. secret object Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. serviceAccountToken object ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). 15.1.291. .spec.template.spec.volumes[].projected.sources[].configMap Description Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode. Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 15.1.292. .spec.template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.293. .spec.template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. 
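A projected volume (15.1.288) combines the sources listed in 15.1.290 under one mount point. The following is a minimal sketch with a single configMap source and an items mapping; the ConfigMap name and key are hypothetical.

oc apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-projected-rc
spec:
  replicas: 1
  selector:
    app: example-projected
  template:
    metadata:
      labels:
        app: example-projected
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0
        volumeMounts:
        - name: config
          mountPath: /etc/example
          readOnly: true
      volumes:
      - name: config
        projected:
          defaultMode: 0440            # octal mode bits applied to created files
          sources:
          - configMap:
              name: example-config     # ConfigMap in the same namespace
              optional: false
              items:
              - key: app.properties    # key in the ConfigMap Data field
                path: app.properties   # relative path of the projected file
EOF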
Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 15.1.294. .spec.template.spec.volumes[].projected.sources[].downwardAPI Description Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode. Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 15.1.295. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 15.1.296. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object ObjectFieldSelector selects an APIVersioned field of an object. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object ResourceFieldSelector represents container resources (cpu, memory) and their output format 15.1.297. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description ObjectFieldSelector selects an APIVersioned field of an object. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 15.1.298. .spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description ResourceFieldSelector represents container resources (cpu, memory) and their output format Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor Quantity Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 15.1.299. .spec.template.spec.volumes[].projected.sources[].secret Description Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode. 
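The downwardAPI projection (15.1.294) writes pod fields into files, with fieldRef and resourceFieldRef corresponding to the selectors in 15.1.297 and 15.1.298. A hedged sketch follows; the file names and resource values are arbitrary.

oc apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-downward-rc
spec:
  replicas: 1
  selector:
    app: example-downward
  template:
    metadata:
      labels:
        app: example-downward
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0
        resources:
          limits:
            cpu: 500m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: true
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels                    # relative path of the created file
                fieldRef:
                  fieldPath: metadata.labels    # pod field selected by the downward API
              - path: cpu_limit
                resourceFieldRef:
                  containerName: app            # required for volumes
                  resource: limits.cpu
                  divisor: 1m                   # output format for the exposed resource
EOF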
Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specify whether the Secret or its key must be defined 15.1.300. .spec.template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.301. .spec.template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 15.1.302. .spec.template.spec.volumes[].projected.sources[].serviceAccountToken Description ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise). Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 15.1.303. .spec.template.spec.volumes[].quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. 
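The serviceAccountToken projection (15.1.302) can be sketched as follows; the audience value is hypothetical, and the kubelet rotates the projected token as described above.

oc apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-token-rc
spec:
  replicas: 1
  selector:
    app: example-token
  template:
    metadata:
      labels:
        app: example-token
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0
        volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
      volumes:
      - name: api-token
        projected:
          sources:
          - serviceAccountToken:
              path: example-token                  # file name under the mount point
              audience: https://api.example.com    # intended audience; defaults to the apiserver identifier
              expirationSeconds: 3600              # defaults to 1 hour, minimum 10 minutes
EOF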
Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 15.1.304. .spec.template.spec.volumes[].rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 15.1.305. .spec.template.spec.volumes[].rbd.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.306. .spec.template.spec.volumes[].scaleIO Description ScaleIOVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
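The rbd fields in 15.1.304 reference an existing RADOS image and a Secret that holds the Ceph key; in the sketch below the monitor addresses, pool, image, and Secret name are placeholders for values from an existing Ceph cluster.

oc apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rbd-rc
spec:
  replicas: 1
  selector:
    app: example-rbd
  template:
    metadata:
      labels:
        app: example-rbd
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0
        volumeMounts:
        - name: ceph-data
          mountPath: /var/lib/ceph-data
      volumes:
      - name: ceph-data
        rbd:
          monitors:                     # collection of Ceph monitors
          - 192.168.10.11:6789
          - 192.168.10.12:6789
          pool: rbd                     # default pool name
          image: example-image          # rados image name
          user: admin                   # default rados user
          fsType: ext4
          readOnly: false
          secretRef:
            name: ceph-secret           # Secret in the same namespace holding the Ceph key
EOF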
sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 15.1.307. .spec.template.spec.volumes[].scaleIO.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.308. .spec.template.spec.volumes[].secret Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 15.1.309. .spec.template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 15.1.310. .spec.template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. 
If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 15.1.311. .spec.template.spec.volumes[].storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 15.1.312. .spec.template.spec.volumes[].storageos.secretRef Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 15.1.313. .spec.template.spec.volumes[].vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 15.1.314. .status Description ReplicationControllerStatus represents the current status of a replication controller. Type object Required replicas Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this replication controller. conditions array Represents the latest available observations of a replication controller's current state. conditions[] object ReplicationControllerCondition describes the state of a replication controller at a certain point. fullyLabeledReplicas integer The number of pods that have labels matching the labels of the pod template of the replication controller. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed replication controller. readyReplicas integer The number of ready replicas for this replication controller. replicas integer Replicas is the most recently observed number of replicas. 
More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller 15.1.315. .status.conditions Description Represents the latest available observations of a replication controller's current state. Type array 15.1.316. .status.conditions[] Description ReplicationControllerCondition describes the state of a replication controller at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of replication controller condition. 15.2. API endpoints The following API endpoints are available: /api/v1/replicationcontrollers GET : list or watch objects of kind ReplicationController /api/v1/watch/replicationcontrollers GET : watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/replicationcontrollers DELETE : delete collection of ReplicationController GET : list or watch objects of kind ReplicationController POST : create a ReplicationController /api/v1/watch/namespaces/{namespace}/replicationcontrollers GET : watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/replicationcontrollers/{name} DELETE : delete a ReplicationController GET : read the specified ReplicationController PATCH : partially update the specified ReplicationController PUT : replace the specified ReplicationController /api/v1/watch/namespaces/{namespace}/replicationcontrollers/{name} GET : watch changes to an object of kind ReplicationController. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status GET : read status of the specified ReplicationController PATCH : partially update status of the specified ReplicationController PUT : replace status of the specified ReplicationController 15.2.1. /api/v1/replicationcontrollers Table 15.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ReplicationController Table 15.2. HTTP responses HTTP code Response body 200 - OK ReplicationControllerList schema 401 - Unauthorized Empty 15.2.2. /api/v1/watch/replicationcontrollers Table 15.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results.
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. 
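For the list and watch endpoints in 15.2.1 and 15.2.2, the query parameters in Table 15.1 and Table 15.3 are passed as URL parameters. A hedged example using the raw API through the oc client follows; the label and limit values are arbitrary, and access to the cluster-scoped path is assumed.

# List at most 10 ReplicationControllers across all namespaces, filtered by label
oc get --raw "/api/v1/replicationcontrollers?labelSelector=app%3Dexample&limit=10"

# Stream changes to the same collection (preferred over the deprecated /watch path)
oc get --raw "/api/v1/replicationcontrollers?watch=true&labelSelector=app%3Dexample"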
Table 15.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 15.2.3. /api/v1/namespaces/{namespace}/replicationcontrollers Table 15.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 15.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ReplicationController Table 15.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 15.8. 
Body parameters Parameter Type Description body DeleteOptions schema Table 15.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ReplicationController Table 15.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.11. HTTP responses HTTP code Response body 200 - OK ReplicationControllerList schema 401 - Unauthorized Empty HTTP method POST Description create a ReplicationController Table 15.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.13. Body parameters Parameter Type Description body ReplicationController schema Table 15.14. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 202 - Accepted ReplicationController schema 401 - Unauthorized Empty 15.2.4. /api/v1/watch/namespaces/{namespace}/replicationcontrollers Table 15.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 15.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicationController. deprecated: use the 'watch' parameter with a list operation instead. Table 15.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 15.2.5. /api/v1/namespaces/{namespace}/replicationcontrollers/{name} Table 15.18. Global path parameters Parameter Type Description name string name of the ReplicationController namespace string object name and auth scope, such as for teams and projects Table 15.19.
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ReplicationController Table 15.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.21. Body parameters Parameter Type Description body DeleteOptions schema Table 15.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ReplicationController Table 15.23. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ReplicationController Table 15.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.25. Body parameters Parameter Type Description body Patch schema Table 15.26. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ReplicationController Table 15.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.28. Body parameters Parameter Type Description body ReplicationController schema Table 15.29. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty 15.2.6. /api/v1/watch/namespaces/{namespace}/replicationcontrollers/{name} Table 15.30. Global path parameters Parameter Type Description name string name of the ReplicationController namespace string object name and auth scope, such as for teams and projects Table 15.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - If resourceVersionMatch is set to any other value or left unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ReplicationController. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 15.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 15.2.7. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/status Table 15.33. Global path parameters Parameter Type Description name string name of the ReplicationController namespace string object name and auth scope, such as for teams and projects Table 15.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ReplicationController Table 15.35. HTTP responses HTTP code Response body 200 - OK ReplicationController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ReplicationController Table 15.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters.
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.37. Body parameters Parameter Type Description body Patch schema Table 15.38. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ReplicationController Table 15.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.40. Body parameters Parameter Type Description body ReplicationController schema Table 15.41. HTTP responses HTTP code Reponse body 200 - OK ReplicationController schema 201 - Created ReplicationController schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/replicationcontroller-v1 |
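As a practical aside (not part of the generated API reference above), the endpoints in this chapter can be exercised directly with curl. The API server address, the namespace default, the controller name example-rc, and the TOKEN variable below are illustrative assumptions, not values taken from this reference. Read one ReplicationController: curl -k -H "Authorization: Bearer $TOKEN" https://api.example.com:6443/api/v1/namespaces/default/replicationcontrollers/example-rc Watch it using the supported form, a list request with the watch and fieldSelector parameters: curl -k -N -H "Authorization: Bearer $TOKEN" "https://api.example.com:6443/api/v1/namespaces/default/replicationcontrollers?watch=true&fieldSelector=metadata.name%3Dexample-rc" Partially update it with a strategic merge patch: curl -k -X PATCH -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/strategic-merge-patch+json" -d '{"spec":{"replicas":3}}' https://api.example.com:6443/api/v1/namespaces/default/replicationcontrollers/example-rc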
Chapter 9. Adding RHEL compute machines to an OpenShift Container Platform cluster | Chapter 9. Adding RHEL compute machines to an OpenShift Container Platform cluster In OpenShift Container Platform, you can add Red Hat Enterprise Linux (RHEL) compute machines to a user-provisioned infrastructure cluster or an installation-provisioned infrastructure cluster on the x86_64 architecture. You can use RHEL as the operating system only on compute machines. 9.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.18, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 9.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base operating system: Use RHEL 8.8 or a later version with the minimal installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=TRUE attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. For clusters installed on Microsoft Azure: Ensure the system includes the hardware requirement of a Standard_D8s_v3 virtual machine. Enable Accelerated Networking. Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. Additional resources Deleting nodes Accelerated Networking for Microsoft Azure VMs 9.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 9.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required because various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You will need a valid AMI ID so that the correct RHEL version needed for the compute machines is selected. 9.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. 
Procedure Use this command to list RHEL 8.8 Amazon Machine Images (AMI): USD aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-8.8*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498 . Important This account ID is required to display AMI IDs for images that are provided by Red Hat. 2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' . In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filter command option sets which version of RHEL is shown. In this example, since the filter is set by "Name=name,Values=RHEL-8.8*" , then RHEL 8.8 AMIs are shown. 4 The --region command option sets where the region where an AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.8 or a later version of RHEL 8. Example output ------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+ Additional resources You may also manually import RHEL images to AWS . 9.4. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.18 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent. 
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.18: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.18-for-rhel-8-x86_64-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients provides the oc CLI, and the jq package improves the display of JSON output on your command line. 9.5. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.18: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.18-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 9.6. Attaching the role permissions to RHEL instance in AWS Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role . Attach the IAM role to the desired worker node. Additional resources See Required AWS permissions for IAM roles . 9.7. Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster. 
The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<cluster-id>=shared . Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 9.8. Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.18 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 9.9. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 9.10. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. 
Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify or define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 9.10.1. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: USD oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: USD oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: USD oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: USD oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: USD oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines. | [
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.8*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/adding-rhel-compute |
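Section 9.7 above names the required kubernetes.io/cluster tag for a RHEL worker instance but does not show a command for applying it. A hedged sketch using the AWS CLI follows; the instance ID, the cluster ID (mycluster-abc12), and the region are placeholders, not values taken from this chapter: aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=kubernetes.io/cluster/mycluster-abc12,Value=owned --region us-east-1 Use Value=shared instead of Value=owned if the instance should continue to exist after the cluster is destroyed, as described in that section.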
7.47. emacs | 7.47. emacs 7.47.1. RHBA-2015:0238 - emacs bug fix update Updated emacs packages that fix two bugs are now available for Red Hat Enterprise Linux 6. GNU Emacs is a powerful, customizable, self-documenting text editor. It provides special code editing features, a scripting language (elisp), and the capability to read email and news. Bug Fixes BZ# 852516 Previously, the data_space_start value was set inaccurately. As a consequence, the emacs text editor returned the following memory warning message: Emergency (alloc): Warning: past 95% of memory limit To fix this bug, data_space_start has been set correctly, and emacs no longer returns warning messages. BZ# 986989 When using the glyph face encoding, a text face was not removed from the garbage collector. As a consequence, the emacs text editor terminated unexpectedly with a segmentation fault when attempting to remove the face. With this update, the text face is also removed from the garbage collector, and emacs thus no longer crashes in the described scenario. Users of emacs are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-emacs |
5.125. java-1.7.0-openjdk | 5.125. java-1.7.0-openjdk 5.125.1. RHSA-2013:0247 - Important: java-1.7.0-openjdk security update Updated java-1.7.0-openjdk packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. These packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Software Development Kit. Security Fixes CVE-2013-0442 , CVE-2013-0445 , CVE-2013-0441 , CVE-2013-1475 , CVE-2013-1476 , CVE-2013-0429 , CVE-2013-0450 , CVE-2013-0425 , CVE-2013-0426 , CVE-2013-0428 , CVE-2013-0444 Multiple improper permission check issues were discovered in the AWT, CORBA, JMX, Libraries, and Beans components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. CVE-2013-1478 , CVE-2013-1480 Multiple flaws were found in the way image parsers in the 2D and AWT components handled image raster parameters. A specially-crafted image could cause Java Virtual Machine memory corruption and, possibly, lead to arbitrary code execution with the virtual machine privileges. CVE-2013-0432 A flaw was found in the AWT component's clipboard handling code. An untrusted Java application or applet could use this flaw to access clipboard data, bypassing Java sandbox restrictions. CVE-2013-0435 The default Java security properties configuration did not restrict access to certain com.sun.xml.internal packages. An untrusted Java application or applet could use this flaw to access information, bypassing certain Java sandbox restrictions. This update lists the whole package as restricted. CVE-2013-0431 , CVE-2013-0427 , CVE-2013-0433 , CVE-2013-0434 Multiple improper permission check issues were discovered in the JMX, Libraries, Networking, and JAXP components. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions. CVE-2013-0424 It was discovered that the RMI component's CGIHandler class used user inputs in error messages without any sanitization. An attacker could use this flaw to perform a cross-site scripting (XSS) attack. CVE-2013-0440 It was discovered that the SSL/TLS implementation in the JSSE component did not properly enforce handshake message ordering, allowing an unlimited number of handshake restarts. A remote attacker could use this flaw to make an SSL/TLS server using JSSE consume an excessive amount of CPU by continuously restarting the handshake. CVE-2013-0443 It was discovered that the JSSE component did not properly validate Diffie-Hellman public keys. An SSL/TLS client could possibly use this flaw to perform a small subgroup attack. This erratum also upgrades the OpenJDK package to IcedTea7 2.3.5. All users of java-1.7.0-openjdk are advised to upgrade to these updated packages, which resolve these issues. All running instances of OpenJDK Java must be restarted for the update to take effect. 5.125.2. RHSA-2013:0275 - Important: java-1.7.0-openjdk security update Updated java-1.7.0-openjdk packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. 
Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. These packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Software Development Kit. Security Fixes CVE-2013-1486 , CVE-2013-1484 Multiple improper permission check issues were discovered in the JMX and Libraries components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. CVE-2013-1485 An improper permission check issue was discovered in the Libraries component in OpenJDK. An untrusted Java application or applet could use this flaw to bypass certain Java sandbox restrictions. CVE-2013-0169 It was discovered that OpenJDK leaked timing information when decrypting TLS/SSL protocol encrypted records when CBC-mode cipher suites were used. A remote attacker could possibly use this flaw to retrieve plain text from the encrypted packets by using a TLS/SSL server as a padding oracle. This erratum also upgrades the OpenJDK package to IcedTea7 2.3.7. Refer to the NEWS file for further information: http://icedtea.classpath.org/hg/release/icedtea7-2.3/file/icedtea-2.3.7/NEWS All users of java-1.7.0-openjdk are advised to upgrade to these updated packages, which resolve these issues. All running instances of OpenJDK Java must be restarted for the update to take effect. 5.125.3. RHSA-2012:1386 - Important: java-1.7.0-openjdk security update Updated java-1.7.0-openjdk packages that fix several security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. [Update 13 November 2012] The file list of this advisory was updated to move java-1.7.0-openjdk-devel from the optional repositories to the base repositories. Additionally, java-1.7.0-openjdk for the HPC Node variant was also moved (this package was already in the base repositories for other product variants). No changes have been made to the packages themselves. These packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Software Development Kit. Security Fixes CVE-2012-5086 , CVE-2012-5087 , CVE-2012-5088 , CVE-2012-5084 , CVE-2012-5089 Multiple improper permission check issues were discovered in the Beans, Libraries, Swing, and JMX components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. CVE-2012-5076 , CVE-2012-5074 The default Java security properties configuration did not restrict access to certain com.sun.org.glassfish packages. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. This update lists those packages as restricted. CVE-2012-5068 , CVE-2012-5071 , CVE-2012-5069 , CVE-2012-5073 , CVE-2012-5072 Multiple improper permission check issues were discovered in the Scripting, JMX, Concurrency, Libraries, and Security components in OpenJDK. An untrusted Java application or applet could use these flaws to bypass certain Java sandbox restrictions. CVE-2012-5079 It was discovered that java.util.ServiceLoader could create an instance of an incompatible class while performing provider lookup. 
An untrusted Java application or applet could use this flaw to bypass certain Java sandbox restrictions. CVE-2012-5081 It was discovered that the Java Secure Socket Extension (JSSE) SSL/TLS implementation did not properly handle handshake records containing an overly large data length value. An unauthenticated, remote attacker could possibly use this flaw to cause an SSL/TLS server to terminate with an exception. CVE-2012-5070 , CVE-2012-5075 It was discovered that the JMX component in OpenJDK could perform certain actions in an insecure manner. An untrusted Java application or applet could possibly use these flaws to disclose sensitive information. CVE-2012-4416 A bug in the Java HotSpot Virtual Machine optimization code could cause it to not perform array initialization in certain cases. An untrusted Java application or applet could use this flaw to disclose portions of the virtual machine's memory. CVE-2012-5077 It was discovered that the SecureRandom class did not properly protect against the creation of multiple seeders. An untrusted Java application or applet could possibly use this flaw to disclose sensitive information. CVE-2012-3216 It was discovered that the java.io.FilePermission class exposed the hash code of the canonicalized path name. An untrusted Java application or applet could possibly use this flaw to determine certain system paths, such as the current working directory. CVE-2012-5085 This update disables Gopher protocol support in the java.net package by default. Gopher support can be enabled by setting the newly introduced property, "jdk.net.registerGopherProtocol", to true. This erratum also upgrades the OpenJDK package to IcedTea7 2.3.3. Refer to the NEWS file for further information: http://icedtea.classpath.org/hg/release/icedtea7-2.3/file/icedtea-2.3.3/NEWS All users of java-1.7.0-openjdk are advised to upgrade to these updated packages, which resolve these issues. All running instances of OpenJDK Java must be restarted for the update to take effect. 5.125.4. RHBA-2012:1570 - java-1.7.0-openjdk bug fix update Updated java-1.7.0-openjdk packages that fix one bug now available for Red Hat Enterprise Linux 6. The java-1.7.0-openjdk packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Java Software Development Kit. Bug Fix BZ# 880352 Previously, the Krb5LoginModule config class did not return a proper KDC list when krb5.conf file contained the "dns_lookup_kdc = true" property setting. With this update, a correct KDC list is returned under these circumstances. All users of java-1.7.0-openjdk are advised to upgrade to these updated packages, which fix this bug. 5.125.5. RHSA-2013:0165 - Important: java-1.7.0-openjdk security update Updated java-1.7.0-openjdk packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. These packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Software Development Kit. Security Fix CVE-2012-3174 , CVE-2013-0422 Two improper permission check issues were discovered in the reflection API in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. This erratum also upgrades the OpenJDK package to IcedTea7 2.3.4. 
Refer to the NEWS file, linked to in the References, for further information. All users of java-1.7.0-openjdk are advised to upgrade to these updated packages, which resolve these issues. All running instances of OpenJDK Java must be restarted for the update to take effect. 5.125.6. RHSA-2012:1223 - Important: java-1.7.0-openjdk security update Updated java-1.7.0-openjdk packages that fix several security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. These packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Software Development Kit. Security Fixes CVE-2012-4681 , CVE-2012-1682 , CVE-2012-3136 Multiple improper permission check issues were discovered in the Beans component in OpenJDK. An untrusted Java application or applet could use these flaws to bypass Java sandbox restrictions. CVE-2012-0547 A hardening fix was applied to the AWT component in OpenJDK, removing functionality from the restricted SunToolkit class that was used in combination with other flaws to bypass Java sandbox restrictions. All users of java-1.7.0-openjdk are advised to upgrade to these updated packages, which resolve these issues. All running instances of OpenJDK Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/java-1.7.0-openjdk |
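Both advisories close by noting that running OpenJDK instances must be restarted for the update to take effect. As an illustrative check on a RHEL 6 host (these commands are assumptions, not taken from the advisories), you can confirm the installed build and look for processes still running the old one: rpm -q java-1.7.0-openjdk java -version ps -ef | grep java Any long-running Java services found this way should be restarted so that they pick up the updated runtime.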
13.8. Configuring IPoIB | 13.8. Configuring IPoIB 13.8.1. Understanding the role of IPoIB As mentioned in Section 1.1, "Comparing IP to non-IP Networks" , most networks are IP networks. InfiniBand is not. The role of IPoIB is to provide an IP network emulation layer on top of InfiniBand RDMA networks. This allows existing applications to run over InfiniBand networks unmodified. However, the performance of those applications is considerably lower than if the application were written to use RDMA communication natively. Since most InfiniBand networks have some set of applications that really must get all of the performance they can out of the network, and then some other applications for which a degraded rate of performance is acceptable if it means that the application does not need to be modified to use RDMA communications, IPoIB is there to allow those less critical applications to run on the network as they are. Because both iWARP and RoCE/IBoE networks are actually IP networks with RDMA layered on top of their IP link layer, they have no need of IPoIB. As a result, the kernel will refuse to create any IPoIB devices on top of iWARP or RoCE/IBoE RDMA devices. 13.8.2. Understanding IPoIB communication modes IPoIB devices can be configured to run in either datagram or connected mode. The difference is in what type of queue pair the IPoIB layer attempts to open with the machine at the other end of the communication. For datagram mode, an unreliable, disconnected queue pair is opened. For connected mode, a reliable, connected queue pair is opened. When using datagram mode, the unreliable, disconnected queue pair type does not allow any packets larger than the InfiniBand link-layer's MTU. The IPoIB layer adds a 4 byte IPoIB header on top of the IP packet being transmitted. As a result, the IPoIB MTU must be 4 bytes less than the InfiniBand link-layer MTU. As 2048 is a common InfiniBand link-layer MTU, the common IPoIB device MTU in datagram mode is 2044. When using connected mode, the reliable, connected queue pair type allows messages that are larger than the InfiniBand link-layer MTU and the host adapter handles packet segmentation and reassembly at each end. As a result, there is no size limit imposed on the size of IPoIB messages that can be sent by the InfiniBand adapters in connected mode. However, there is still the limitation that an IP packet only has a 16 bit size field, and is therefore limited to 65535 as the maximum byte count. The maximum allowed MTU is actually smaller than that because we have to account for various TCP/IP headers that must also fit in that size. As a result, the IPoIB MTU in connected mode is capped at 65520 in order to make sure there is sufficient room for all needed TCP headers. The connected mode option generally has higher performance, but it also consumes more kernel memory. Because most systems care more about performance than memory consumption, connected mode is the most commonly used mode. However, if a system is configured for connected mode, it must still send multicast traffic in datagram mode (the InfiniBand switches and fabric cannot pass multicast traffic in connected mode) and it will also fall back to datagram mode when communicating with any hosts not configured for connected mode. 
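As an illustrative aside (not part of the original text), the mode a running IPoIB interface is actually using, and the MTU that results from it, can be inspected through sysfs and the ip utility; the interface name ib0 here is an assumption: cat /sys/class/net/ib0/mode ip link show ib0 The first command prints either datagram or connected, and the second shows the MTU currently in effect on the interface.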
Administrators should be aware that if they intend to run programs that send multicast data, and those programs try to send multicast data up to the maximum MTU on the interface, then it is necessary to configure the interface for datagram operation or to find some way to configure the multicast application to cap its packet send size so that it fits in datagram-sized packets. 13.8.3. Understanding IPoIB hardware addresses IPoIB devices have a 20 byte hardware address. The deprecated utility ifconfig is unable to read all 20 bytes and should never be used to find the correct hardware address for an IPoIB device. The ip utilities from the iproute package work properly. The first 4 bytes of the IPoIB hardware address are flags and the queue pair number. The next 8 bytes are the subnet prefix. When the IPoIB device is first created, it will have the default subnet prefix of 0xfe:80:00:00:00:00:00:00 . The device will use the default subnet prefix (0xfe80000000000000) until it makes contact with the subnet manager, at which point it will reset the subnet prefix to match what the subnet manager has configured it to be. The final 8 bytes are the GUID address of the InfiniBand port that the IPoIB device is attached to. Because both the first 4 bytes and the next 8 bytes can change from time to time, they are not used or matched against when specifying the hardware address for an IPoIB interface. Section 13.5.2, "Usage of 70-persistent-ipoib.rules" explains how to derive the address by leaving the first 12 bytes out of the ATTR{address} field in the udev rules file so that device matching will happen reliably. When configuring IPoIB interfaces, the HWADDR field of the configuration file can contain all 20 bytes, but only the last 8 bytes are actually used to match against and find the hardware specified by a configuration file. However, if the TYPE=InfiniBand entry is not spelled correctly in the device configuration file, then ifup-ib will not be the script used to bring up the IPoIB interface, and an error about the system being unable to find the hardware specified by the configuration will be issued. For IPoIB interfaces, the TYPE= field of the configuration file must be either InfiniBand or infiniband (the entry is case sensitive, but the scripts accept these two specific spellings). 13.8.4. Understanding InfiniBand P_Key subnets An InfiniBand fabric can be logically segmented into virtual subnets by the use of different P_Key subnets. This is highly analogous to using VLANs on Ethernet interfaces. All switches and hosts must be members of the default P_Key subnet, but administrators can create additional subnets and limit members of those subnets to subsets of the hosts or switches in the fabric. A P_Key subnet must be defined by the subnet manager before a host can use it. See Section 13.6.4, "Creating a P_Key definition" for information on how to define a P_Key subnet using the opensm subnet manager. For IPoIB interfaces, once a P_Key subnet has been created, additional IPoIB configuration files can be created specifically for those P_Key subnets. Just like VLAN interfaces on Ethernet devices, each IPoIB interface will behave as though it were on a completely different fabric from other IPoIB interfaces that share the same link but have different P_Key values. There are special requirements for the names of IPoIB P_Key interfaces.
All IPoIB P_Keys range from 0x0000 to 0x7fff , and the high bit, 0x8000 , denotes that membership in a P_Key is full membership instead of partial membership. The Linux kernel's IPoIB driver only supports full membership in P_Key subnets, so for any subnet that Linux can connect to, the high bit of the P_Key number will always be set. That means that if a Linux computer joins P_Key 0x0002 , its actual P_Key number once joined will be 0x8002 , denoting full membership in P_Key 0x0002 . For this reason, when creating a P_Key definition in an opensm partitions.conf file as depicted in Section 13.6.4, "Creating a P_Key definition" , it is required to specify a P_Key value without 0x8000 , but when defining the P_Key IPoIB interfaces on the Linux clients, add the 0x8000 value to the base P_Key value. 13.8.5. Configure InfiniBand Using the Text User Interface, nmtui The text user interface tool nmtui can be used to configure InfiniBand in a terminal window. Issue the following command to start the tool: The text user interface appears. Any invalid command prints a usage message. To navigate, use the arrow keys or press Tab to step forwards and press Shift + Tab to step back through the options. Press Enter to select an option. The Space bar toggles the status of a check box. From the starting menu, select Edit a connection . Select Add , the New Connection screen opens. Figure 13.1. The NetworkManager Text User Interface Add an InfiniBand Connection menu Select InfiniBand , the Edit connection screen opens. Follow the on-screen prompts to complete the configuration. Figure 13.2. The NetworkManager Text User Interface Configuring an InfiniBand Connection menu See Section 13.8.9.1, "Configuring the InfiniBand Tab" for definitions of the InfiniBand terms. See Section 3.2, "Configuring IP Networking with nmtui" for information on installing nmtui . 13.8.6. Configure IPoIB using the command-line tool, nmcli First determine if renaming the default IPoIB device(s) is required, and if so, follow the instructions in Section 13.5.2, "Usage of 70-persistent-ipoib.rules" to rename the devices using udev renaming rules. Users can force the IPoIB interfaces to be renamed without performing a reboot by removing the ib_ipoib kernel module and then reloading it as follows: Once the devices have the required names, use the nmcli tool to create the IPoIB interface(s). The following examples show two approaches: Example 13.3. Creating and modifying IPoIB in two separate commands. Alternatively, you can run nmcli c add and nmcli c modify in one command, as follows: Example 13.4. Creating and modifying IPoIB in one command. At this point, an IPoIB interface named mlx4_ib0 has been created and set to use connected mode, with the maximum connected mode MTU, and DHCP for IPv4 and IPv6 . If using IPoIB interfaces for cluster traffic and an Ethernet interface for out-of-cluster communications, it is likely that disabling default routes and any default name server on the IPoIB interfaces will be required. This can be done as follows: If a P_Key interface is required, create one using nmcli as follows: 13.8.7. Configure IPoIB Using the command line First determine if renaming the default IPoIB device(s) is required, and if so, follow the instructions in Section 13.5.2, "Usage of 70-persistent-ipoib.rules" to rename the devices using udev renaming rules.
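Because Section 13.5.2 is not reproduced here, the following is only a sketch of what such a renaming rule can look like. It matches on the stable last 8 bytes (the port GUID) of the 20 byte hardware address, as described in Section 13.8.3; the GUID shown is borrowed from the ifcfg examples below and must be replaced with the value reported by ip link show for your own port:
# /etc/udev/rules.d/70-persistent-ipoib.rules (sketch; verify the exact format against Section 13.5.2)
ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{type}=="32", ATTR{address}=="?*f4:52:14:03:00:7b:cb:a1", NAME="mlx4_ib0"
The "?*" prefix on ATTR{address} skips the volatile first 12 bytes so that only the port GUID has to match.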
Users can force the IPoIB interfaces to be renamed without performing a reboot by removing the ib_ipoib kernel module and then reloading it as follows: Once the devices have the required names, administrators can create ifcfg files with their preferred editor to control the devices. A typical IPoIB configuration file with static IPv4 addressing looks as follows: The DEVICE field must match the custom name created in any udev renaming rules. The NAME entry need not match the device name. If the GUI connection editor is started, the NAME field is what is used to present a name for this connection to the user. The TYPE field must be InfiniBand in order for InfiniBand options to be processed properly. CONNECTED_MODE is either yes or no , where yes will use connected mode and no will use datagram mode for communications (see Section 13.8.2, "Understanding IPoIB communication modes" ). For P_Key interfaces, this is a typical configuration file: For all P_Key interface files, the PHYSDEV directive is required and must be the name of the parent device. The PKEY directive must be set to yes , and PKEY_ID must be the number of the interface (either with or without the 0x8000 membership bit added in). The device name, however, must be the four-digit hexadecimal representation of the PKEY_ID combined with the 0x8000 membership bit using the logical OR operator as follows: NAME=${PHYSDEV}.$((0x8000 | $PKEY_ID)) By default, the PKEY_ID in the file is treated as a decimal number and converted to hexadecimal and then combined using the logical OR operator with 0x8000 to arrive at the proper name for the device, but users may specify the PKEY_ID in hexadecimal by prepending the standard 0x prefix to the number. 13.8.8. Testing an RDMA network after IPoIB is configured Once IPoIB is configured, it is possible to use IP addresses to specify RDMA devices. Due to the ubiquitous nature of using IP addresses and host names to specify machines, most RDMA applications use this as their preferred, or in some cases only, way of specifying remote machines or local devices to connect to. To test the functionality of the IPoIB layer, it is possible to use any standard IP network test tool and provide the IP address of the IPoIB devices to be tested. For example, the ping command between the IP addresses of the IPoIB devices should now work. There are two different RDMA performance testing packages included with Red Hat Enterprise Linux, qperf and perftest . Either of these may be used to further test the performance of an RDMA network. However, when using any of the applications that are part of the perftest package, or when using the qperf application, there is a special note on address resolution. Even though the remote host is specified using an IP address or host name of the IPoIB device, the test application is allowed to actually connect through a different RDMA interface. The reason is that the process of converting from the host name or IP address to an RDMA address allows any valid RDMA address pair between the two machines to be used. If there are multiple ways for the client to connect to the server, then the programs might choose to use a different path if there is a problem with the path specified. For example, if there are two ports on each machine connected to the same InfiniBand subnet, and an IP address for the second port on each machine is given, it is likely that the program will find that the first port on each machine is a valid connection method and use it instead.
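As an illustration of the explicit binding described in the next paragraph, a perftest bandwidth run can be pinned to a particular adapter and port on both ends. This is only a sketch; the device name mlx4_0, the port number 2, and the server address are assumptions that must be replaced with values from your own fabric (ibstat lists the local devices and ports):
server ~]$ ib_write_bw -d mlx4_0 -i 2
client ~]$ ib_write_bw -d mlx4_0 -i 2 172.31.0.254
Here -d selects the RDMA device and -i selects the port on that device, so the test cannot silently fall back to a different path.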
In this case, command-line options to any of the perftest programs can be used to tell them which card and port to bind to, as was done with ibping in Section 13.7, "Testing Early InfiniBand RDMA operation" , in order to ensure that testing occurs over the specific ports required to be tested. For qperf , the method of binding to ports is slightly different. The qperf program operates as a server on one machine, listening on all devices (including non-RDMA devices). The client may connect to qperf using any valid IP address or host name for the server. Qperf will first attempt to open a data connection and run the requested test(s) over the IP address or host name given on the client command line, but if there is any problem using that address, qperf will fall back to attempting to run the test on any valid path between the client and server. For this reason, to force qperf to test over a specific link, use the -loc_id and -rem_id options to the qperf client in order to force the test to run on a specific link. 13.8.9. Configure IPoIB Using a GUI To configure an InfiniBand connection using a graphical tool, use nm-connection-editor Procedure 13.4. Adding a New InfiniBand Connection Using nm-connection-editor Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select InfiniBand and click Create . The Editing InfiniBand connection 1 window appears. On the InfiniBand tab, select the transport mode from the drop-down list you want to use for the InfiniBand connection. Enter the InfiniBand MAC address. Review and confirm the settings and then click the Save button. Edit the InfiniBand-specific settings by referring to Section 13.8.9.1, "Configuring the InfiniBand Tab" . Procedure 13.5. Editing an Existing InfiniBand Connection Follow these steps to edit an existing InfiniBand connection. Enter nm-connection-editor in a terminal: Select the connection you want to edit and click the Edit button. Select the General tab. Configure the connection name, auto-connect behavior, and availability settings. Five settings in the Editing dialog are common to all connection types, see the General tab: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the menu of the Network window. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. All users may connect to this network - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Automatically connect to VPN when using this connection - Select this box if you want NetworkManager to auto-connect to a VPN connection when it is available. Select the VPN from the drop-down menu. Firewall Zone - Select the Firewall Zone from the drop-down menu. See the Red Hat Enterprise Linux 7 Security Guide for more information on Firewall Zones. Edit the InfiniBand-specific settings by referring to the Section 13.8.9.1, "Configuring the InfiniBand Tab" . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your InfiniBand connection, click the Save button to save your customized configuration. 
Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" or IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . 13.8.9.1. Configuring the InfiniBand Tab If you have already added a new InfiniBand connection (see Procedure 13.4, "Adding a New InfiniBand Connection Using nm-connection-editor" for instructions), you can edit the InfiniBand tab to set the parent interface and the InfiniBand ID. Transport mode Datagram or Connected mode can be selected from the drop-down list. Select the same mode the rest of your IPoIB network is using. Device MAC address The MAC address of the InfiniBand-capable device to be used for the InfiniBand network traffic. This hardware address field will be pre-filled if you have InfiniBand hardware installed. MTU Optionally sets a Maximum Transmission Unit (MTU) size to be used for packets to be sent over the InfiniBand connection. 13.8.10. Additional Resources Installed Documentation /usr/share/doc/initscripts-version/sysconfig.txt - Describes configuration files and their directives. Online Documentation https://www.kernel.org/doc/Documentation/infiniband/ipoib.txt A description of the IPoIB driver. Includes references to relevant RFCs. | [
"~]$ nmtui",
"~]$ rmmod ib_ipoib ~]$ modprobe ib_ipoib",
"~]$ nmcli con add type infiniband con-name mlx4_ib0 ifname mlx4_ib0 transport-mode connected mtu 65520 Connection 'mlx4_ib0' (8029a0d7-8b05-49ff-a826-2a6d722025cc) successfully added. ~]$ nmcli con edit mlx4_ib0 ===| nmcli interactive connection editor |=== Editing existing 'infiniband' connection: 'mlx4_ib0' Type 'help' or '?' for available commands. Type 'describe [<setting>.<prop>]' for detailed property description. You may edit the following settings: connection, infiniband, ipv4, ipv6 nmcli> set infiniband.mac-address 80:00:02:00:fe:80:00:00:00:00:00:00:f4:52:14:03:00:7b:cb:a3 nmcli> save Connection 'mlx4_ib0' (8029a0d7-8b05-49ff-a826-2a6d722025cc) successfully updated. nmcli> quit",
"nmcli con add type infiniband con-name mlx4_ib0 ifname mlx4_ib0 transport-mode connected mtu 65520 infiniband.mac-address 80:00:02:00:fe:80:00:00:00:00:00:00:f4:52:14:03:00:7b:cb:a3",
"~]$ nmcli con edit mlx4_ib0 ===| nmcli interactive connection editor |=== Editing existing 'infiniband' connection: 'mlx4_ib0' Type 'help' or '?' for available commands. Type 'describe [<setting>.<prop>]' for detailed property description. You may edit the following settings: connection, infiniband, ipv4, ipv6 nmcli> set ipv4.ignore-auto-dns yes nmcli> set ipv4.ignore-auto-routes yes nmcli> set ipv4.never-default true nmcli> set ipv6.ignore-auto-dns yes nmcli> set ipv6.ignore-auto-routes yes nmcli> set ipv6.never-default true nmcli> save Connection 'mlx4_ib0' (8029a0d7-8b05-49ff-a826-2a6d722025cc) successfully updated. nmcli> quit",
"~]$ nmcli con add type infiniband con-name mlx4_ib0.8002 ifname mlx4_ib0.8002 parent mlx4_ib0 p-key 0x8002 Connection 'mlx4_ib0.8002' (4a9f5509-7bd9-4e89-87e9-77751a1c54b4) successfully added. ~]$ nmcli con modify mlx4_ib0.8002 infiniband.mtu 65520 infiniband.transport-mode connected ipv4.ignore-auto-dns yes ipv4.ignore-auto-routes yes ipv4.never-default true ipv6.ignore-auto-dns yes ipv6.ignore-auto-routes yes ipv6.never-default true",
"~]$ rmmod ib_ipoib ~]$ modprobe ib_ipoib",
"~]$ more ifcfg-mlx4_ib0 DEVICE=mlx4_ib0 TYPE=InfiniBand ONBOOT=yes HWADDR=80:00:00:4c:fe:80:00:00:00:00:00:00:f4:52:14:03:00:7b:cb:a1 BOOTPROTO=none IPADDR=172.31.0.254 PREFIX=24 NETWORK=172.31.0.0 BROADCAST=172.31.0.255 IPV4_FAILURE_FATAL=yes IPV6INIT=no MTU=65520 CONNECTED_MODE=yes NAME=mlx4_ib0",
"~]$ more ifcfg-mlx4_ib0.8002 DEVICE=mlx4_ib0.8002 PHYSDEV=mlx4_ib0 PKEY=yes PKEY_ID=2 TYPE=InfiniBand ONBOOT=yes HWADDR=80:00:00:4c:fe:80:00:00:00:00:00:00:f4:52:14:03:00:7b:cb:a1 BOOTPROTO=none IPADDR=172.31.2.254 PREFIX=24 NETWORK=172.31.2.0 BROADCAST=172.31.2.255 IPV4_FAILURE_FATAL=yes IPV6INIT=no MTU=65520 CONNECTED_MODE=yes NAME=mlx4_ib0.8002",
"~]$ nm-connection-editor",
"~]$ nm-connection-editor"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ipoib |
Chapter 3. Installing and Using Collections | Chapter 3. Installing and Using Collections 3.1. Introduction to Ansible Collections Ansible Collections are the new way of distributing, maintaining, and consuming automation. By combining multiple types of Ansible content such as playbooks, roles, modules, and plugins, you can benefit from improvements in flexibility and scalability. The Ansible Collections are an option to the traditional RHEL System Roles format. Using the RHEL System Roles in the Ansible Collection format is almost the same as using it in the traditional RHEL System Roles format. The difference is that Ansible Collections use the concept of a fully qualified collection name (FQCN), which consists of a namespace and the collection name . The namespace we use is redhat and the collection name is rhel_system_roles . So, while the traditional RHEL System Roles format for the kernel_settings role is presented as rhel-system-roles.kernel_settings (with dashes), using the Collection fully qualified collection name for the kernel_settings role would be presented as redhat.rhel_system_roles.kernel_settings (with underscores). The combination of a namespace and a collection name guarantees that the objects are unique. It also ensures that objects are shared across the Ansible Collections and namespaces without any conflicts. Additional resources To use the Red Hat Certified Collections by accessing the Automation Hub , you must have an Ansible Automation Platform (AAP subscription). 3.2. Collections structure Collections are a package format for Ansible content. The data structure is as below: docs/: local documentation for the collection, with examples, if the role provides the documentation galaxy.yml: source data for the MANIFEST.json that will be part of the Ansible Collection package playbooks/: playbooks are available here tasks/: this holds 'task list files' for include_tasks/import_tasks usage plugins/: all Ansible plugins and modules are available here, each in its subdirectory modules/: Ansible modules modules_utils/: common code for developing modules lookup/: search for a plugin filter/: Jinja2 filter plugin connection/: connection plugins required if not using the default roles/: directory for Ansible roles tests/: tests for the collection's content 3.3. Installing Collections by using the CLI Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. You can install Collections through Ansible Galaxy, through the browser, or by using the command line. Prerequisites Access and permissions to one or more managed nodes . Access and permissions to a control node , which is a system from which Red Hat Ansible Core configures other systems. On the control node: The ansible-core and rhel-system-roles packages are installed. An inventory file which lists the managed nodes. Procedure Install the collection via RPM package: # yum install rhel-system-roles After the installation is finished, the roles are available as redhat.rhel_system_roles.<role_name> . Additionally, you can find the documentation for each role at /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/roles/<role_name>/README.md . Verification steps To verify the installation, run the kernel_settings role with check mode on your localhost. You must also use the --become parameter because it is necessary for the Ansible package module. 
However, the parameter will not change your system: Run the following command: USD ansible-playbook -c local -i localhost, --check --become /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/tests/kernel_settings/tests_default.yml The last line of the command output should contain the value failed=0 . Note The comma after localhost is mandatory. You must add it even if there is only one host on the list. Without it, ansible-playbook would identify localhost as a file or a directory. Additional resources The ansible-playbook man page. The -i option of the ansible-playbook command 3.4. Installing Collections from Automation Hub If you are using the Automation Hub, you can install the RHEL System Roles Collection hosted on the Automation Hub. Prerequisites Access and permissions to one or more managed nodes . Access and permissions to a control node , which is a system from which Red Hat Ansible Core configures other systems. On the control node: The ansible-core and rhel-system-roles packages are installed. An inventory file which lists the managed nodes. Procedure Define Red Hat Automation Hub as the default source for content in the ansible.cfg configuration file. See Configuring Red Hat Automation Hub as the primary source for content . Install the redhat.rhel_system_roles collection from the Automation Hub: # ansible-galaxy collection install redhat.rhel_system_roles After the installation is finished, the roles are available as redhat.rhel_system_roles.<role_name> . Additionally, you can find the documentation for each role at /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/roles/<role_name>/README.md . Verification steps To verify the install, run the kernel_settings role with check mode on your localhost. You must also use the --become parameter because it is necessary for the Ansible package module. However, the parameter will not change your system: Run the following command: USD ansible-playbook -c local -i localhost, --check --become /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/tests/kernel_settings/tests_default.yml The last line of the command output should contain the value failed=0 . Note The comma after localhost is mandatory. You must add it even if there is only one host on the list. Without it, ansible-playbook would identify localhost as a file or a directory. Additional resources The ansible-playbook man page. The -i option of the ansible-playbook command 3.5. Applying a local logging System Role using Collections Following is an example using Collections to prepare and apply an Ansible playbook to configure a logging solution on a set of separate machines. Prerequisites A Collection format of rhel-system-roles is installed either from an rpm package or from the Automation Hub. Procedure Create a playbook that defines the required role: Create a new YAML file and open it in a text editor, for example: # vi logging-playbook.yml Insert the following content into the YAML file: Execute the playbook on a specific inventory: # ansible-playbook -i inventory-file logging-playbook.yml Where: inventory-file is the name of your inventory file. logging-playbook.yml is the playbook you use. Verification steps Test the syntax of the configuration files /etc/rsyslog.conf and /etc/rsyslog.d : # rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye. 
Verify that the system sends messages to the log: Send a test message: View the /var/log/messages log, for example: The hostname is the hostname of the client system. The log displays the user name of the user that entered the logger command, in this case, root . | [
"yum install rhel-system-roles",
"ansible-playbook -c local -i localhost, --check --become /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/tests/kernel_settings/tests_default.yml",
"ansible-galaxy collection install redhat.rhel_system_roles",
"ansible-playbook -c local -i localhost, --check --become /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/tests/kernel_settings/tests_default.yml",
"vi logging-playbook.yml",
"--- - name: Deploying basics input and implicit files output hosts: all roles: - redhat.rhel_system_roles.logging vars: logging_inputs: - name: system_input type: basics logging_outputs: - name: files_output type: files logging_flows: - name: flow1 inputs: [system_input] outputs: [files_output]",
"ansible-playbook -i inventory-file logging-playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye.",
"logger test",
"cat /var/log/messages Aug 5 13:48:31 hostname root[6778]: test"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/installing-and-using-collections_automating-system-administration-by-using-rhel-system-roles |
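To round out the Collections chapter above: the fully qualified collection name from Section 3.1 is used in a playbook exactly like a traditional role name. The following is only a sketch; the kernel_settings_sysctl variable and its values are assumptions taken from the role documentation at /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/roles/kernel_settings/README.md and should be checked there before use:
---
- name: Apply kernel settings through the collection FQCN
  hosts: all
  become: true
  roles:
    - redhat.rhel_system_roles.kernel_settings
  vars:
    kernel_settings_sysctl:
      - name: fs.file-max
        value: 400000
Run it with the same ansible-playbook -i inventory-file invocation shown in Section 3.5.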
Chapter 62. Kubernetes HPA | Chapter 62. Kubernetes HPA Since Camel 2.23 Both producer and consumer are supported The Kubernetes HPA component is one of the Kubernetes Components which provides a producer to execute kubernetes Horizontal Pod Autoscaler operations and a consumer to consume events related to Horizontal Pod Autoscaler objects. 62.1. Dependencies When using kubernetes-hpa with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 62.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 62.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 62.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 62.3. Component Options The Kubernetes HPA component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 62.4. Endpoint Options The Kubernetes HPA endpoint is configured using URI syntax: with the following path and query parameters: 62.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 62.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 62.5. Message Headers The Kubernetes HPA component supports 7 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesHPAName (producer) Constant: KUBERNETES_HPA_NAME The HPA name. String CamelKubernetesHPASpec (producer) Constant: KUBERNETES_HPA_SPEC The spec for a HPA. HorizontalPodAutoscalerSpec CamelKubernetesHPALabels (producer) Constant: KUBERNETES_HPA_LABELS The HPA labels. Map CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 62.6. Supported producer operation listHPA listHPAByLabels getHPA createHPA updateHPA deleteHPA 62.7. Kubernetes HPA Producer Examples listHPA: this operation lists the HPAs on a kubernetes cluster. from("direct:list"). toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPA"). to("mock:result"); This operation returns a List of HPAs from your cluster. listDeploymentsByLabels: this operation lists the HPAs by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_HPA_LABELS, labels); } }); toF("kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPAByLabels"). to("mock:result"); This operation returns a List of HPAs from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 62.8. 
Kubernetes HPA Consumer Example fromF("kubernetes-hpa://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); HorizontalPodAutoscaler hpa = exchange.getIn().getBody(HorizontalPodAutoscaler.class); log.info("Got event with hpa name: " + hpa.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the hpa test. 62.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. 
Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"kubernetes-hpa:masterUrl",
"from(\"direct:list\"). toF(\"kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPA\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_HPA_LABELS, labels); } }); toF(\"kubernetes-hpa:///?kubernetesClient=#kubernetesClient&operation=listHPAByLabels\"). to(\"mock:result\");",
"fromF(\"kubernetes-hpa://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); HorizontalPodAutoscaler hpa = exchange.getIn().getBody(HorizontalPodAutoscaler.class); log.info(\"Got event with hpa name: \" + hpa.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-hpa-component-starter |
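The Spring Boot options listed above for the Kubernetes and OpenShift components are plain configuration properties. As a minimal sketch only (the property names are taken from the option listing above, while the chosen values and the file itself are illustrative assumptions rather than part of this reference), they could be set in the application.properties file of a Camel Spring Boot application:

    # Keep the kubernetes-pods component enabled and start its producers eagerly
    camel.component.kubernetes-pods.enabled=true
    camel.component.kubernetes-pods.lazy-start-producer=false

    # Route consumer exceptions of kubernetes-namespaces to the Camel routing Error Handler
    camel.component.kubernetes-namespaces.bridge-error-handler=true

    # Allow a single KubernetesClient bean from the registry to be autowired into openshift-builds
    camel.component.openshift-builds.autowired-enabled=true

Options of type KubernetesClient, such as camel.component.kubernetes-pods.kubernetes-client, would normally be satisfied by an io.fabric8.kubernetes.client.KubernetesClient bean in the registry rather than by a literal property value.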
Chapter 6. Generic ephemeral volumes | Chapter 6. Generic ephemeral volumes 6.1. Overview Generic ephemeral volumes are a type of ephemeral volume that can be provided by all storage drivers that support persistent volumes and dynamic provisioning. Generic ephemeral volumes are similar to emptyDir volumes in that they provide a per-pod directory for scratch data, which is usually empty after provisioning. Generic ephemeral volumes are specified inline in the pod spec and follow the pod's lifecycle. They are created and deleted along with the pod. Generic ephemeral volumes have the following features: Storage can be local or network-attached. Volumes can have a fixed size that pods are not able to exceed. Volumes might have some initial data, depending on the driver and parameters. Typical operations on volumes are supported, assuming that the driver supports them, including snapshotting, cloning, resizing, and storage capacity tracking. Note Generic ephemeral volumes do not support offline snapshots and resize. Due to this limitation, the following Container Storage Interface (CSI) drivers do not support the following features for generic ephemeral volumes: Azure Disk CSI driver does not support resize. Cinder CSI driver does not support snapshot. 6.2. Lifecycle and persistent volume claims The parameters for a volume claim are allowed inside a volume source of a pod. Labels, annotations, and the whole set of fields for persistent volume claims (PVCs) are supported. When such a pod is created, the ephemeral volume controller then creates an actual PVC object (from the template shown in the Creating generic ephemeral volumes procedure) in the same namespace as the pod, and ensures that the PVC is deleted when the pod is deleted. This triggers volume binding and provisioning in one of two ways: Either immediately, if the storage class uses immediate volume binding. With immediate binding, the scheduler is forced to select a node that has access to the volume after it is available. When the pod is tentatively scheduled onto a node (WaitForFirstConsumer volume binding mode). This volume binding option is recommended for generic ephemeral volumes because then the scheduler can choose a suitable node for the pod. In terms of resource ownership, a pod that has generic ephemeral storage is the owner of the PVCs that provide that ephemeral storage. When the pod is deleted, the Kubernetes garbage collector deletes the PVC, which then usually triggers deletion of the volume because the default reclaim policy of storage classes is to delete volumes. You can create quasi-ephemeral local storage by using a storage class with a reclaim policy of retain: the storage outlives the pod, and in this case, you must ensure that volume clean-up happens separately. While these PVCs exist, they can be used like any other PVC. In particular, they can be referenced as a data source in volume cloning or snapshotting. The PVC object also holds the current status of the volume. Additional resources Creating generic ephemeral volumes 6.3. Security You can enable the generic ephemeral volume feature to allow users who can create pods to also create persistent volume claims (PVCs) indirectly. This feature works even if these users do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit your security model, use an admission webhook that rejects objects such as pods that have a generic ephemeral volume.
The normal namespace quota for PVCs still applies, so even if users are allowed to use this new mechanism, they cannot use it to circumvent other policies. 6.4. Persistent volume claim naming Automatically created persistent volume claims (PVCs) are named by a combination of the pod name and the volume name, with a hyphen (-) in the middle. This naming convention also introduces a potential conflict between different pods, and between pods and manually created PVCs. For example, a pod named pod-a with a volume named scratch and a pod named pod with a volume named a-scratch both end up with the same PVC name, pod-a-scratch. Such conflicts are detected, and a PVC is only used for an ephemeral volume if it was created for the pod. This check is based on the ownership relationship. An existing PVC is not overwritten or modified, but this does not resolve the conflict. Without the right PVC, a pod cannot start. Important Be careful when naming pods and volumes inside the same namespace so that naming conflicts do not occur. 6.5. Creating generic ephemeral volumes Procedure Create the pod object definition and save it to a file. Include the generic ephemeral volume information in the file. my-example-pod-with-generic-vols.yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-app
spec:
  containers:
    - name: my-frontend
      image: busybox:1.28
      volumeMounts:
      - mountPath: "/mnt/storage"
        name: data
      command: [ "sleep", "1000000" ]
  volumes:
    - name: data 1
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-app-ephvol
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "gp2-csi"
            resources:
              requests:
                storage: 1Gi
1 Generic ephemeral volume claim | [
"kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: \"/mnt/storage\" name: data command: [ \"sleep\", \"1000000\" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: \"gp2-csi\" resources: requests: storage: 1Gi"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage/generic-ephemeral-volumes |
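Following on from the procedure above, a minimal sketch of using the saved definition could look like this; the commands assume the file name shown above and a cluster where the gp2-csi storage class exists, and the object names follow from the example (pod my-app, volume data, so the generated claim is my-app-data):

    # Create the pod; the ephemeral volume controller creates the PVC from the inline template
    oc create -f my-example-pod-with-generic-vols.yaml

    # The generated claim is named <pod name>-<volume name>
    oc get pvc my-app-data

    # Deleting the pod lets the garbage collector delete the PVC as well
    oc delete pod my-app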
Chapter 12. Managing OpenID Connect and SAML Clients | Chapter 12. Managing OpenID Connect and SAML Clients Clients are entities that can request authentication of a user. Clients come in two forms. The first type of client is an application that wants to participate in single-sign-on. These clients just want Red Hat build of Keycloak to provide security for them. The other type of client is one that is requesting an access token so that it can invoke other services on behalf of the authenticated user. This section discusses various aspects around configuring clients and various ways to do it. 12.1. Managing OpenID Connect clients OpenID Connect is the recommended protocol to secure applications. It was designed from the ground up to be web friendly and it works best with HTML5/JavaScript applications. 12.1.1. Creating an OpenID Connect client To protect an application that uses the OpenID connect protocol, you create a client. Procedure Click Clients in the menu. Click Create client . Create client Leave Client type set to OpenID Connect . Enter a Client ID. This ID is an alphanumeric string that is used in OIDC requests and in the Red Hat build of Keycloak database to identify the client. Supply a Name for the client. If you plan to localize this name, set up a replacement string value. For example, a string value such as USD{myapp}. See the Server Developer Guide for more information. Click Save . This action creates the client and bring you to the Settings tab, where you can perform Basic configuration . 12.1.2. Basic configuration The Settings tab includes many options to configure this client. Settings tab 12.1.2.1. General Settings Client ID The alphanumeric ID string that is used in OIDC requests and in the Red Hat build of Keycloak database to identify the client. Name The name for the client in Red Hat build of Keycloak UI screen. To localize the name, set up a replacement string value. For example, a string value such as USD{myapp}. See the Server Developer Guide for more information. Description The description of the client. This setting can also be localized. Always Display in Console Always list this client in the Account Console even if this user does not have an active session. 12.1.2.2. Access Settings Root URL If Red Hat build of Keycloak uses any configured relative URLs, this value is prepended to them. Home URL Provides the default URL for when the auth server needs to redirect or link back to the client. Valid Redirect URIs Required field. Enter a URL pattern and click + to add and - to remove existing URLs and click Save . Exact (case sensitive) string matching is used to compare valid redirect URIs. You can use wildcards at the end of the URL pattern. For example http://host.com/path/* . To avoid security issues, if the passed redirect URI contains the userinfo part or its path manages access to parent directory ( /../ ) no wildcard comparison is performed but the standard and secure exact string matching. The full wildcard * valid redirect URI can also be configured to allow any http or https redirect URI. Please do not use it in production environments. Exclusive redirect URI patterns are typically more secure. See Unspecific Redirect URIs for more information. Web Origins Enter a URL pattern and click + to add and - to remove existing URLs. Click Save. This option handles Cross-Origin Resource Sharing (CORS) . 
If browser JavaScript attempts an AJAX HTTP request to a server whose domain is different from the one that the JavaScript code came from, the request must use CORS. The server must handle CORS requests; otherwise, the browser will not display or allow the request to be processed. This protocol protects against XSS, CSRF, and other JavaScript-based attacks. Domain URLs listed here are embedded within the access token sent to the client application. The client application uses this information to decide whether to allow a CORS request to be invoked on it. Only Red Hat build of Keycloak client adapters support this feature. See Securing Applications and Services Guide for more information. Admin URL Callback endpoint for a client. The server uses this URL to make callbacks like pushing revocation policies, performing backchannel logout, and other administrative operations. For Red Hat build of Keycloak servlet adapters, this URL can be the root URL of the servlet application. For more information, see Securing Applications and Services Guide. 12.1.2.3. Capability Config Client authentication The type of OIDC client. ON For server-side clients that perform browser logins and require client secrets when making an Access Token Request. This setting should be used for server-side applications. OFF For client-side clients that perform browser logins. As it is not possible to ensure that secrets can be kept safe with client-side clients, it is important to restrict access by configuring correct redirect URIs. Authorization Enables or disables fine-grained authorization support for this client. Standard Flow If enabled, this client can use the OIDC Authorization Code Flow. Direct Access Grants If enabled, this client can use the OIDC Direct Access Grants. Implicit Flow If enabled, this client can use the OIDC Implicit Flow. Service account roles If enabled, this client can authenticate to Red Hat build of Keycloak and retrieve an access token dedicated to this client. In terms of the OAuth2 specification, this enables support of the Client Credentials Grant for this client. OAuth 2.0 Device Authorization Grant If enabled, this client can use the OIDC Device Authorization Grant. OIDC CIBA Grant If enabled, this client can use the OIDC Client Initiated Backchannel Authentication Grant. 12.1.2.4. Login settings Login theme A theme to use for login, OTP, grant registration, and forgotten password pages. Consent required If enabled, users have to consent to client access. Display client on screen This switch applies if Consent Required is Off. Off The consent screen will contain only the consents corresponding to configured client scopes. On There will also be one item on the consent screen about this client itself. Client consent screen text Applies if Consent required and Display client on screen are enabled. Contains the text that will be on the consent screen about permissions for this client. 12.1.2.5. Logout settings Front channel logout If Front Channel Logout is enabled, the application should be able to log out users through the front channel as per OpenID Connect Front-Channel Logout specification. If enabled, you should also provide the Front-Channel Logout URL.
Front-channel logout URL URL that will be used by Red Hat build of Keycloak to send logout requests to clients through the front-channel. Backchannel logout URL URL that will cause the client to log itself out when a logout request is sent to this realm (via end_session_endpoint). If omitted, no logout requests are sent to the client. Backchannel logout session required Specifies whether a session ID Claim is included in the Logout Token when the Backchannel Logout URL is used. Backchannel logout revoke offline sessions Specifies whether a revoke_offline_access event is included in the Logout Token when the Backchannel Logout URL is used. Red Hat build of Keycloak will revoke offline sessions when receiving a Logout Token with this event. 12.1.3. Advanced configuration After completing the fields on the Settings tab, you can use the other tabs to perform advanced configuration. 12.1.3.1. Advanced tab When you click the Advanced tab, additional fields are displayed. For details on a specific field, click the question mark icon for that field. However, certain fields are described in detail in this section. 12.1.3.2. Fine grain OpenID Connect configuration Logo URL URL that references a logo for the Client application. Policy URL URL that the Relying Party Client provides to the End-User to read about how the profile data will be used. Terms of Service URL URL that the Relying Party Client provides to the End-User to read about the Relying Party's terms of service. Signed and Encrypted ID Token Support Red Hat build of Keycloak can encrypt ID tokens according to the Json Web Encryption (JWE) specification. The administrator determines if ID tokens are encrypted for each client. The key used for encrypting the ID token is the Content Encryption Key (CEK). Red Hat build of Keycloak and a client must negotiate which CEK is used and how it is delivered. The method used to determine the CEK is the Key Management Mode. The Key Management Mode that Red Hat build of Keycloak supports is Key Encryption. In Key Encryption: The client generates an asymmetric cryptographic key pair. The public key is used to encrypt the CEK. Red Hat build of Keycloak generates a CEK per ID token Red Hat build of Keycloak encrypts the ID token using this generated CEK Red Hat build of Keycloak encrypts the CEK using the client's public key. The client decrypts this encrypted CEK using their private key The client decrypts the ID token using the decrypted CEK. No party, other than the client, can decrypt the ID token. The client must pass its public key for encrypting CEK to Red Hat build of Keycloak. Red Hat build of Keycloak supports downloading public keys from a URL provided by the client. The client must provide public keys according to the Json Web Keys (JWK) specification. The procedure is: Open the client's Keys tab. Toggle JWKS URL to ON. Input the client's public key URL in the JWKS URL textbox. Key Encryption's algorithms are defined in the Json Web Algorithm (JWA) specification. Red Hat build of Keycloak supports: RSAES-PKCS1-v1_5(RSA1_5) RSAES OAEP using default parameters (RSA-OAEP) RSAES OAEP 256 using SHA-256 and MFG1 (RSA-OAEP-256) The procedure to select the algorithm is: Open the client's Advanced tab. Open Fine Grain OpenID Connect Configuration . Select the algorithm from ID Token Encryption Content Encryption Algorithm pulldown menu. 12.1.3.3. OpenID Connect Compatibility Modes This section exists for backward compatibility. Click the question mark icons for details on each field. 
OAuth 2.0 Mutual TLS Certificate Bound Access Tokens Enabled Mutual TLS binds an access token and a refresh token together with a client certificate, which is exchanged during a TLS handshake. This binding prevents an attacker from using stolen tokens. This type of token is a holder-of-key token. Unlike bearer tokens, the recipient of a holder-of-key token can verify if the sender of the token is legitimate. If this setting is on, the workflow is: A token request is sent to the token endpoint in an authorization code flow or hybrid flow. Red Hat build of Keycloak requests a client certificate. Red Hat build of Keycloak receives the client certificate. Red Hat build of Keycloak successfully verifies the client certificate. If verification fails, Red Hat build of Keycloak rejects the token. In the following cases, Red Hat build of Keycloak will verify the client sending the access token or the refresh token: A token refresh request is sent to the token endpoint with a holder-of-key refresh token. A UserInfo request is sent to UserInfo endpoint with a holder-of-key access token. A logout request is sent to non-OIDC compliant Red Hat build of Keycloak proprietary Logout endpoint with a holder-of-key refresh token. See Mutual TLS Client Certificate Bound Access Tokens in the OAuth 2.0 Mutual TLS Client Authentication and Certificate Bound Access Tokens for more details. Note Currently, Red Hat build of Keycloak client adapters do not support holder-of-key token verification. Red Hat build of Keycloak adapters treat access and refresh tokens as bearer tokens. OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) DPoP binds an access token and a refresh token together with the public part of a client's key pair. This binding prevents an attacker from using stolen tokens. This type of token is a holder-of-key token. Unlike bearer tokens, the recipient of a holder-of-key token can verify if the sender of the token is legitimate. If the client switch OAuth 2.0 DPoP Bound Access Tokens Enabled is on, the workflow is: A token request is sent to the token endpoint in an authorization code flow or hybrid flow. Red Hat build of Keycloak requests a DPoP proof. Red Hat build of Keycloak receives the DPoP proof. Red Hat build of Keycloak successfully verifies the DPoP proof. If verification fails, Red Hat build of Keycloak rejects the token. If the switch OAuth 2.0 DPoP Bound Access Tokens Enabled is off, the client can still send DPoP proof in the token request. In that case, Red Hat build of Keycloak will verify DPoP proof and will add the thumbprint to the token. But if the switch is off, DPoP binding is not enforced by the Red Hat build of Keycloak server for this client. It is recommended to have this switch on if you want to make sure that particular client always uses DPoP binding. In the following cases, Red Hat build of Keycloak will verify the client sending the access token or the refresh token: A token refresh request is sent to the token endpoint with a holder-of-key refresh token. This verification is done only for public clients as described in the DPoP specification. For confidential clients, the verification is not done as client authentication with proper client credentials is in place to ensure that request comes from the legitimate client. For public clients, both access tokens and refresh tokens are DPoP bound. For confidential clients, only access tokens are DPoP bound. A UserInfo request is sent to UserInfo endpoint with a holder-of-key access token. 
A logout request is sent to a non-OIDC compliant Red Hat build of Keycloak proprietary logout endpoint Logout endpoint with a holder-of-key refresh token. This verification is done only for public clients as described above. See OAuth 2.0 Demonstrating Proof of Possession (DPoP) for more details. Note Currently, Red Hat build of Keycloak client adapters do not support DPoP holder-of-key token verification. Red Hat build of Keycloak adapters treat access and refresh tokens as bearer tokens. Note DPoP is Technology Preview and is not fully supported. This feature is disabled by default. To enable start the server with --features=preview or --features=dpop Advanced Settings for OIDC The Advanced Settings for OpenID Connect allows you to configure overrides at the client level for session and token timeouts . Configuration Description Access Token Lifespan The value overrides the realm option with same name. Client Session Idle The value overrides the realm option with same name. The value should be shorter than the global SSO Session Idle . Client Session Max The value overrides the realm option with same name. The value should be shorter than the global SSO Session Max . Client Offline Session Idle This setting allows you to configure a shorter offline session idle timeout for the client. The timeout is amount of time the session remains idle before Red Hat build of Keycloak revokes its offline token. If not set, realm Offline Session Idle is used. Client Offline Session Max This setting allows you to configure a shorter offline session max lifespan for the client. The lifespan is the maximum time before Red Hat build of Keycloak revokes the corresponding offline token. This option needs Offline Session Max Limited enabled globally in the realm, and defaults to Offline Session Max . Proof Key for Code Exchange Code Challenge Method If an attacker steals an authorization code of a legitimate client, Proof Key for Code Exchange (PKCE) prevents the attacker from receiving the tokens that apply to the code. An administrator can select one of these options: (blank) Red Hat build of Keycloak does not apply PKCE unless the client sends appropriate PKCE parameters to Red Hat build of Keycloaks authorization endpoint. S256 Red Hat build of Keycloak applies to the client PKCE whose code challenge method is S256. plain Red Hat build of Keycloak applies to the client PKCE whose code challenge method is plain. See RFC 7636 Proof Key for Code Exchange by OAuth Public Clients for more details. ACR to Level of Authentication (LoA) Mapping In the advanced settings of a client, you can define which Authentication Context Class Reference (ACR) value is mapped to which Level of Authentication (LoA) . This mapping can be specified also at the realm as mentioned in the ACR to LoA Mapping . A best practice is to configure this mapping at the realm level, which allows to share the same settings across multiple clients. The Default ACR Values can be used to specify the default values when the login request is sent from this client to Red Hat build of Keycloak without acr_values parameter and without a claims parameter that has an acr claim attached. See official OIDC dynamic client registration specification . Warning Note that default ACR values are used as the default level, however it cannot be reliably used to enforce login with the particular level. For example, assume that you configure the Default ACR Values to level 2. Then by default, users will be required to authenticate with level 2. 
However when the user explicitly attaches the parameter into login request such as acr_values=1 , then the level 1 will be used. As a result, if the client really requires level 2, the client is encouraged to check the presence of the acr claim inside ID Token and double-check that it contains the requested level 2. For further details see Step-up Authentication and the official OIDC specification . 12.1.4. Confidential client credentials If the Client authentication of the client is set to ON , the credentials of the client must be configured under the Credentials tab. Credentials tab The Client Authenticator drop-down list specifies the type of credential to use for your client. Client ID and Secret This choice is the default setting. The secret is automatically generated. Click Regenerate to recreate the secret if necessary. Signed JWT Signed JWT is "Signed Json Web Token". When choosing this credential type you will have to also generate a private key and certificate for the client in the tab Keys . The private key will be used to sign the JWT, while the certificate is used by the server to verify the signature. Keys tab Click on the Generate new keys button to start this process. Generate keys Select the archive format you want to use. Enter a key password . Enter a store password . Click Generate . When you generate the keys, Red Hat build of Keycloak will store the certificate and you download the private key and certificate for your client. You can also generate keys using an external tool and then import the client's certificate by clicking Import Certificate . Import certificate Select the archive format of the certificate. Enter the store password. Select the certificate file by clicking Import File . Click Import . Importing a certificate is unnecessary if you click Use JWKS URL . In this case, you can provide the URL where the public key is published in JWK format. With this option, if the key is ever changed, Red Hat build of Keycloak reimports the key. If you are using a client secured by Red Hat build of Keycloak adapter, you can configure the JWKS URL in this format, assuming that https://myhost.com/myapp is the root URL of your client application: https://myhost.com/myapp/k_jwks See Server Developer Guide for more details. Signed JWT with Client Secret If you select this option, you can use a JWT signed by client secret instead of the private key. The client secret will be used to sign the JWT by the client. X509 Certificate Red Hat build of Keycloak will validate if the client uses proper X509 certificate during the TLS Handshake. X509 certificate The validator also checks the Subject DN field of the certificate with a configured regexp validation expression. For some use cases, it is sufficient to accept all certificates. In that case, you can use (.*?)(?:USD) expression. Two ways exist for Red Hat build of Keycloak to obtain the Client ID from the request: The client_id parameter in the query (described in Section 2.2 of the OAuth 2.0 Specification ). Supply client_id as a form parameter. 12.1.5. Client Secret Rotation Important Please note that Client Secret Rotation support is in development. Use this feature experimentally. For a client with Confidential Client authentication Red Hat build of Keycloak supports the functionality of rotating client secrets through Client Policies . The client secrets rotation policy provides greater security in order to alleviate problems such as secret leakage. 
Once enabled, Red Hat build of Keycloak supports up to two concurrently active secrets for each client. The policy manages rotations according to the following settings: Secret expiration: [seconds] - When the secret is rotated, this is the expiration of time of the new secret. The amount, in seconds , added to the secret creation date. Calculated at policy execution time. Rotated secret expiration: [seconds] - When the secret is rotated, this value is the remaining expiration time for the old secret. This value should be always smaller than Secret expiration. When the value is 0, the old secret will be immediately removed during client rotation. The amount, in seconds , added to the secret rotation date. Calculated at policy execution time. Remaining expiration time for rotation during update: [seconds] - Time period when an update to a dynamic client should perform client secret rotation. Calculated at policy execution time. When a client secret rotation occurs, a new main secret is generated and the old client main secret becomes the secondary secret with a new expiration date. 12.1.5.1. Rules for client secret rotation Rotations do not occur automatically or through a background process. In order to perform the rotation, an update action is required on the client, either through the Red Hat build of Keycloak Admin Console through the function of Regenerate Secret , in the client's credentials tab or Admin REST API. When invoking a client update action, secret rotation occurs according to the rules: When the value of Secret expiration is less than the current date. During dynamic client registration client-update request, the client secret will be automatically rotated if the value of Remaining expiration time for rotation during update match the period between the current date and the Secret expiration . Additionally it is possible through Admin REST API to force a client secret rotation at any time. Note During the creation of new clients, if the client secret rotation policy is active, the behavior will be applied automatically. Warning To apply the secret rotation behavior to an existing client, update that client after you define the policy so that the behavior is applied. 12.1.6. Creating an OIDC Client Secret Rotation Policy The following is an example of defining a secret rotation policy: Procedure Click Realm Settings in the menu. Click Client Policies tab. On the Profiles page, click Create client profile . Create a profile Enter any name for Name . Enter a description that helps you identify the purpose of the profile for Description . Click Save . This action creates the profile and enables you to configure executors. Click Add executor to configure an executor for this profile. Create a profile executor Select secret-rotation for Executor Type . Enter the maximum duration time of each secret, in seconds, for Secret Expiration . Enter the maximum duration time of each rotated secret, in seconds, for Rotated Secret Expiration . Warning Remember that the Rotated Secret Expiration value must always be less than Secret Expiration . Enter the amount of time, in seconds, after which any update action will update the client for Remain Expiration Time . Click Add . In the example above: Each secret is valid for one week. The rotated secret expires after two days. The window for updating dynamic clients starts one day before the secret expires. Return to the Client Policies tab. Click Policies . Click Create client policy . 
Create the Client Secret Rotation Policy Enter any name for Name. Enter a description that helps you identify the purpose of the policy for Description. Click Save. This action creates the policy and enables you to associate policies with profiles. It also allows you to configure the conditions for policy execution. Under Conditions, click Add condition. Create the Client Secret Rotation Policy Condition To apply the behavior to all confidential clients, select client-access-type in the Condition Type field. Note To apply to a specific group of clients, another approach would be to select the client-roles type in the Condition Type field. In this way, you could create specific roles and assign a custom rotation configuration to each role. Add confidential to the field Client Access Type. Click Add. Back in the policy settings, under Client Profiles, click Add client profile and then select Weekly Client Secret Rotation Profile from the list and then click Add. Client Secret Rotation Policy Note To apply the secret rotation behavior to an existing client, follow these steps: Using the Admin Console Click Clients in the menu. Click a client. Click the Credentials tab. Click Regenerate to regenerate the client secret. Using client REST services, it can be executed in two ways: Through an update operation on a client Through the regenerate client secret endpoint 12.1.7. Using a service account Each OIDC client has a built-in service account. Use this service account to obtain an access token. Procedure Click Clients in the menu. Select your client. Click the Settings tab. Toggle Client authentication to On. Select Service accounts roles. Click Save. Configure your client credentials. Click the Scope tab. Verify that you have roles or toggle Full Scope Allowed to ON. Click the Service Account Roles tab. Configure the roles available to this service account for your client. Roles from access tokens are the intersection of: Role scope mappings of a client combined with the role scope mappings inherited from linked client scopes. Service account roles. The REST URL to invoke is /realms/{realm-name}/protocol/openid-connect/token. This URL must be invoked as a POST request and requires that you post the client credentials with the request. By default, client credentials are represented by the clientId and clientSecret of the client in the Authorization: Basic header, but you can also authenticate the client with a signed JWT assertion or any other custom mechanism for client authentication. You also need to set the grant_type parameter to "client_credentials" as per the OAuth2 specification. For example, the POST invocation to retrieve a service account token can look like the example shown at the end of this section. The response would be similar to the Access Token Response from the OAuth 2.0 specification. Only the access token is returned by default. No refresh token is returned and no user session is created on the Red Hat build of Keycloak side upon successful authentication by default. Due to the lack of a refresh token, re-authentication is required when the access token expires. However, this situation does not mean any additional overhead for the Red Hat build of Keycloak server because sessions are not created by default. In this situation, logout is unnecessary. However, issued access tokens can be revoked by sending requests to the OAuth2 Revocation Endpoint as described in the OpenID Connect Endpoints section. Additional resources For more details, see Client Credentials Grant.
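A minimal sketch of such an invocation with curl follows; the host keycloak.example.com, the realm name myrealm, the client ID myclient, and the placeholder secret are illustrative assumptions, and the response body is abbreviated:

    # Token request: client credentials sent in the Authorization: Basic header via -u
    curl -s -X POST \
      -u "myclient:<client-secret>" \
      -d "grant_type=client_credentials" \
      "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token"

    # Abbreviated response
    {
      "access_token": "eyJhbGciOiJSUzI1NiIs...",
      "expires_in": 300,
      "token_type": "Bearer",
      "scope": "profile email"
    }

12.1.8.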
Audience support Typically, the environment where Red Hat build of Keycloak is deployed consists of a set of confidential or public client applications that use Red Hat build of Keycloak for authentication. Services ( Resource Servers in the OAuth 2 specification ) are also available that serve requests from client applications and provide resources to these applications. These services require an Access token (Bearer token) to be sent to them to authenticate a request. This token is obtained by the frontend application upon login to Red Hat build of Keycloak. In the environment where trust among services is low, you may encounter this scenario: A frontend client application requires authentication against Red Hat build of Keycloak. Red Hat build of Keycloak authenticates a user. Red Hat build of Keycloak issues a token to the application. The application uses the token to invoke an untrusted service. The untrusted service returns the response to the application. However, it keeps the applications token. The untrusted service then invokes a trusted service using the applications token. This results in broken security as the untrusted service misuses the token to access other services on behalf of the client application. This scenario is unlikely in environments with a high level of trust between services but not in environments where trust is low. In some environments, this workflow may be correct as the untrusted service may have to retrieve data from a trusted service to return data to the original client application. An unlimited audience is useful when a high level of trust exists between services. Otherwise, the audience should be limited. You can limit the audience and, at the same time, allow untrusted services to retrieve data from trusted services. In this case, ensure that the untrusted service and the trusted service are added as audiences to the token. To prevent any misuse of the access token, limit the audience on the token and configure your services to verify the audience on the token. The flow will change as follows: A frontend application authenticates against Red Hat build of Keycloak. Red Hat build of Keycloak authenticates a user. Red Hat build of Keycloak issues a token to the application. The application knows that it will need to invoke an untrusted service so it places scope=<untrusted service> in the authentication request sent to Red Hat build of Keycloak (see Client Scopes section for more details about the scope parameter). The token issued to the application contains a reference to the untrusted service in its audience ( "audience": [ "<untrusted service>" ] ) which declares that the client uses this access token to invoke the untrusted service. The untrusted service invokes a trusted service with the token. Invocation is not successful because the trusted service checks the audience on the token and find that its audience is only for the untrusted service. This behavior is expected and security is not broken. If the client wants to invoke the trusted service later, it must obtain another token by reissuing the SSO login with scope=<trusted service> . The returned token will then contain the trusted service as an audience: "audience": [ "<trusted service>" ] Use this value to invoke the <trusted service> . 12.1.8.1. Setup When setting up audience checking: Ensure that services are configured to check audience on the access token sent to them by adding the flag verify-token-audience in the adapter configuration. See Adapter configuration for details. 
Ensure that access tokens issued by Red Hat build of Keycloak contain all necessary audiences. Audiences can be added using the client roles as described in the section or hardcoded. See Hardcoded audience . 12.1.8.2. Automatically add audience An Audience Resolve protocol mapper is defined in the default client scope roles . The mapper checks for clients that have at least one client role available for the current token. The client ID of each client is then added as an audience, which is useful if your service clients rely on client roles. Service client could be usually a client without any flows enabled, which may not have any tokens issued directly to itself. It represents an OAuth 2 Resource Server . For example, for a service client and a confidential client, you can use the access token issued for the confidential client to invoke the service client REST service. The service client will be automatically added as an audience to the access token issued for the confidential client if the following are true: The service client has any client roles defined on itself. Target user has at least one of those client roles assigned. Confidential client has the role scope mappings for the assigned role. Note If you want to ensure that the audience is not added automatically, do not configure role scope mappings directly on the confidential client. Instead, you can create a dedicated client scope that contains the role scope mappings for the client roles of your dedicated client scope. Assuming that the client scope is added as an optional client scope to the confidential client, the client roles and the audience will be added to the token if explicitly requested by the scope=<trusted service> parameter. Note The frontend client itself is not automatically added to the access token audience, therefore allowing easy differentiation between the access token and the ID token, since the access token will not contain the client for which the token is issued as an audience. If you need the client itself as an audience, see the hardcoded audience option. However, using the same client as both frontend and REST service is not recommended. 12.1.8.3. Hardcoded audience When your service relies on realm roles or does not rely on the roles in the token at all, it can be useful to use a hardcoded audience. A hardcoded audience is a protocol mapper, that will add the client ID of the specified service client as an audience to the token. You can use any custom value, for example a URL, if you want to use a different audience than the client ID. You can add the protocol mapper directly to the frontend client. If the protocol mapper is added directly, the audience will always be added as well. For more control over the protocol mapper, you can create the protocol mapper on the dedicated client scope, which will be called for example good-service . Audience protocol mapper From the Client details tab of the good-service client, you can generate the adapter configuration and confirm that verify-token-audience is set to true . This action forces the adapter to verify the audience if you use this configuration. You need to ensure that the confidential client is able to request good-service as an audience in its tokens. On the confidential client: Click the Client Scopes tab. Assign good-service as an optional (or default) client scope. See Client Scopes Linking section for more details. You can optionally Evaluate Client Scopes and generate an example access token. 
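For illustration only (the client scope name good-service comes from the text above; the exact claims depend on your mappers and scopes), the decoded payload of such an evaluated access token could contain an audience entry similar to the following, with other claims omitted:

    {
      "aud": [ "good-service" ],
      "scope": "openid good-service"
    }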
good-service will be added to the audience of the generated access token if good-service is included in the scope parameter, when you assigned it as an optional client scope. In your confidential client application, ensure that the scope parameter is used. The value good-service must be included when you want to issue the token for accessing good-service . See: parameters forwarding section if your application uses the servlet adapter. javascript adapter section if your application uses the javascript adapter. Note Both the Audience and Audience Resolve protocol mappers add the audiences to the access token only, by default. The ID Token typically contains only a single audience, the client ID for which the token was issued, a requirement of the OpenID Connect specification. However, the access token does not necessarily have the client ID, which was the token issued for, unless the audience mappers added it. 12.2. Creating a SAML client Red Hat build of Keycloak supports SAML 2.0 for registered applications. POST and Redirect bindings are supported. You can choose to require client signature validation. You can have the server sign and/or encrypt responses as well. Procedure Click Clients in the menu. Click Create client to go to the Create client page. Set Client type to SAML . Create client Enter the Client ID of the client. This is often a URL and is the expected issuer value in SAML requests sent by the application. Click Save . This action creates the client and brings you to the Settings tab. The following sections describe each setting on this tab. 12.2.1. Settings tab The Settings tab includes many options to configure this client. Client settings 12.2.1.1. General settings Client ID The alphanumeric ID string that is used in OIDC requests and in the Red Hat build of Keycloak database to identify the client. This value must match the issuer value sent with AuthNRequests. Red Hat build of Keycloak pulls the issuer from the Authn SAML request and match it to a client by this value. Name The name for the client in a Red Hat build of Keycloak UI screen. To localize the name, set up a replacement string value. For example, a string value such as USD{myapp}. See the Server Developer Guide for more information. Description The description of the client. This setting can also be localized. Always Display in Console Always list this client in the Account Console even if this user does not have an active session. 12.2.1.2. Access Settings Root URL When Red Hat build of Keycloak uses a configured relative URL, this value is prepended to the URL. Home URL If Red Hat build of Keycloak needs to link to a client, this URL is used. Valid Redirect URIs Enter a URL pattern and click the + sign to add. Click the - sign to remove. Click Save to save these changes. Wildcards values are allowed only at the end of a URL. For example, http://host.com/*USDUSD . This field is used when the exact SAML endpoints are not registered and Red Hat build of Keycloak pulls the Assertion Consumer URL from a request. IDP-Initiated SSO URL name URL fragment name to reference client when you want to do IDP Initiated SSO. Leaving this empty will disable IDP Initiated SSO. The URL you will reference from your browser will be: server-root /realms/{realm}/protocol/saml/clients/{client-url-name} IDP Initiated SSO Relay State Relay state you want to send with SAML request when you want to do IDP Initiated SSO. Master SAML Processing URL This URL is used for all SAML requests and the response is directed to the SP. 
It is used as the Assertion Consumer Service URL and the Single Logout Service URL. If login requests contain the Assertion Consumer Service URL, then those login requests will take precedence. This URL must be validated by a registered Valid Redirect URI pattern. 12.2.1.3. SAML capabilities Name ID Format The Name ID Format for the subject. This format is used if no name ID policy is specified in a request, or if the Force Name ID Format attribute is set to ON. Force Name ID Format If a request has a name ID policy, ignore it and use the value configured in the Admin Console under Name ID Format . Force POST Binding By default, Red Hat build of Keycloak responds using the initial SAML binding of the original request. By enabling Force POST Binding , Red Hat build of Keycloak responds using the SAML POST binding even if the original request used the redirect binding. Force artifact binding If enabled, response messages are returned to the client through the SAML ARTIFACT binding system. Include AuthnStatement SAML login responses may specify the authentication method used, such as password, as well as timestamps of the login and the session expiration. Include AuthnStatement is enabled by default, so that the AuthnStatement element will be included in login responses. Setting this to OFF prevents clients from determining the maximum session length, which can create client sessions that do not expire. Include OneTimeUse Condition If enabled, a OneTimeUse Condition is included in login responses. Optimize REDIRECT signing key lookup When set to ON, the SAML protocol messages include the Red Hat build of Keycloak native extension. This extension contains a hint with the signing key ID. The SP uses the extension for signature validation instead of attempting to validate the signature using keys. This option applies to REDIRECT bindings where the signature is transferred in query parameters and this information is not found in the signature information. This is contrary to POST binding messages where the key ID is always included in the document signature. This option is used when the Red Hat build of Keycloak server and adapter provide the IDP and SP. This option is only relevant when Sign Documents is set to ON. 12.2.1.4. Signature and Encryption Sign Documents When set to ON, Red Hat build of Keycloak signs the document using the realm's private key. Sign Assertions The assertion is signed and embedded in the SAML XML Auth response. Signature Algorithm The algorithm used in signing SAML documents. Note that SHA1-based algorithms are deprecated and may be removed in a future release. We recommend using a more secure algorithm instead of *_SHA1 . Also, with *_SHA1 algorithms, signature verification does not work if the SAML client runs on Java 17 or higher. SAML Signature Key Name Signed SAML documents sent using POST binding contain the identification of the signing key in the KeyName element. This behavior can be controlled by the SAML Signature Key Name option, which controls the contents of the KeyName . KEY_ID The KeyName contains the key ID. This option is the default option. CERT_SUBJECT The KeyName contains the subject from the certificate corresponding to the realm key. This option is expected by Microsoft Active Directory Federation Services. NONE The KeyName hint is completely omitted from the SAML message. Canonicalization Method The canonicalization method for XML signatures. 12.2.1.5.
Login settings Login theme A theme to use for login, OTP, grant registration, and forgotten password pages. Consent required If enabled, users have to consent to client access. For client-side clients that perform browser logins. As it is not possible to ensure that secrets can be kept safe with client-side clients, it is important to restrict access by configuring correct redirect URIs. Display client on screen This switch applies if Consent Required is Off . Off The consent screen will contain only the consents corresponding to configured client scopes. On There will be also one item on the consent screen about this client itself. Client consent screen text Applies if Consent required and Display client on screen are enabled. Contains the text that will be on the consent screen about permissions for this client. 12.2.1.6. Logout settings Front channel logout If Front Channel Logout is enabled, the application requires a browser redirect to perform a logout. For example, the application may require a cookie to be reset which could only be done via a redirect. If Front Channel Logout is disabled, Red Hat build of Keycloak invokes a background SAML request to log out of the application. 12.2.2. Keys tab Encrypt Assertions Encrypts the assertions in SAML documents with the realms private key. The AES algorithm uses a key size of 128 bits. Client Signature Required If Client Signature Required is enabled, documents coming from a client are expected to be signed. Red Hat build of Keycloak will validate this signature using the client public key or cert set up in the Keys tab. Allow ECP Flow If true, this application is allowed to use SAML ECP profile for authentication. 12.2.3. Advanced tab This tab has many fields for specific situations. Some fields are covered in other topics. For details on other fields, click the question mark icon. 12.2.3.1. Fine Grain SAML Endpoint Configuration Logo URL URL that references a logo for the Client application. Policy URL URL that the Relying Party Client provides to the End-User to read about how the profile data will be used. Terms of Service URL URL that the Relying Party Client provides to the End-User to read about the Relying Party's terms of service. Assertion Consumer Service POST Binding URL POST Binding URL for the Assertion Consumer Service. Assertion Consumer Service Redirect Binding URL Redirect Binding URL for the Assertion Consumer Service. Logout Service POST Binding URL POST Binding URL for the Logout Service. Logout Service Redirect Binding URL Redirect Binding URL for the Logout Service. Logout Service Artifact Binding URL Artifact Binding URL for the Logout Service. When set together with the Force Artifact Binding option, Artifact binding is forced for both login and logout flows. Artifact binding is not used for logout unless this property is set. Logout Service SOAP Binding URL Redirect Binding URL for the Logout Service. Only applicable if back channel logout is used. Artifact Binding URL URL to send the HTTP artifact messages to. Artifact Resolution Service URL of the client SOAP endpoint where to send the ArtifactResolve messages to. 12.2.4. IDP Initiated login IDP Initiated Login is a feature that allows you to set up an endpoint on the Red Hat build of Keycloak server that will log you into a specific application/client. In the Settings tab for your client, you need to specify the IDP Initiated SSO URL Name . This is a simple string with no whitespace in it. 
After this you can reference your client at the following URL: root/realms/{realm}/protocol/saml/clients/{url-name} The IDP initiated login implementation prefers POST over REDIRECT binding (check saml bindings for more information). Therefore the final binding and SP URL are selected in the following way: If the specific Assertion Consumer Service POST Binding URL is defined (inside Fine Grain SAML Endpoint Configuration section of the client settings) POST binding is used through that URL. If the general Master SAML Processing URL is specified then POST binding is used again throughout this general URL. As the last resort, if the Assertion Consumer Service Redirect Binding URL is configured (inside Fine Grain SAML Endpoint Configuration ) REDIRECT binding is used with this URL. If your client requires a special relay state, you can also configure this on the Settings tab in the IDP Initiated SSO Relay State field. Alternatively, browsers can specify the relay state in a RelayState query parameter, i.e. root/realms/{realm}/protocol/saml/clients/{url-name}?RelayState=thestate . When using identity brokering , it is possible to set up an IDP Initiated Login for a client from an external IDP. The actual client is set up for IDP Initiated Login at broker IDP as described above. The external IDP has to set up the client for application IDP Initiated Login that will point to a special URL pointing to the broker and representing IDP Initiated Login endpoint for a selected client at the brokering IDP. This means that in client settings at the external IDP: IDP Initiated SSO URL Name is set to a name that will be published as IDP Initiated Login initial point, Assertion Consumer Service POST Binding URL in the Fine Grain SAML Endpoint Configuration section has to be set to the following URL: broker-root/realms/{broker-realm}/broker/{idp-name}/endpoint/clients/{client-id} , where: broker-root is base broker URL broker-realm is name of the realm at broker where external IDP is declared idp-name is name of the external IDP at broker client-id is the value of IDP Initiated SSO URL Name attribute of the SAML client defined at broker. It is this client, which will be made available for IDP Initiated Login from the external IDP. Please note that you can import basic client settings from the brokering IDP into client settings of the external IDP - just use SP Descriptor available from the settings of the identity provider in the brokering IDP, and add clients/ client-id to the endpoint URL. 12.2.5. Using an entity descriptor to create a client Instead of registering a SAML 2.0 client manually, you can import the client using a standard SAML Entity Descriptor XML file. The Client page includes an Import client option. Add client Procedure Click Browse . Load the file that contains the XML entity descriptor information. Review the information to ensure everything is set up correctly. Some SAML client adapters, such as mod-auth-mellon , need the XML Entity Descriptor for the IDP. You can find this descriptor by going to this URL: where realm is the realm of your client. 12.3. Client links To link from one client to another, Red Hat build of Keycloak provides a redirect endpoint: /realms/realm_name/clients/{client-id}/redirect . If a client accesses this endpoint using a HTTP GET request, Red Hat build of Keycloak returns the configured base URL for the provided Client and Realm in the form of an HTTP 307 (Temporary Redirect) in the response's Location header. 
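You can observe this redirect directly from the command line; in this sketch the host name is a placeholder and account is the built-in Account Console client of the master realm:

# Perform a GET but print only the response headers; the Location header carries the client's base URL.
curl -s -D - -o /dev/null "https://keycloak.example.com/realms/master/clients/account/redirect"
# Expected response (abbreviated):
# HTTP/1.1 307 Temporary Redirect
# Location: https://keycloak.example.com/realms/master/account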
As a result of this, a client needs only to know the Realm name and the Client ID to link to them. This indirection avoids hard-coding client base URLs. As an example, given the realm master and the client-id account : This URL temporarily redirects to: http://host:port/realms/master/account 12.4. OIDC token and SAML assertion mappings Applications receiving ID tokens, access tokens, or SAML assertions may require different roles and user metadata. You can use Red Hat build of Keycloak to: Hardcode roles, claims and custom attributes. Pull user metadata into a token or assertion. Rename roles. You perform these actions in the Mappers tab in the Admin Console. Mappers tab New clients do not have built-in mappers but they can inherit some mappers from client scopes. See the client scopes section for more details. Protocol mappers map items (such as an email address, for example) to a specific claim in the identity and access token. The function of a mapper should be self-explanatory from its name. You add pre-configured mappers by clicking Add Builtin . Each mapper has a set of common settings. Additional settings are available, depending on the mapper type. Click Edit to a mapper to access the configuration screen to adjust these settings. Mapper config Details on each option can be viewed by hovering over its tooltip. You can use most OIDC mappers to control where the claim gets placed. You opt to include or exclude the claim from the id and access tokens by adjusting the Add to ID token and Add to access token switches. You can add mapper types as follows: Procedure Go to the Mappers tab. Click Configure a new mapper . Add mapper Select a Mapper Type from the list box. 12.4.1. Priority order Mapper implementations have priority order . Priority order is not the configuration property of the mapper. It is the property of the concrete implementation of the mapper. Mappers are sorted by the order in the list of mappers. The changes in the token or assertion are applied in that order with the lowest applying first. Therefore, the implementations that are dependent on other implementations are processed in the necessary order. For example, to compute the roles which will be included with a token: Resolve audiences based on those roles. Process a JavaScript script that uses the roles and audiences already available in the token. 12.4.2. OIDC user session note mappers User session details are defined using mappers and are automatically included when you use or enable a feature on a client. Click Add builtin to include session details. Impersonated user sessions provide the following details: IMPERSONATOR_ID : The ID of an impersonating user. IMPERSONATOR_USERNAME : The username of an impersonating user. Service account sessions provide the following details: clientId : The client ID of the service account. client_id : The client ID of the service account. clientAddress : The remote host IP of the service account's authenticated device. clientHost : The remote host name of the service account's authenticated device. 12.4.3. Script mapper Use the Script Mapper to map claims to tokens by running user-defined JavaScript code. For more details about deploying scripts to the server, see JavaScript Providers . When scripts deploy, you should be able to select the deployed scripts from the list of available mappers. 12.4.4. Using lightweight access token The access token in Red Hat build of Keycloak contains sensitive information, including Personal Identifiable Information (PII). 
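To make that concrete, the following is a hedged sketch of the kind of payload a decoded (non-lightweight) access token can carry; all values are invented, and the exact claim set depends on the client scopes and protocol mappers in effect:

{
  "exp": 1712345678,
  "iat": 1712345378,
  "iss": "https://keycloak.example.com/realms/myrealm",
  "aud": "good-service",
  "sub": "f3a1c0de-5b2e-4c1a-9e7d-2b6f8a4d1c90",
  "typ": "Bearer",
  "azp": "confidential-client",
  "scope": "openid profile email phone",
  "name": "Alice Liddell",
  "preferred_username": "alice",
  "given_name": "Alice",
  "family_name": "Liddell",
  "email": "alice@example.com",
  "phone_number": "+1-555-0100"
}

Note how the profile, email, and phone claims identify the user directly.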
Therefore, if the resource server does not want to disclose this type of information to third party entities such as clients, Red Hat build of Keycloak supports lightweight access tokens that remove PII from access tokens. Further, when the resource server needs the PII that was removed from the access token, it can retrieve it by sending the access token to Red Hat build of Keycloak's token introspection endpoint. Information that cannot be removed from a lightweight access token Protocol mappers control which information is put onto an access token, and lightweight access tokens use protocol mappers as well. Therefore, the following information cannot be removed from the lightweight access token: exp , iat , auth_time , jti , iss , sub , typ , azp , nonce , session_state , sid , scope , cnf Using a lightweight access token in Red Hat build of Keycloak By applying the use-lightweight-access-token executor of client policies to a client, the client can receive a lightweight access token instead of a regular access token. The lightweight access token contains a claim controlled by a protocol mapper only if the mapper's setting Add to lightweight access token (default OFF) is turned ON. Also, by turning ON the protocol mapper's setting Add to token introspection , the client can obtain the claim by sending the access token to Red Hat build of Keycloak's token introspection endpoint. 12.5. Generating client adapter config Red Hat build of Keycloak can generate configuration files that you can use to install a client adapter in your application's deployment environment. A number of adapter types are supported for OIDC and SAML. Click the Action menu and select the Download adapter config option. Select the Format Option you want the configuration generated for. All Red Hat build of Keycloak client adapters for OIDC and SAML are supported. The mod-auth-mellon Apache HTTPD adapter for SAML is supported as well as standard SAML entity descriptor files. 12.6. Client scopes Use Red Hat build of Keycloak to define a shared client configuration in an entity called a client scope . A client scope configures protocol mappers and role scope mappings for multiple clients. Client scopes also support the OAuth 2 scope parameter. Client applications use this parameter to request claims or roles in the access token, depending on the requirement of the application. To create a client scope, follow these steps: Click Client Scopes in the menu. Client scopes list Click Create . Name your client scope. Click Save . A client scope has similar tabs to regular clients. You can define protocol mappers and role scope mappings . These mappings can be inherited by other clients that are configured to inherit from this client scope. 12.6.1. Protocol When you create a client scope, choose the Protocol . Clients linked in the same scope must have the same protocol. Each realm has a set of pre-defined built-in client scopes in the menu. SAML protocol: The role_list . This scope contains one protocol mapper for the roles list in the SAML assertion. OpenID Connect protocol: Several client scopes are available: roles This scope is not defined in the OpenID Connect specification and is not added automatically to the scope claim in the access token. This scope has mappers, which are used to add the roles of the user to the access token and add audiences for clients that have at least one client role. These mappers are described in more detail in the Audience section .
web-origins This scope is also not defined in the OpenID Connect specification and is not added to the scope claim in the access token. This scope is used to add allowed web origins to the access token allowed-origins claim. microprofile-jwt This scope handles claims defined in the MicroProfile/JWT Auth Specification . This scope defines a user property mapper for the upn claim and a realm role mapper for the groups claim. These mappers can be changed so different properties can be used to create the MicroProfile/JWT specific claims. offline_access This scope is used in cases when clients need to obtain offline tokens. More details on offline tokens are available in the Offline Access section and in the OpenID Connect specification . profile email address phone The client scopes profile , email , address and phone are defined in the OpenID Connect specification . These scopes do not have any role scope mappings defined but they do have protocol mappers defined. These mappers correspond to the claims defined in the OpenID Connect specification. For example, when you open the phone client scope and open the Mappers tab, you will see the protocol mappers which correspond to the claims defined in the specification for the scope phone . Client scope mappers When the phone client scope is linked to a client, the client automatically inherits all the protocol mappers defined in the phone client scope. Access tokens issued for this client contain the phone number information about the user, assuming that the user has a defined phone number. Built-in client scopes contain the protocol mappers as defined in the specification. You are free to edit client scopes and create, update, or remove any protocol mappers or role scope mappings. 12.6.2. Consent related settings Client scopes contain options related to the consent screen. Those options are useful if Consent Required is enabled on the linked client. Display On Consent Screen If Display On Consent Screen is enabled, and the scope is added to a client that requires consent, the text specified in Consent Screen Text will be displayed on the consent screen. This text is shown when the user is authenticated and before the user is redirected from Red Hat build of Keycloak to the client. If Display On Consent Screen is disabled, this client scope will not be displayed on the consent screen. Consent Screen Text The text displayed on the consent screen when this client scope is added to a client that requires consent. The value defaults to the name of the client scope and can be customised by specifying a substitution variable with ${var-name} strings. The customised value is configured within the property files in your theme. See the Server Developer Guide for more information on customisation. 12.6.3. Link client scope with the client Linking between a client scope and a client is configured in the Client Scopes tab of the client. Two ways of linking between a client scope and a client are available. Default Client Scopes This setting is applicable to the OpenID Connect and SAML clients. Default client scopes are applied when issuing OpenID Connect tokens or SAML assertions for a client. The client will inherit Protocol Mappers and Role Scope Mappings that are defined on the client scope. For the OpenID Connect Protocol, the Mappers and Role Scope Mappings are always applied, regardless of the value used for the scope parameter in the OpenID Connect authorization request.
Optional Client Scopes This setting is applicable only for OpenID Connect clients. Optional client scopes are applied when issuing tokens for this client but only when requested by the scope parameter in the OpenID Connect authorization request. 12.6.3.1. Example For this example, assume the client has profile and email linked as default client scopes, and phone and address linked as optional client scopes. The client uses the value of the scope parameter when sending a request to the OpenID Connect authorization endpoint. scope=openid phone The scope parameter contains the string, with the scope values divided by spaces. The value openid is the meta-value used for all OpenID Connect requests. The token will contain mappers and role scope mappings from the default client scopes profile and email as well as phone , an optional client scope requested by the scope parameter. 12.6.4. Evaluating Client Scopes The Mappers tab contains the protocol mappers and the Scope tab contains the role scope mappings declared for this client. They do not contain the mappers and scope mappings inherited from client scopes. It is possible to see the effective protocol mappers (that is the protocol mappers defined on the client itself as well as inherited from the linked client scopes) and the effective role scope mappings used when generating a token for a client. Procedure Click the Client Scopes tab for the client. Open the sub-tab Evaluate . Select the optional client scopes that you want to apply. This will also show you the value of the scope parameter. This parameter needs to be sent from the application to the Red Hat build of Keycloak OpenID Connect authorization endpoint. Evaluating client scopes Note To send a custom value for a scope parameter from your application, see the parameters forwarding section , for servlet adapters or the javascript adapter section , for javascript adapters. All examples are generated for the particular user and issued for the particular client, with the specified value of the scope parameter. The examples include all of the claims and role mappings used. 12.6.5. Client scopes permissions When issuing tokens to a user, the client scope applies only if the user is permitted to use it. When a client scope does not have any role scope mappings defined, each user is permitted to use this client scope. However, when a client scope has role scope mappings defined, the user must be a member of at least one of the roles. There must be an intersection between the user roles and the roles of the client scope. Composite roles are factored into evaluating this intersection. If a user is not permitted to use the client scope, no protocol mappers or role scope mappings will be used when generating tokens. The client scope will not appear in the scope value in the token. 12.6.6. Realm default client scopes Use Realm Default Client Scopes to define sets of client scopes that are automatically linked to newly created clients. Procedure Click the Client Scopes tab for the client. Click Default Client Scopes . From here, select the client scopes that you want to add as Default Client Scopes to newly created clients and Optional Client Scopes . Default client scopes When a client is created, you can unlink the default client scopes, if needed. This is similar to removing Default Roles . 12.6.7. Scopes explained Client scope Client scopes are entities in Red Hat build of Keycloak that are configured at the realm level and can be linked to clients. 
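For instance, a minimal sketch of how a client references client scopes by name on the wire, using placeholder values for the host, realm, client ID, and redirect URI:

# Authorization request asking for the optional "phone" client scope in addition to the defaults.
https://keycloak.example.com/realms/myrealm/protocol/openid-connect/auth?client_id=app-client&response_type=code&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback&scope=openid%20phone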
Client scopes are referenced by their name when a request is sent to the Red Hat build of Keycloak authorization endpoint with a corresponding value of the scope parameter. See the client scopes linking section for more details. Role scope mapping This is available under the Scope tab of a client or client scope. Use Role scope mapping to limit the roles that can be used in the access tokens. See the Role Scope Mappings section for more details. 12.7. Client Policies To make it easy to secure client applications, it is beneficial to realize the following points in a unified way. Setting policies on what configuration a client can have Validation of client configurations Conformance to a required security standards and profiles such as Financial-grade API (FAPI) and OAuth 2.1 To realize these points in a unified way, Client Policies concept is introduced. 12.7.1. Use-cases Client Policies realize the following points mentioned as follows. Setting policies on what configuration a client can have Configuration settings on the client can be enforced by client policies during client creation/update, but also during OpenID Connect requests to Red Hat build of Keycloak server, which are related to particular client. Red Hat build of Keycloak supports similar thing also through the Client Registration Policies described in the Securing Applications and Services Guide . However, Client Registration Policies can only cover OIDC Dynamic Client Registration. Client Policies cover not only what Client Registration Policies can do, but other client registration and configuration ways. The current plans are for Client Registration to be replaced by Client Policies. Validation of client configurations Red Hat build of Keycloak supports validation whether the client follows settings like Proof Key for Code Exchange, Request Object Signing Algorithm, Holder-of-Key Token, and so on some endpoints like Authorization Endpoint, Token Endpoint, and so on. These can be specified by each setting item (on Admin Console, switch, pull-down menu and so on). To make the client application secure, the administrator needs to set many settings in the appropriate way, which makes it difficult for the administrator to secure the client application. Client Policies can do these validation of client configurations mentioned just above and they can also be used to autoconfigure some client configuration switches to meet the advanced security requirements. In the future, individual client configuration settings may be replaced by Client Policies directly performing required validations. Conformance to a required security standards and profiles such as FAPI and OAuth 2.1 The Global client profiles are client profiles pre-configured in Red Hat build of Keycloak by default. They are pre-configured to be compliant with standard security profiles like FAPI and OAuth 2.1 , which makes it easy for the administrator to secure their client application to be compliant with the particular security profile. At this moment, Red Hat build of Keycloak has global profiles for the support of FAPI and OAuth 2.1 specifications. The administrator will just need to configure the client policies to specify which clients should be compliant with the FAPI and OAuth 2.1. The administrator can configure client profiles and client policies, so that Red Hat build of Keycloak clients can be easily made compliant with various other security profiles like SPA, Native App, Open Banking and so on. 12.7.2. 
Protocol The client policy concept is independent of any specific protocol. However, Red Hat build of Keycloak currently supports it only just for the OpenID Connect (OIDC) protocol . 12.7.3. Architecture Client Policies consists of the four building blocks: Condition, Executor, Profile and Policy. 12.7.3.1. Condition A condition determines to which client a policy is adopted and when it is adopted. Some conditions are checked at the time of client create/update when some other conditions are checked during client requests (OIDC Authorization request, Token endpoint request and so on). The condition checks whether one specified criteria is satisfied. For example, some condition checks whether the access type of the client is confidential. The condition can not be used solely by itself. It can be used in a policy that is described afterwards. A condition can be configurable the same as other configurable providers. What can be configured depends on each condition's nature. The following conditions are provided: The way of creating/updating a client Dynamic Client Registration (Anonymous or Authenticated with Initial access token or Registration access token) Admin REST API (Admin Console and so on) So for example when creating a client, a condition can be configured to evaluate to true when this client is created by OIDC Dynamic Client Registration without initial access token (Anonymous Dynamic Client Registration). So this condition can be used for example to ensure that all clients registered through OIDC Dynamic Client Registration are FAPI or OAuth 2.1 compliant. Author of a client (Checked by presence to the particular role or group) On OpenID Connect dynamic client registration, an author of a client is the end user who was authenticated to get an access token for generating a new client, not Service Account of the existing client that actually accesses the registration endpoint with the access token. On registration by Admin REST API, an author of a client is the end user like the administrator of the Red Hat build of Keycloak. Client Access Type (confidential, public, bearer-only) For example when a client sends an authorization request, a policy is adopted if this client is confidential. Confidential client has enabled client authentication when public client has disabled client authentication. Bearer-only is a deprecated client type. Client Scope Evaluates to true if the client has a particular client scope (either as default or as an optional scope used in current request). This can be used for example to ensure that OIDC authorization requests with scope fapi-example-scope need to be FAPI compliant. Client Role Applies for clients with the client role of the specified name. Typically you can create a client role of specified name to requested clients and use it as a "marker role" to make sure that specified client policy will be applied for requested clients. Note A use-case often exists for requiring the application of a particular client policy for the specified clients such as my-client-1 and my-client-2 . The best way to achieve this result is to use a Client Role condition in your policy and then a create client role of specified name to requested clients. This client role can be used as a "marker role" used solely for marking that particular client policy for particular clients. Client Domain Name, Host or IP Address Applied for specific domain names of client. Or for the cases when the administrator registers/updates client from particular Host or IP Address. 
Any Client This condition always evaluates to true. It can be used for example to ensure that all clients in the particular realm are FAPI compliant. 12.7.3.2. Executor An executor specifies what action is executed on a client to which a policy is adopted. The executor executes one or several specified actions. For example, some executor checks whether the value of the parameter redirect_uri in the authorization request matches exactly with one of the pre-registered redirect URIs on Authorization Endpoint and rejects this request if not. The executor can not be used solely by itself. It can be used in a profile that is described afterwards. An executor can be configurable the same as other configurable providers. What can be configured depends on the nature of each executor. An executor acts on various events. An executor implementation can ignore certain types of events (For example, executor for checking OIDC request object acts just on the OIDC authorization request). Events are: Creating a client (including creation through dynamic client registration) Updating a client Sending an authorization request Sending a token request Sending a token refresh request Sending a token revocation request Sending a token introspection request Sending a userinfo request Sending a logout request with a refresh token (note that logout with refresh token is proprietary Red Hat build of Keycloak functionality unsupported by any specification. It is rather recommended to rely on the official OIDC logout ). On each event, an executor can work in multiple phases. For example, on creating/updating a client, the executor can modify the client configuration by autoconfigure specific client settings. After that, the executor validates this configuration in validation phase. One of several purposes for this executor is to realize the security requirements of client conformance profiles like FAPI and OAuth 2.1. To do so, the following executors are needed: Enforce secure Client Authentication method is used for the client Enforce Holder-of-key tokens are used Enforce Proof Key for Code Exchange (PKCE) is used Enforce secure signature algorithm for Signed JWT client authentication (private-key-jwt) is used Enforce HTTPS redirect URI and make sure that configured redirect URI does not contain wildcards Enforce OIDC request object satisfying high security level Enforce Response Type of OIDC Hybrid Flow including ID Token used as detached signature as described in the FAPI 1 specification, which means that ID Token returned from Authorization response won't contain user profile data Enforce more secure state and nonce parameters treatment for preventing CSRF Enforce more secure signature algorithm when client registration Enforce binding_message parameter is used for CIBA requests Enforce Client Secret Rotation Enforce Client Registration Access Token Enforce checking if a client is the one to which an intent was issued in a use case where an intent is issued before starting an authorization code flow to get an access token like UK OpenBanking Enforce prohibiting implicit and hybrid flow Enforce checking if a PAR request includes necessary parameters included by an authorization request Enforce DPoP-binding tokens is used (available when dpop feature is enabled) Enforce using lightweight access token Enforce that refresh token rotation is skipped and there is no refresh token returned from the refresh token response Enforce a valid redirect URI that the OAuth 2.1 specification requires 12.7.3.3. 
Profile A profile consists of several executors, which can realize a security profile like FAPI and OAuth 2.1. Profile can be configured by the Admin REST API (Admin Console) together with its executors. Three global profiles exist and they are configured in Red Hat build of Keycloak by default with pre-configured executors compliant with the FAPI 1 Baseline, FAPI 1 Advanced, FAPI CIBA, FAPI 2 and OAuth 2.1 specifications. More details exist in the FAPI and OAuth 2.1 section of the Securing Applications and Services Guide . 12.7.3.4. Policy A policy consists of several conditions and profiles. The policy can be adopted to clients satisfying all conditions of this policy. The policy refers several profiles and all executors of these profiles execute their task against the client that this policy is adopted to. 12.7.4. Configuration Policies, profiles, conditions, executors can be configured by Admin REST API, which means also the Admin Console. To do so, there is a tab Realm Realm Settings Client Policies , which means the administrator can have client policies per realm. The Global Client Profiles are automatically available in each realm. However there are no client policies configured by default. This means that the administrator is always required to create any client policy if they want for example the clients of his realm to be FAPI compliant. Global profiles cannot be updated, but the administrator can easily use them as a template and create their own profile if they want to do some slight changes in the global profile configurations. There is JSON Editor available in the Admin Console, which simplifies the creation of new profile based on some global profile. 12.7.5. Backward Compatibility Client Policies can replace Client Registration Policies described in the Securing Applications and Services Guide . However, Client Registration Policies also still co-exist. This means that for example during a Dynamic Client Registration request to create/update a client, both client policies and client registration policies are applied. The current plans are for the Client Registration Policies feature to be removed and the existing client registration policies will be migrated into new client policies automatically. 12.7.6. Client Secret Rotation Example See an example configuration for client secret rotation . | [
"https://myhost.com/myapp/k_jwks",
"POST /realms/demo/protocol/openid-connect/token Authorization: Basic cHJvZHVjdC1zYS1jbGllbnQ6cGFzc3dvcmQ= Content-Type: application/x-www-form-urlencoded grant_type=client_credentials",
"HTTP/1.1 200 OK Content-Type: application/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache { \"access_token\":\"2YotnFZFEjr1zCsicMWpAA\", \"token_type\":\"bearer\", \"expires_in\":60 }",
"\"audience\": [ \"<trusted service>\" ]",
"root/realms/{realm}/protocol/saml/descriptor",
"http://host:port/realms/master/clients/account/redirect",
"scope=openid phone"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/assembly-managing-clients_server_administration_guide |
Governance | Governance Red Hat Advanced Cluster Management for Kubernetes 2.11 Governance | [
"delete secret -n <namespace> <secret> 1",
"delete pod -n <namespace> -l <pod-label> 1",
"openssl x509 -noout -text -in ./observability.crt",
"-n open-cluster-management-observability create secret tls alertmanager-byo-ca --cert ./ca.crt --key ./ca.key -n open-cluster-management-observability create secret tls alertmanager-byo-cert --cert ./ingress.crt --key ./ingress.key",
"edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert",
"delete pod -n openshift-gatekeeper-system -l control-plane=controller-manager",
"get secret <your-secret-name> -n open-cluster-management -o jsonpath='{.data.tls\\.crt}' | base64 -d | openssl x509 -text -noout",
"Validity Not Before: Jul 13 15:17:50 2023 GMT 1 Not After : Jul 12 15:17:50 2024 GMT 2",
"for ns in multicluster-engine open-cluster-management ; do echo \"USDns:\" ; oc get secret -n USDns -o custom-columns=Name:.metadata.name,Expiration:.metadata.annotations.service\\\\.beta\\\\.openshift\\\\.io/expiry | grep -v '<none>' ; echo \"\"; done",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-policy-role spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - {key: environment, operator: In, values: [\"dev\"]}",
"apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-role placementRef: name: placement-policy-role 1 kind: Placement apiGroup: cluster.open-cluster-management.io subjects: 2 - name: policy-role kind: Policy apiGroup: policy.open-cluster-management.io",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: disabled: remediationAction: dependencies: - apiVersion: policy.open-cluster-management.io/v1 compliance: kind: Policy name: namespace: policy-templates: - objectDefinition: apiVersion: kind: metadata: name: spec: --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: bindingOverrides: remediationAction: subFilter: name: placementRef: name: kind: Placement apiGroup: cluster.open-cluster-management.io subjects: - name: kind: apiGroup: --- apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: spec:",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-role annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: AC Access Control policy.open-cluster-management.io/controls: AC-3 Access Enforcement policy.open-cluster-management.io/description: spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-role-example spec: remediationAction: inform # the policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction. severity: high namespaceSelector: include: [\"default\"] object-templates: - complianceType: mustonlyhave # role definition should exact match objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: sample-role rules: - apiGroups: [\"extensions\", \"apps\"] resources: [\"deployments\"] verbs: [\"get\", \"list\", \"watch\", \"delete\",\"patch\"] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-role placementRef: name: placement-policy-role kind: Placement apiGroup: cluster.open-cluster-management.io subjects: - name: policy-role kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-policy-role spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - {key: environment, operator: In, values: [\"dev\"]}",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: [\"default\"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform severity: low evaluationInterval: compliant: noncompliant: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Pod metadata: name: pod spec: containers: - image: pod-image name: pod-name ports: - containerPort: 80 - complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: myconfig namespace: default data: testData: hello spec:",
"apiVersion: policy.open-cluster-management.io/v1 kind: CertificatePolicy metadata: name: certificate-policy-example spec: namespaceSelector: include: [\"default\"] exclude: [] matchExpressions: [] matchLabels: {} labelSelector: myLabelKey: myLabelValue remediationAction: severity: minimumDuration: minimumCADuration: maximumDuration: maximumCADuration: allowedSANPattern: disallowedSANPattern:",
"apiVersion: policy.open-cluster-management.io/v1beta1 kind: PolicySet metadata: name: demo-policyset spec: policies: - policy-demo --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: demo-policyset-pb placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: demo-policyset-pr subjects: - apiGroup: policy.open-cluster-management.io kind: PolicySet name: demo-policyset --- apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: demo-policyset-pr spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: name operator: In values: - local-cluster",
"apiVersion: policy.open-cluster-management.io/v1beta1 kind: PolicySet metadata: name: pci namespace: default spec: description: Policies for PCI compliance policies: - policy-pod - policy-namespace status: compliant: NonCompliant placement: - placementBinding: binding1 placement: placement1 policySet: policyset-ps",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: governance-policy-framework namespace: cluster1 annotations: policy-evaluation-concurrency: \"2\" spec: installNamespace: open-cluster-management-agent-addon",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: governance-policy-framework namespace: cluster1 annotations: client-qps: \"30\" client-burst: \"45\" spec: installNamespace: open-cluster-management-agent-addon",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: policy-evaluation-concurrency: \"5\" spec: installNamespace: open-cluster-management-agent-addon",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: client-qps: \"20\" client-burst: \"100\" spec: installNamespace: open-cluster-management-agent-addon",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: log-level: \"2\" spec: installNamespace: open-cluster-management-agent-addon",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: my-config-policy spec: object-templates: - complianceType: musthave recordDiff: Log objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: fieldToUpdate: \"2\"",
"Logging the diff: --- default/my-configmap : existing +++ default/my-configmap : updated @@ -2,3 +2,3 @@ data: - fieldToUpdate: \"1\" + fieldToUpdate: \"2\" kind: ConfigMap",
"get -n local-cluster managedclusteraddon cert-policy-controller | grep -B4 'type: ManifestApplied'",
"- lastTransitionTime: \"2023-01-26T15:42:22Z\" message: manifests of addon are applied successfully reason: AddonManifestApplied status: \"True\" type: ManifestApplied",
"-n <PostgreSQL namespace> port-forward <PostgreSQL pod name> 5432:5432",
"psql 'postgres://postgres:@127.0.0.1:5432/postgres'",
"CREATE USER \"rhacm-policy-compliance-history\" WITH PASSWORD '<replace with password>'; CREATE DATABASE \"rhacm-policy-compliance-history\" WITH OWNER=\"rhacm-policy-compliance-history\";",
"-n open-cluster-management create secret generic governance-policy-database \\ 1 --from-literal=\"user=rhacm-policy-compliance-history\" --from-literal=\"password=rhacm-policy-compliance-history\" --from-literal=\"host=<replace with host name of the Postgres server>\" \\ 2 --from-literal=\"dbname=ocm-compliance-history\" --from-literal=\"sslmode=verify-full\" --from-file=\"ca=<replace>\" 3",
"-n open-cluster-management label secret governance-policy-database cluster.open-cluster-management.io/backup=\"\"",
"python -c 'import urllib.parse; import sys; print(urllib.parse.quote(sys.argv[1]))' 'USDuper<Secr&t%>'",
"curl -H \"Authorization: Bearer USD(oc whoami --show-token)\" \"https://USD(oc -n open-cluster-management get route governance-history-api -o jsonpath='{.spec.host}')/api/v1/compliance-events\"",
"{\"data\":[],\"metadata\":{\"page\":1,\"pages\":0,\"per_page\":20,\"total\":0}}",
"{\"message\":\"The database is unavailable\"}",
"{\"message\":\"Internal Error\"}",
"-n open-cluster-management get events --field-selector reason=OCMComplianceEventsDBError",
"-n open-cluster-management logs -l name=governance-policy-propagator -f",
"2024-03-05T12:17:14.500-0500 info compliance-events-api complianceeventsapi/complianceeventsapi_controller.go:261 The database connection failed: pq: password authentication failed for user \"rhacm-policy-compliance-history\"",
"-n open-cluster-management edit secret governance-policy-database",
"echo \"https://USD(oc -n open-cluster-management get route governance-history-api -o=jsonpath='{.spec.host}')\"",
"https://governance-history-api-open-cluster-management.apps.openshift.redhat.com",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: governance-policy-framework namespace: open-cluster-management spec: customizedVariables: - name: complianceHistoryAPIURL value: <replace with URL from previous command>",
"edit ClusterManagementAddOn governance-policy-framework",
"- group: addon.open-cluster-management.io resource: addondeploymentconfigs defaultConfig: name: governance-policy-framework namespace: open-cluster-management",
"-n <manage-cluster-namespace> edit ManagedClusterAddOn governance-policy-framework",
"- group: addon.open-cluster-management.io resource: addondeploymentconfigs name: governance-policy-framework namespace: open-cluster-management",
"-n open-cluster-management-agent-addon get deployment governance-policy-framework -o jsonpath='{.spec.template.spec.containers[1].args}'",
"[\"--enable-lease=true\",\"--hub-cluster-configfile=/var/run/klusterlet/kubeconfig\",\"--leader-elect=false\",\"--log-encoder=console\",\"--log-level=0\",\"--v=-1\",\"--evaluation-concurrency=2\",\"--client-max-qps=30\",\"--client-burst=45\",\"--disable-spec-sync=true\",\"--cluster-namespace=local-cluster\",\"--compliance-api-url=https://governance-history-api-open-cluster-management.apps.openshift.redhat.com\"]",
"-n open-cluster-management-agent-addon logs deployment/governance-policy-framework -f",
"024-03-05T19:28:38.063Z info policy-status-sync statussync/policy_status_sync.go:750 Failed to record the compliance event with the compliance API. Will requeue. {\"statusCode\": 503, \"message\": \"\"}",
"-n open-cluster-management edit AddOnDeploymentConfig governance-policy-framework",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: object-templates: - complianceType: objectDefinition: kind: Namespace apiVersion: v1 metadata: name:",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: namespaceSelector: exclude: include: matchLabels: matchExpressions: object-templates: - complianceType: objectDefinition: apiVersion: v1 kind: Pod metadata: name: spec: containers: - image: name:",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: namespaceSelector: exclude: include: matchLabels: matchExpressions: object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: LimitRange metadata: name: spec: limits: - default: memory: defaultRequest: memory: type:",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: namespaceSelector: exclude: include: matchLabels: matchExpressions: object-templates: - complianceType: objectDefinition: apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: spec: privileged: allowPrivilegeEscalation: allowedCapabilities: volumes: hostNetwork: hostPorts: hostIPC: hostPID: runAsUser: seLinux: supplementalGroups: fsGroup:",
"violation - couldn't find mapping resource with kind PodSecurityPolicy, please check if you have CRD deployed",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: namespaceSelector: exclude: include: matchLabels: matchExpressions: object-templates: - complianceType: objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: rules: - apiGroups: resources: verbs:",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: namespaceSelector: exclude: include: matchLabels: matchExpressions: object-templates: - complianceType: objectDefinition: kind: RoleBinding # role binding must exist apiVersion: rbac.authorization.k8s.io/v1 metadata: name: subjects: - kind: name: apiGroup: roleRef: kind: name: apiGroup:",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: namespaceSelector: exclude: include: matchLabels: matchExpressions: object-templates: - complianceType: objectDefinition: apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: allowHostDirVolumePlugin: allowHostIPC: allowHostNetwork: allowHostPID: allowHostPorts: allowPrivilegeEscalation: allowPrivilegedContainer: fsGroup: readOnlyRootFilesystem: requiredDropCapabilities: runAsUser: seLinuxContext: supplementalGroups: users: volumes:",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: namespace: annotations: policy.open-cluster-management.io/standards: policy.open-cluster-management.io/categories: policy.open-cluster-management.io/controls: policy.open-cluster-management.io/description: spec: remediationAction: disabled: policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: spec: remediationAction: severity: object-templates: - complianceType: objectDefinition: apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: spec: encryption:",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: comp-operator-ns spec: remediationAction: inform # will be overridden by remediationAction in parent policy severity: high object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-compliance",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: comp-operator-operator-group spec: remediationAction: inform # will be overridden by remediationAction in parent policy severity: high object-templates: - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: comp-operator-subscription spec: remediationAction: inform # will be overridden by remediationAction in parent policy severity: high object-templates: - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: channel: \"4.x\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: compliance-suite-e8 spec: remediationAction: inform severity: high object-templates: - complianceType: musthave # this template checks if scan has completed by checking the status field objectDefinition: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: e8 namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-e8 - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: rhcos4-e8 settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: compliance-suite-e8 spec: remediationAction: inform severity: high object-templates: - complianceType: musthave # this template checks if scan has completed by checking the status field objectDefinition: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: e8 namespace: openshift-compliance status: phase: DONE",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: compliance-suite-e8-results spec: remediationAction: inform severity: high object-templates: - complianceType: mustnothave # this template reports the results for scan suite: e8 by looking at ComplianceCheckResult CRs objectDefinition: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: namespace: openshift-compliance labels: compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: e8",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: compliance-cis-scan spec: remediationAction: inform severity: high object-templates: - complianceType: musthave # this template creates ScanSettingBinding:cis objectDefinition: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: compliance-suite-cis spec: remediationAction: inform severity: high object-templates: - complianceType: musthave # this template checks if scan has completed by checking the status field objectDefinition: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: cis namespace: openshift-compliance status: phase: DONE",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: compliance-suite-cis-results spec: remediationAction: inform severity: high object-templates: - complianceType: mustnothave # this template reports the results for scan suite: cis by looking at ComplianceCheckResult CRs objectDefinition: apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: namespace: openshift-compliance labels: compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: cis",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-imagemanifestvuln-example-sub spec: remediationAction: enforce # will be overridden by remediationAction in parent policy severity: high object-templates: - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: # channel: quay-v3.3 # specify a specific channel if desired installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-imagemanifestvuln-status spec: remediationAction: inform # will be overridden by remediationAction in parent policy severity: high object-templates: - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: namespace: openshift-operators spec: displayName: Red Hat Quay Container Security Operator status: phase: Succeeded # check the CSV status to determine if operator is running or not",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-imagemanifestvuln-example-imv spec: remediationAction: inform # will be overridden by remediationAction in parent policy severity: high namespaceSelector: exclude: [\"kube-*\"] include: [\"*\"] object-templates: - complianceType: mustnothave # mustnothave any ImageManifestVuln object objectDefinition: apiVersion: secscan.quay.redhat.com/v1alpha1 kind: ImageManifestVuln # checking for a Kind",
"apiVersion: policy.open-cluster-management.io/v1beta1 kind: PolicyAutomation metadata: name: policyname-policy-automation spec: automationDef: extra_vars: your_var: your_value name: Policy Compliance Template secret: ansible-tower type: AnsibleJob mode: disabled policyRef: policyname",
"metadata: annotations: policy.open-cluster-management.io/rerun: \"true\"",
"ca.crt: '{{ fromSecret \"openshift-config\" \"ca-config-map-secret\" \"ca.crt\" | base64dec | toRawJson | toLiteral }}'",
"func fromSecret (ns string, secretName string, datakey string) (dataValue string, err error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 data: 1 USER_NAME: YWRtaW4= PASSWORD: '{{ fromSecret \"default\" \"localsecret\" \"PASSWORD\" }}' 2 kind: Secret 3 metadata: name: demosecret namespace: test type: Opaque remediationAction: enforce severity: low",
"ca.crt: '{{ fromSecret \"openshift-config\" \"ca-config-map-secret\" \"ca.crt\" | base64dec | toRawJson | toLiteral }}'",
"func fromConfigMap (ns string, configmapName string, datakey string) (dataValue string, err Error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromcm-lookup namespace: test-templates spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap 1 apiVersion: v1 metadata: name: demo-app-config namespace: test data: 2 app-name: sampleApp app-description: \"this is a sample app\" log-file: '{{ fromConfigMap \"default\" \"logs-config\" \"log-file\" }}' 3 log-level: '{{ fromConfigMap \"default\" \"logs-config\" \"log-level\" }}' 4 remediationAction: enforce severity: low",
"func fromClusterClaim (clusterclaimName string) (dataValue string, err Error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-clusterclaims 1 namespace: default spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: sample-app-config namespace: default data: 2 platform: '{{ fromClusterClaim \"platform.open-cluster-management.io\" }}' 3 product: '{{ fromClusterClaim \"product.open-cluster-management.io\" }}' version: '{{ fromClusterClaim \"version.openshift.io\" }}' remediationAction: enforce severity: low",
"func lookup (apiversion string, kind string, namespace string, name string, labelselector ...string) (value string, err Error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-lookup namespace: test-templates spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: demo-app-config namespace: test data: 1 app-name: sampleApp app-description: \"this is a sample app\" metrics-url: | 2 http://{{ (lookup \"v1\" \"Service\" \"default\" \"metrics\").spec.clusterIP }}:8080 remediationAction: enforce severity: low",
"func base64enc (data string) (enc-data string)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: USER_NAME: '{{ fromConfigMap \"default\" \"myconfigmap\" \"admin-user\" | base64enc }}'",
"func base64dec (enc-data string) (data string)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: app-name: | \"{{ ( lookup \"v1\" \"Secret\" \"testns\" \"mytestsecret\") .data.appname ) | base64dec }}\"",
"func indent (spaces int, data string) (padded-data string)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: Ca-cert: | {{ ( index ( lookup \"v1\" \"Secret\" \"default\" \"mycert-tls\" ).data \"ca.pem\" ) | base64dec | indent 4 }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: Ca-cert: | {{ ( index ( lookup \"v1\" \"Secret\" \"default\" \"mycert-tls\" ).data \"ca.pem\" ) | base64dec | autoindent }}",
"func toInt (input interface{}) (output int)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: spec: vlanid: | {{ (fromConfigMap \"site-config\" \"site1\" \"vlan\") | toInt }}",
"func toBool (input string) (output bool)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: spec: enabled: | {{ (fromConfigMap \"site-config\" \"site1\" \"enabled\") | toBool }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: spec: enabled: | {{hub (lookup \"v1\" \"Secret\" \"default\" \"my-hub-secret\").data.message | protect hub}}",
"key: '{{ \"[\\\"10.10.10.10\\\", \\\"1.1.1.1\\\"]\" | toLiteral }}'",
"key: [\"10.10.10.10\", \"1.1.1.1\"]",
"complianceType: musthave objectDefinition: apiVersion: v1 kind: Secret metadata: name: my-secret-copy data: '{{ copySecretData \"default\" \"my-secret\" }}' 1",
"complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-secret-copy data: '{{ copyConfigMapData \"default\" \"my-configmap\" }}'",
"object-templates-raw: | {{- range (lookup \"v1\" \"ConfigMap\" \"default\" \"\").items }} {{- if eq .data.name \"Sea Otter\" }} - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: {{ .metadata.name }} namespace: {{ .metadata.namespace }} labels: species-category: mammal {{- end }} {{- end }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: create-infra-machineset spec: remediationAction: enforce severity: low object-templates-raw: | {{- /* Specify the parameters needed to create the MachineSet */ -}} {{- USDmachineset_role := \"infra\" }} {{- USDregion := \"ap-southeast-1\" }} {{- USDzones := list \"ap-southeast-1a\" \"ap-southeast-1b\" \"ap-southeast-1c\" }} {{- USDinfrastructure_id := (lookup \"config.openshift.io/v1\" \"Infrastructure\" \"\" \"cluster\").status.infrastructureName }} {{- USDworker_ms := (index (lookup \"machine.openshift.io/v1beta1\" \"MachineSet\" \"openshift-machine-api\" \"\").items 0) }} {{- /* Generate the MachineSet for each zone as specified */ -}} {{- range USDzone := USDzones }} - complianceType: musthave objectDefinition: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} name: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} machine.openshift.io/cluster-api-machineset: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} template: metadata: labels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} machine.openshift.io/cluster-api-machine-role: {{ USDmachineset_role }} machine.openshift.io/cluster-api-machine-type: {{ USDmachineset_role }} machine.openshift.io/cluster-api-machineset: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} spec: metadata: labels: node-role.kubernetes.io/{{ USDmachineset_role }}: \"\" taints: - key: node-role.kubernetes.io/{{ USDmachineset_role }} effect: NoSchedule providerSpec: value: ami: id: {{ USDworker_ms.spec.template.spec.providerSpec.value.ami.id }} apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: encrypted: true iops: 2000 kmsKey: arn: '' volumeSize: 500 volumeType: io1 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 instanceType: {{ USDworker_ms.spec.template.spec.providerSpec.value.instanceType }} iamInstanceProfile: id: {{ USDinfrastructure_id }}-worker-profile kind: AWSMachineProviderConfig placement: availabilityZone: {{ USDzone }} region: {{ USDregion }} securityGroups: - filters: - name: tag:Name values: - {{ USDinfrastructure_id }}-worker-sg subnet: filters: - name: tag:Name values: - {{ USDinfrastructure_id }}-private-{{ USDzone }} tags: - name: kubernetes.io/cluster/{{ USDinfrastructure_id }} value: owned userDataSecret: name: worker-user-data {{- end }}",
"create -f policy.yaml -n <policy-namespace>",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy1 spec: remediationAction: \"enforce\" # or inform disabled: false # or true namespaceSelector: include: - \"default\" - \"my-namespace\" policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: operator # namespace: # will be supplied by the controller via the namespaceSelector spec: remediationAction: \"inform\" object-templates: - complianceType: \"musthave\" # at this level, it means the role must exist and must have the following rules apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: example objectDefinition: rules: - complianceType: \"musthave\" # at this level, it means if the role exists the rule is a musthave apiGroups: [\"extensions\", \"apps\"] resources: [\"deployments\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"delete\",\"patch\"]",
"apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding1 placementRef: name: placement1 apiGroup: cluster.open-cluster-management.io kind: Placement subjects: - name: policy1 apiGroup: policy.open-cluster-management.io kind: Policy",
"get policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace> -o yaml",
"describe policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace>",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-pod annotations: policy.open-cluster-management.io/categories: 'SystemAndCommunicationsProtections,SystemAndInformationIntegrity' policy.open-cluster-management.io/controls: 'control example' policy.open-cluster-management.io/standards: 'NIST,HIPAA' policy.open-cluster-management.io/description: spec: complianceType: musthave namespaces: exclude: [\"kube*\"] include: [\"default\"] pruneObjectBehavior: None object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Pod metadata: name: pod1 spec: containers: - name: pod-name image: 'pod-image' ports: - containerPort: 80 remediationAction: enforce disabled: false",
"apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-pod placementRef: name: placement-pod kind: Placement apiGroup: cluster.open-cluster-management.io subjects: - name: policy-pod kind: Policy apiGroup: policy.open-cluster-management.io",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-pod spec: predicates: - requiredClusterSelector: labelSelector: matchLabels: cloud: \"IBM\"",
"apply -f <policyset-filename>",
"edit policysets <your-policyset-name>",
"apply -f <your-added-policy.yaml>",
"delete policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace>",
"create -f configpolicy-1.yaml",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-1 namespace: my-policies policy-templates: - apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: mustonlyhave-configuration spec: namespaceSelector: include: [\"default\"] exclude: [\"kube-system\"] remediationAction: inform disabled: false complianceType: mustonlyhave object-templates:",
"apply -f <policy-file-name> --namespace=<namespace>",
"get policies.policy.open-cluster-management.io --namespace=<namespace>",
"get policies.policy.open-cluster-management.io <policy-name> -n <namespace> -o yaml",
"describe policies.policy.open-cluster-management.io <name> -n <namespace>",
"delete policies.policy.open-cluster-management.io <policy-name> -n <namespace>",
"get policies.policy.open-cluster-management.io <policy-name> -n <namespace>",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-configure-subscription-admin-hub annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-configure-subscription-admin-hub spec: remediationAction: inform severity: low object-templates: - complianceType: musthave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: open-cluster-management:subscription-admin rules: - apiGroups: - app.k8s.io resources: - applications verbs: - '*' - apiGroups: - apps.open-cluster-management.io resources: - '*' verbs: - '*' - apiGroups: - \"\" resources: - configmaps - secrets - namespaces verbs: - '*' - complianceType: musthave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: open-cluster-management:subscription-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: open-cluster-management:subscription-admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: kube:admin - apiGroup: rbac.authorization.k8s.io kind: User name: system:admin --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-configure-subscription-admin-hub placementRef: name: placement-policy-configure-subscription-admin-hub kind: Placement apiGroup: cluster.open-cluster-management.io subjects: - name: policy-configure-subscription-admin-hub kind: Policy apiGroup: policy.open-cluster-management.io --- apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-policy-configure-subscription-admin-hub spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - {key: name, operator: In, values: [\"local-cluster\"]}",
"apply -f policy-configure-subscription-admin-hub.yaml",
"apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSetBinding metadata: name: default namespace: policies spec: clusterSet: default",
"apply -f managed-cluster.yaml",
"kustomize build --enable-alpha-plugins | oc apply -f -",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-quay namespace: open-cluster-management-global-set spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1beta1 kind: OperatorPolicy metadata: name: install-quay spec: remediationAction: enforce severity: critical complianceType: musthave upgradeApproval: None subscription: channel: stable-3.11 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"-n <managed cluster namespace> get operatorpolicy install-quay",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: moderate-compliance-scan namespace: default spec: dependencies: 1 - apiVersion: policy.open-cluster-management.io/v1 compliance: Compliant kind: Policy name: upstream-compliance-operator namespace: default disabled: false policy-templates: - extraDependencies: 2 - apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy name: scan-setting-prerequisite compliance: Compliant ignorePending: false 3 objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: moderate-compliance-scan spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: moderate namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default remediationAction: enforce severity: low",
"apiVersion: operator.gatekeeper.sh/v1alpha1 kind: Gatekeeper metadata: name: gatekeeper spec: audit: replicas: 1 auditEventsInvolvedNamespace: Enabled 1 logLevel: DEBUG auditInterval: 10s constraintViolationLimit: 55 auditFromCache: Enabled auditChunkSize: 66 emitAuditEvents: Enabled containerArguments: 2 - name: \"\" value: \"\" resources: limits: cpu: 500m memory: 150Mi requests: cpu: 500m memory: 130Mi validatingWebhook: Enabled mutatingWebhook: Enabled webhook: replicas: 3 emitAdmissionEvents: Enabled admissionEventsInvolvedNamespace: Enabled 3 disabledBuiltins: - http.send operations: 4 - \"CREATE\" - \"UPDATE\" - \"CONNECT\" failurePolicy: Fail containerArguments: 5 - name: \"\" value: \"\" resources: limits: cpu: 480m memory: 140Mi requests: cpu: 400m memory: 120Mi nodeSelector: region: \"EMEA\" affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: auditKey: \"auditValue\" topologyKey: topology.kubernetes.io/zone tolerations: - key: \"Example\" operator: \"Exists\" effect: \"NoSchedule\" podAnnotations: some-annotation: \"this is a test\" other-annotation: \"another test\" config: 6 matches: - excludedNamespaces: [\"test-*\", \"my-namespace\"] processes: [\"*\"] disableDefaultMatches: false 7",
"apiVersion: operator.gatekeeper.sh/v1alpha1 kind: Gatekeeper metadata: name: gatekeeper spec: audit: replicas: 2 logLevel: DEBUG auditFromCache: Automatic",
"get configs.config.gatekeeper.sh config -n openshift-gatekeeper-system",
"apiVersion: config.gatekeeper.sh/v1alpha1 kind: Config metadata: name: config namespace: \"openshift-gatekeeper-system\" spec: sync: syncOnly: - group: \"\" version: \"v1\" kind: \"Namespace\" - group: \"\" version: \"v1\" kind: \"Pod\"",
"explain gatekeeper.spec.audit.auditFromCache",
"delete policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>",
"get policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: require-gatekeeper-labels-on-ns spec: remediationAction: inform 1 disabled: false policy-templates: - objectDefinition: apiVersion: templates.gatekeeper.sh/v1beta1 kind: ConstraintTemplate metadata: name: k8srequiredlabels annotations: policy.open-cluster-management.io/severity: low 2 spec: crd: spec: names: kind: K8sRequiredLabels validation: openAPIV3Schema: properties: labels: type: array items: string targets: - target: admission.k8s.gatekeeper.sh rego: | package k8srequiredlabels violation[{\"msg\": msg, \"details\": {\"missing_labels\": missing}}] { provided := {label | input.review.object.metadata.labels[label]} required := {label | label := input.parameters.labels[_]} missing := required - provided count(missing) > 0 msg := sprintf(\"you must provide labels: %v\", [missing]) } - objectDefinition: apiVersion: constraints.gatekeeper.sh/v1beta1 kind: K8sRequiredLabels metadata: name: ns-must-have-gk annotations: policy.open-cluster-management.io/severity: low 3 spec: enforcementAction: dryrun match: kinds: - apiGroups: [\"\"] kinds: [\"Namespace\"] parameters: labels: [\"gatekeeper\"]",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-gatekeeper-admission spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Event metadata: namespace: openshift-gatekeeper-system 2 annotations: constraint_action: deny constraint_kind: K8sRequiredLabels constraint_name: ns-must-have-gk event_type: violation",
"generators: - policy-generator-config.yaml 1",
"apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: config-data-policies policyDefaults: namespace: policies policySets: [] policies: - name: config-data manifests: - path: configmap.yaml 1",
"apiVersion: v1 kind: ConfigMap metadata: name: my-config namespace: default data: key1: value1 key2: value2",
"object-templates-raw: | {{- range (lookup \"v1\" \"ConfigMap\" \"my-namespace\" \"\").items }} - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: {{ .metadata.name }} namespace: {{ .metadata.namespace }} labels: i-am-from: template {{- end }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: config-data-policies policyDefaults: namespace: policies policySets: [] policies: - name: config-data manifests: - path: manifest.yaml 1",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-config-data namespace: policies spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-config-data namespace: policies placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: placement-config-data subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: config-data --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: config-data namespace: policies spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: config-data spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 data: key1: value1 key2: value2 kind: ConfigMap metadata: name: my-config namespace: default remediationAction: inform severity: low",
"apiVersion: v1 kind: Namespace metadata: name: openshift-compliance --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: channel: release-0.1 name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: install-compliance-operator policyDefaults: namespace: policies placement: labelSelector: matchExpressions: - key: vendor operator: In values: - \"OpenShift\" policies: - name: install-compliance-operator manifests: - path: compliance-operator.yaml",
"generators: - policy-generator-config.yaml",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-install-compliance-operator namespace: policies spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: vendor operator: In values: - OpenShift --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-compliance-operator namespace: policies placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: placement-install-compliance-operator subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-compliance-operator --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/description: name: install-compliance-operator namespace: policies spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-compliance-operator spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-compliance - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: channel: release-0.1 name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: musthave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - compliance-operator remediationAction: enforce severity: low"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/governance/index |
Chapter 4. Bug fixes | Chapter 4. Bug fixes This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in versions. 4.1. Ceph manager plug ins Python tasks no longer wait for the GIL Previously, the Ceph manager daemon held the Python global interpreter lock (GIL) during some RPCs with the Ceph MDS, due to which, other Python tasks are starved waiting for the GIL. With this fix, the GIL is released during all libcephfs / librbd calls and other Python tasks may acquire the GIL normally. Bugzilla:2219093 4.2. The Cephadm utility cephadm can differentiate between a duplicated hostname and no longer adds the same host to a cluster Previously, cephadm would consider a host with a shortname and a host with its FQDN as two separate hosts, causing the same host to be added twice to a cluster. With this fix, cephadm now recognizes the difference between a host shortname and the FQDN, and does not add the host again to the system. Bugzilla:2049445 cephadm no longer reports that a non-existing label is removed from the host Previously, in cephadm , there was no check to verify if a label existed before removing it from a host. Due to this, the ceph orch host label rm command would report that a label was removed from the host, even when the label was non-existent. For example, a misspelled label. With this fix, the command now provides clear feedback whether the label specified was successfully removed or not to the user. Bugzilla:2113901 The keepalive daemons communicate and enter the main/primary state Previously, keepalive configurations were populated with IPs that matched the host IP reported from the ceph orch host ls command. As a result, if the VIP was configured on a different subnet than the host IP listed, the keepalive daemons were not able to communicate, resulting in the keepalive daemons to enter a primary state. With this fix, the IPs of keepalive peers in the keepalive configuration are now chosen to match the subnet of the VIP. The keepalive daemons can now communicate even if the VIP is in a different subnet than the host IP from ceph orch host ls command. In this case, only one keepalive daemon enters primary state. Bugzilla:2222010 Stopped crash daemons now have the correct state Previously, when a crash daemon stopped, the return code gave an error state, rather than the expected stopped state, causing systemd to think that the service had failed. With this fix, the return code gives the expected stopped state. Bugzilla:2126465 HA proxy now binds to the frontend port on the VIP Previously, in Cephadm, multiple ingress services could not be deployed on the same host with the same frontend port as the port binding occurred across all host networks. With this fix, multiple ingress services can now be present on the same host with the same frontend port as long as the services use different VIPs and different monitoring ports are set for the ingress service in the specification. Bugzilla:2231452 4.3. Ceph File System User-space Ceph File System (CephFS) work as expected post upgrade Previously, the user-space CephFS client would sometimes crash during a cluster upgrade. This would occur due to stale feature bits on the MDS side that were held on the user-space side. With this fix, ensure that the user-space CephFS client has updated MDS feature bits that allows the clients to work as expected after a cluster upgrade. 
Bugzilla:2247174 Blocklist and evict client for large session metadata Previously, large client metadata buildup in the MDS would sometimes cause the MDS to switch to read-only mode. With this fix, the client that is causing the buildup is blocklisted and evicted, allowing the MDS to work as expected. Bugzilla:2238663 Deadlocks no longer occur between the unlink and reintegration requests Previously, when fixing async dirop bug, a regression was introduced by commits, causing deadlocks between the unlink and reintegration request. With this fix, the old commits are reverted and there is no longer a deadlock between unlink and reintegration requests. Bugzilla:2228635 Client always sends a caps revocation acknowledgement to the MDS daemon Previously, whenever an MDS daemon sent a caps revocation request to a client and during this time, if the client released the caps and removed the inode, then the client would drop the request directly, but the MDS daemon would need to wait for a caps revoking acknowledgement from the client. Due to this, even when there was no need for caps revocation, the MDS daemon would continue waiting for an acknowledgement from the client, causing a warning in MDS Daemon health status. With this fix, the client always sends a caps revocation acknowledgement to the MDS Daemon, even when there is no inode existing and the MDS Daemon no longer stays stuck. Bugzilla:2228000 MDS locks are obtained in the correct order Previously, MDS would acquire metadata tree locks in the wrong order, resulting in a create and getattr RPC request to deadlock. With this fix, locks are obtained in the correct order in MDS and the requests no longer deadlock. Bugzilla:2235338 Sending split_realms information is skipped from CephFS MDS Previously, the split_realms information would be incorrectly sent from the CephFS MDS which could not be correctly decoded by kclient . Due to this, the clients would not care about the split_realms and treat it as a corrupted snaptrace. With this fix, split_realms are not sent to kclient and no crashes take place. Bugzilla:2228003 Snapshot data is no longer lost after setting writing flags Previously, in clients, if the writing flag was set to '1' when the Fb caps were used, it would be skipped in case of any dirty caps and reuse the existing capsnap, which is incorrect. Due to this, two consecutive snapshots would be overwritten and lose data. With this fix, the writing flags are correctly set and no snapshot data is lost. Bugzilla:2224241 Thread renaming no longer fails Previously, in a few rare cases, during renaming, if another thread tried to lookup the dst dentry, there were chances for it to get inconsistent result, wherein both the src dentry and dst dentry would link to the same inode simultaneously. Due to this,the rename request would fail as two different dentries were being linked to the same inode. With this fix, the thread waits for the renaming action to finish and everything works as expected. Bugzilla:2227987 Revocation requests no longer get stuck Previously, before the revoke request was sent out, which would increase the 'seq', if the clients released the corresponding caps and sent out the cap update request with the old seq , the MDS would miss checking the seq (s) and cap calculation. Due to this, the revocation requests would be stuck infinitely and would throw warnings about the revocation requests not responding from clients. With this fix, an acknowledgement is always sent for revocation requests and they no longer get stuck. 
Bugzilla:2227992 Errors are handled gracefully in MDLog::_recovery_thread Previously, a write would fail if the MDS was already blocklisted due to the fs fail issued by the QA tests. For instance, the QA test test_rebuild_moved_file (tasks/data-scan) would fail due to this reason. With this fix, the write failures are gracefully handled in MDLog::_recovery_thread . Bugzilla:2228358 Ceph client now verifies the cause of lagging before sending out an alarm Previously, Ceph would sometimes send out false alerts warning of laggy OSDs. For example, X client(s) laggy due to laggy OSDs . These alerts were sent out without verifying that the lagginess was actually due to the OSD, and not due to some other cause. With this fix, the X client(s) laggy due to laggy OSDs message is only sent out if some clients and an OSD is laggy. Bugzilla:2247187 4.4. Ceph Dashboard Grafana panels for performance of daemons in the Ceph Dashboard now show correct data Previously, the labels exporter were not compatible with the queries used in the Grafana dashboard. Due to this, the Grafana panels were empty for Ceph daemons performance in the Ceph Dashboard. With this fix, the label names are made compatible with the Grafana dashboard queries and the Grafana panels for performance of daemons show correct data. Bugzilla:2241309 Edit layering and deep-flatten features disabled on the Dashboard Previously, in the Ceph dashboard, it was possible to allow editing the layering & deep-flatten features, which are immutable, resulting in an error - rbd: failed to update image features: (22) Invalid argument . With this fix, editing the layering & deep-flatten features are disabled and everything works as expected. Bugzilla:2166708 ceph_daemon label is added to the labeled performance counters in Ceph exporter Previously, in Ceph exporter, adding the ceph_daemon label to the labeled performance counters was missed. With this fix, ceph_daemon label is added to the labeled performance counters in Ceph exporter. ceph daemon label is now present on all Ceph daemons performance metrics and instance_id label for Ceph Object Gateway performance metrics. Bugzilla:2240972 Protecting snapshot is enabled only if layering for its parent image is enabled Previously, protecting snapshot was enabled even if layering was disabled for its parent image. This caused errors when trying to protect the snapshot of an image for which layering was disabled. With this fix, protecting snapshot is disabled if layering for an image is disabled. Protecting snapshot is enabled only if layering for its parent image is enabled. Bugzilla:2166705 Newly added host details are now visible on the cluster expansion review page Previously, users could not see the information about the hosts that were added in the step. With this fix, hosts that were added in the step are now visible on the cluster expansion review page. Bugzilla:2232567 Ceph Object Gateway page now loads properly on the Ceph dashboard. Previously, an incorrect regex matching caused the dashboard to break when trying to load the Ceph Object Gateway page. The Ceph Object Gateway page would not load with specific configurations like rgw_frontends like beast port=80 ssl_port=443 . With this fix, the regex matching in the codebase is updated and the Ceph Object Gateway page loads without any issues. Bugzilla:2238470 4.5. 
Ceph Object Gateway Ceph Object Gateway daemon no longer crashes where phoneNumbers.addr is NULL Previously, due to a syntax error, the query for select * from s3object[*].phonenumbers where phoneNumbers.addr is NULL; would cause the Ceph Object Gateway daemon to crash. With this fix the wrong syntax is identified and reported, no longer causing the daemon to crash. Bugzilla:2230234 Ceph Object Gateway daemon no longer crashes with cast( trim) queries Previously, due to the trim skip type checking within the query for select cast( trim( leading 132140533849470.72 from _3 ) as float) from s3object; , the Ceph Object Gateway daemon would crash. With this fix the type is checked and is identified if wrong and reported, no longer causing the daemon to crash. Bugzilla:2248866 Ceph Object Gateway daemon no longer crashes with "where" clause in an s3select JSON query. Previously, due to a syntax error, an s3select JSON query with a "where" clause would cause the the Ceph Object Gateway daemon to crash. With this fix the wrong syntax is identified and reported, no longer causing the daemon to crash. Bugzilla:2225434 Ceph Object Gateway daemon no longer crashes with s3 select phonenumbers.type query Previously, due to a syntax error, the query for select phonenumbers.type from s3object[*].phonenumbers; would cause the Ceph Object Gateway daemon to crash. With this fix the wrong syntax is identified and reported, no longer causing the daemon to crash. Bugzilla:2230230 Ceph Object Gateway daemon validates arguments and no longer crashes Previously, due to an operator with missing arguments, the daemon would crash when trying to access the nonexistent arguments. With this fix the daemon validates the number of arguments per operator and the daemon no longer crashes. Bugzilla:2230233 Ceph Object Gateway daemon no longer crashes with the trim command Previously, due to the trim skip type checking within the query for select trim(LEADING '1' from '111abcdef111') from s3object; , the Ceph Object Gateway daemon would crash. With this fix, the type is checked and is identified if wrong and reported, no longer causing the daemon to crash. Bugzilla:2248862 Ceph Object Gateway daemon no longer crashes if a big value is entered Previously, due to too large of a value entry, the query for select DATE_DIFF(SECOND, utcnow(),date_add(year,1111111111111111111, utcnow())) from s3object; would cause the Ceph Object Gateway daemon to crash. With this fix, the crash is identified and an error is reported. Bugzilla:2245145 Ceph Object Gateway now parses the CSV objects without processing failures Previously, Ceph Object Gateway failed to properly parse CSV objects. When the process failed, the requests would stop without a proper error message. With this fix, the CSV parser works as expected and processes the CSV objects with no failures. Bugzilla:2241907 Object version instance IDs beginning with a hyphen are restored Previously, when restoring the index on a versioned bucket, object versions with an instance ID beginning with a hyphen would not be properly restored into the bucket index. With this fix, instance IDs beginning with a hyphen are now recognized and restored into the bucket index, as expected. Bugzilla:2247138 Multi-delete function notifications work as expected Previously, due to internal errors, such as a race condition in the code, the Ceph Object Gateway would crash or react unexpectedly when multi-delete functions were performed and the notifications were set for bucket deletions. 
With this fix, notifications for multi-delete function work as expected. Bugzilla:2239173 RADOS object multipart upload workflows complete properly Previously, in some cases, a RADOS object that was part of a multipart upload workflow objects that were created on a upload would cause certain parts to not complete or stop in the middle of the upload. With this fix, all parts upload correctly, once the multipart upload workflow is complete. Bugzilla:2008835 Users belonging to a different tenant than the bucket owner can now manage notifications Previously, a user that belonged to a different tenant than the bucket owner was not able to manage notifications. For example, modify, get, or delete. With this fix, any user with the correct permissions can manage the notifications for the buckets. Bugzilla:2180415 Ability to perform NFS setattr on buckets is removed Previously, changing the attributes stored on a bucket via export as an NFS directory triggered an inconsistency in the Ceph Object gateway bucket information cache. Due to this, subsequent accesses to the bucket via NFS failed. With this fix, the ability to perform NFS setattr on buckets is removed and attempts to perform NFS setattr on a bucket, for example, chown on the directory, have no effect. Note This might change in future releases. Bugzilla:2241145 Testing for reshardable bucket layouts is added to prevent crashes Previously, with the added bucket layout code to enable dynamic bucket resharding with multi-site, there was no check to verify if the bucket layout supported resharding during dynamic, immediate, or rescheduled resharding. Due to this, the Ceph Object gateway daemon would crash in case of dynamic bucket resharding and the radosgw-admin command would crash in case of immediate or scheduled resharding. With this fix, a test for reshardable bucket layouts is added and the crashes no longer occur. When immediate and scheduled resharding occurs, an error message is displayed. When dynamic bucket resharding occurs, the bucket is skipped. Bugzilla:2242987 The user modify -placement-id command can now be used with an empty --storage-class argument Previously, if the --storage-class argument was not used when running the 'user modify --placement-id' command, the command would fail. With this fix, the --storage-class argument can be left empty without causing the command to fail. Bugzilla:2228157 Initialization now only unregisters watches that were previously registered Previously, in some cases, an error in initialization could cause an attempt to unregister a watch that was never registered. This would result in some command line tools crashing unpredictably. With this fix, only previously registered watches are unregistered. Bugzilla:2224078 Multi-site replication now maintains consistent states between zones and prevents overwriting deleted objects Previously, a race condition in multi-site replication would allow objects that should be deleted to be copied back from another site, resulting in an inconsistent state between zones. As a result, the zone which is receiving the workload ends up with some objects which should be deleted still present. With this fix, a custom header is added to pass the destination zone's trace string and is then checked against the object's replication trace. If there is a match, a 304 response is returned, preventing the full sync from overwriting a deleted object. 
Bugzilla:2219427 The memory footprint of Ceph Object Gateway has significantly been reduced Previously, in some cases, a memory leak associated with Lua scripting integration caused excessive RGW memory growth. With this fix, the leak is fixed and the memory footprint for Ceph Object Gateway is significantly reduced. Bugzilla:2032001 Bucket index performance no longer impacted during versioned object operations Previously, in some cases, space leaks would occur and reduce bucket index performance. This was caused by a race condition related to updates of object logical head (OLH), which relates to versioned bucket current version calculations during updates. With this fix, logic errors in OLH update operations are fixed and space is no longer being leaked during versioned object operations. Bugzilla:2219467 Delete markers are working correctly with the LC rule Previously, optimization was attempted to reuse a sal object handle. Due to this, delete markers were not being generated as expected. With this fix, the change to re-use sal object handle for get-object-attributes is reverted and delete markers are created correctly. Bugzilla:2248116 SQL engine no longer causes Ceph Object Gateway crash with illegal calculations Previously, in some cases, the SQL engine would throw an exception that was not handled, causing a Ceph Object Gateway crash. This was caused due to an illegal SQL calculation of a date-time operation. With this fix, the exception is handled with an emitted error message, instead of crashing. Bugzilla:2246150 The select trim (LEADING '1' from '111abcdef111') from s3object; query now works when capitals are used in query Previously, if LEADING or TRAILING were written in all capitals, the string would not properly read, causing a float type to be referred to as a string type, thus leading to a wrong output. With this fix, type checking is introduced before completing the query, and LEADING and TRAILING work written either capitalized or in lower case. Bugzilla:2245575 JSON parsing now works for select _1.authors.name from s3object[*] limit 1 query Previously, an anonymous array given in the select _1.authors.name from s3object[*] limit 1 would give the wrong value output. With this fix, JSON parsing works, even if an anonymous array is provided to the query. Bugzilla:2236462 4.6. Multi-site Ceph Object Gateway Client no longer resets the connection for an incorrect Content-Length header field value Previously, when returning an error page to the client, for example, a 404 or 403 condition, the </body> and </html> closing tags were missing, although their presence was accounted for in the request's Content-Length header field value. Due to this, depending on the client, the TCP connection between the client and the Rados Gateway would be closed by an RST packet from the client on account of incorrect Content-Length header field value, instead of a FIN packet under normal circumstances. With this fix, send the </body> and </html> closing tags to the client under all the required conditions. The value of the Content-Length header field correctly represents the length of data sent to the client, and the client no longer resets the connection for an incorrect Content-Length reason. Bugzilla:2189412 Sync notification are sent with the correct object size Previously, when an object was synced between zones, and sync notifications were configured, the notification was sent with zero as the size of the object. With this fix, sync notifications are sent with the correct object size. 
Bugzilla:2238921 Multi-site sync properly filters and checks according to allowed zones and filters Previously, when using the multi-site sync policy, certain commands, such as radosgw-admin sync status , would not filter restricted zones or empty sync group names. The lack of filter caused the output of these commands to be misleading. With this fix, restricted zones are no longer checked or reported and empty sync group names are filtered out of the status results. Bugzilla:2159966 4.7. RADOS The ceph version command no longer returns the empty version list Previously, if the MDS daemon was not deployed in the cluster then the ceph version command returned an empty version list for MDS daemons that represented version inconsistency. This should not be shown if the daemon is not deployed in the cluster. With this fix, the daemon version information is skipped if the daemon version map is empty and the ceph version command returns the version information only for the Ceph daemons which are deployed in the cluster. Bugzilla:2110933 ms_osd_compression_algorithm now displays the correct value Previously, an incorrect value in ms_osd_compression_algorithm displayed a list of algorithms instead of the default value, causing a discrepancy by listing a set of algorithms instead of one. With this fix, only the default value is displayed when using the ms_osd_compression_algorithm command. Bugzilla:2155380 MGR no longer disconnects from the cluster without retries Previously, during network issues, clusters would disconnect with MGR without retries and the authentication of monclient would fail. With this fix, retries are added in scenarios where hunting and connection would both fail. Bugzilla:2106031 Increased timeout retry value for client_mount_timeout Previously, due to the mishandling of the client_mount_timeout configurable, the timeout for authenticating a client to monitors could reach up to 10 retries disregarding its high default value of 5 minutes. With this fix, the single-retry behavior of the configurable is restored and the authentication timeout works as expected. Bugzilla:2233800 4.8. RBD Mirroring Demoted mirror snapshot is removed following the promotion of the image Previously, due to an implementation defect, the demoted mirror snapshots would not be removed following the promotion of the image, whether on the secondary image or on the primary image. Due to this, demoted mirror snapshots would pile up and consume storage space. With this fix, the implementation defect is fixed and the appropriate demoted mirror snapshot is removed following the promotion of the image. Bugzilla:2237304 Non-primary images are now deleted when the primary image is deleted Previously, a race condition in the rbd-mirror daemon image replayer prevented a non-primary image from being deleted when the primary was deleted. Due to this, the non-primary image would not be deleted and the storage space was used. With this fix, the rbd-mirror image replayer is modified to eliminate the race condition. Non-primary images are now deleted when the primary image is deleted. Bugzilla:2230056 The librbd client correctly propagates the block-listing error to the caller Previously, when the rbd_support module's RADOS client was block-listed, the module's mirror_snapshot_schedule handler would not always shut down correctly. The handler's librbd client would not propagate the block-list error, thereby stalling the handler's shutdown. 
This lead to the failures of the mirror_snapshot_schedule handler and the rbd_support module to automatically recover from repeated client block-listing. The rbd_support module stopped scheduling mirror snapshots after its client was repeatedly block-listed. With this fix, the race in the librbd client between its exclusive lock acquisition and handling of block-listing is fixed. This allows the librbd client to propagate the block-listing error correctly to the caller, for example, the mirror_snapshot_schedule handler, while waiting to acquire an exclusive lock. The mirror_snapshot_schedule handler and the rbd_support_module automatically recovers from repeated client block-listing. Bugzilla:2237303 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/release_notes/bug-fixes |
Builds using Shipwright | Builds using Shipwright OpenShift Container Platform 4.17 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/builds_using_shipwright/index |
Chapter 10. Uninstalling a cluster on RHOSP from your own infrastructure | Chapter 10. Uninstalling a cluster on RHOSP from your own infrastructure You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure. 10.1. Downloading playbook dependencies The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: $ sudo subscription-manager register # If not done already Pull the latest subscription data: $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already Disable the current repositories: $ sudo subscription-manager repos --disable=* # If not done already Add the required repositories: $ sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: $ sudo yum install python3-openstackclient ansible python3-openstacksdk Ensure that the python command points to python3 : $ sudo alternatives --set python /usr/bin/python3 10.2. Removing a cluster from RHOSP that uses your own infrastructure You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies." You have the playbooks that you used to install the cluster. You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file. All of the playbooks are in a common directory. Procedure On a command line, run the playbooks that you downloaded: $ ansible-playbook -i inventory.yaml \ down-bootstrap.yaml \ down-control-plane.yaml \ down-compute-nodes.yaml \ down-load-balancers.yaml \ down-network.yaml \ down-security-groups.yaml Remove any DNS record changes you made for the OpenShift Container Platform installation. OpenShift Container Platform is removed from your infrastructure. | [
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_openstack/uninstalling-openstack-user |
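Once the playbooks complete, you can optionally confirm from the RHOSP side that the cluster resources described in the procedure above are gone. This verification step is not part of the documented procedure; filter the output on whatever cluster name or ID prefix your deployment used.

openstack server list
# No bootstrap, control plane, or compute servers belonging to the cluster should remain.
openstack network list
openstack security group list
# Networks and security groups created for the cluster should no longer appear in these listings.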
Proof of Concept - Deploying Red Hat Quay | Proof of Concept - Deploying Red Hat Quay Red Hat Quay 3 Deploying Red Hat Quay Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/proof_of_concept_-_deploying_red_hat_quay/index |
Chapter 2. Understanding authentication | Chapter 2. Understanding authentication For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. As an administrator, you can configure authentication for OpenShift Container Platform. 2.1. Users A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container Platform API. An OpenShift Container Platform User object represents an actor which can be granted permissions in the system by adding roles to them or to their groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Container Platform. Several types of users can exist: User type Description Regular users This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe alice System users Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin system:openshift-registry system:node:node1.example.com Service accounts These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer system:serviceaccount:foo:builder Each user must authenticate in some way to access OpenShift Container Platform. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. After authentication, policy determines what the user is authorized to do. 2.2. Groups A user can be assigned to one or more groups , each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually. In addition to explicitly defined groups, there are also system groups, or virtual groups , that are automatically provisioned by the cluster. The following default virtual groups are most important: Virtual group Description system:authenticated Automatically associated with all authenticated users. system:authenticated:oauth Automatically associated with all users authenticated with an OAuth access token. system:unauthenticated Automatically associated with all unauthenticated users. 2.3. API authentication Requests to the OpenShift Container Platform API are authenticated using the following methods: OAuth access tokens Obtained from the OpenShift Container Platform OAuth server using the <namespace_route> /oauth/authorize and <namespace_route> /oauth/token endpoints. Sent as an Authorization: Bearer... header. 
Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests. X.509 client certificates Requires an HTTPS connection to the API server. Verified by the API server against a trusted certificate authority bundle. The API server creates and distributes certificates to controllers to authenticate themselves. Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error. If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make. 2.3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 2.3.1.1. OAuth token requests Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize . Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Container Platform to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Container Platform supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows. If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow. Note To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value. If the authenticating proxy cannot support WWW-Authenticate challenges, or if OpenShift Container Platform is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request . 2.3.1.2. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user.
For more information, see User impersonation in the Kubernetes documentation. 2.3.1.3. Authentication metrics for Prometheus OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts: openshift_auth_basic_password_count counts the number of oc login user name and password attempts. openshift_auth_basic_password_count_result counts the number of oc login user name and password attempts by result, success or error . openshift_auth_form_password_count counts the number of web console login attempts. openshift_auth_form_password_count_result counts the number of web console login attempts by result, success or error . openshift_auth_password_total counts the total number of oc login and web console login attempts. | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/understanding-authentication |
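The challenge-based token flow described in this section can be exercised directly from a shell. The commands below are an illustrative sketch rather than a documented procedure: the OAuth route host, user name, and API server URL are placeholder values, and -k is used only to skip TLS verification for brevity.

curl -sku developer:password -H "X-CSRF-Token: 1" -D - -o /dev/null "https://oauth-openshift.apps.example.com/oauth/authorize?client_id=openshift-challenging-client&response_type=token"
# The OAuth server replies with a 302 redirect; the access token is embedded in the Location header (access_token=...). The X-CSRF-Token header must be set to a non-empty value to receive the Basic challenge, as noted above.
TOKEN="$(oc whoami -t)"   # or paste the access_token value parsed from the Location header
curl -k -H "Authorization: Bearer ${TOKEN}" "https://api.example.com:6443/apis"
# A request with a missing or invalid token is treated as the system:anonymous user, as described earlier in the chapter.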
4.212. oprofile | 4.212.1. RHBA-2011:1712 - oprofile bug fix and enhancement update An updated oprofile package that fixes one bug and adds two enhancements is now available for Red Hat Enterprise Linux 6. OProfile is a system-wide profiler for Linux systems. The profiling runs transparently in the background and profile data can be collected at any time. OProfile uses the hardware performance counters provided on many processors, and can use the Real Time Clock (RTC) for profiling on processors without counters. Bug Fix BZ# 717860 Previously, OProfile could encounter a buffer overrun in the OProfile daemon. This update modifies oprofiled so that OProfile now checks and reports if the filename is too large for the buffer. Enhancements BZ# 696565 Previously, the OProfile profiler did not provide the performance monitoring events for the Intel Sandy Bridge processor. This update provides the files for the Intel Sandy Bridge processor specific performance events and adds the code to identify Intel Sandy Bridge processors. Now, OProfile provides Intel Sandy Bridge specific events. BZ# 695851 Previously, the OProfile profiler did not identify some Intel Westmere processors, causing OProfile to use the fallback Intel Architected events. Now, OProfile provides Intel Westmere specific events for Intel Westmere-EX processors (model 0x2f). All OProfile users are advised to upgrade to this updated package, which fixes this bug and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/oprofile
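To check which event set OProfile selected for your processor after updating, you can list the available events from the command line. This is a general usage hint, not part of the erratum text.

ophelp | head
# ophelp prints the event list for the CPU type OProfile detected; on a Sandy Bridge or Westmere-EX system it should now show processor-specific events rather than the architected fallback set.
opcontrol --list-events
# The same listing through the legacy opcontrol interface shipped with this oprofile version.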
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/making-open-source-more-inclusive |
Chapter 27. Dataset | Chapter 27. Dataset Both producer and consumer are supported Testing of distributed and asynchronous processing is notoriously difficult. The Mock , Test and DataSet endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel's large range of Components together with the powerful Bean Integration. The DataSet component provides a mechanism to easily perform load & soak testing of your system. It works by allowing you to create DataSet instances both as a source of messages and as a way to assert that the data set is received. Camel will use the throughput logger when sending datasets. 27.1. Dependencies When using dataset with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-dataset-starter</artifactId> </dependency> 27.2. URI format Where name is used to find the DataSet instance in the Registry Camel ships with a support implementation of org.apache.camel.component.dataset.DataSet , the org.apache.camel.component.dataset.DataSetSupport class, that can be used as a base for implementing your own DataSet. Camel also ships with some implementations that can be used for testing: org.apache.camel.component.dataset.SimpleDataSet , org.apache.camel.component.dataset.ListDataSet and org.apache.camel.component.dataset.FileDataSet , all of which extend DataSetSupport . 27.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 27.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 27.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 27.4. Component Options The Dataset component supports 5 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean exchangeFormatter (advanced) Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. ExchangeFormatter 27.5. Endpoint Options The Dataset endpoint is configured using URI syntax: with the following path and query parameters: 27.5.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of DataSet to lookup in the registry. DataSet 27.5.2. Query Parameters (21 parameters) Name Description Default Type dataSetIndex (common) Controls the behaviour of the CamelDataSetIndex header. For Consumers: - off = the header will not be set - strict/lenient = the header will be set For Producers: - off = the header value will not be verified, and will not be set if it is not present = strict = the header value must be present and will be verified = lenient = the header value will be verified if it is present, and will be set if it is not present. Enum values: strict lenient off lenient String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean initialDelay (consumer) Time period in millis to wait before starting sending messages. 1000 long minRate (consumer) Wait until the DataSet contains at least this number of messages. 0 int preloadSize (consumer) Sets how many messages should be preloaded (sent) before the route completes its initialization. 0 long produceDelay (consumer) Allows a delay to be specified which causes a delay when a message is sent by the consumer (to simulate slow processing). 3 long exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern assertPeriod (producer) Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used for example to assert that exactly a number of messages arrives. For example if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more message arrives. To ensure that exactly 5 messages arrives, then you would need to wait a little period to ensure no further message arrives. This is what you can use this method for. By default this period is disabled. long consumeDelay (producer) Allows a delay to be specified which causes a delay when a message is consumed by the producer (to simulate slow processing). 0 long expectedCount (producer) Specifies the expected number of message exchanges that should be received by this endpoint. Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. If you want to assert that exactly n'th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. -1 int failFast (producer) Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean reportGroup (producer) A number that is used to turn on throughput logging based on groups of the size. int resultMinimumWaitTime (producer) Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long resultWaitTime (producer) Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long retainFirst (producer) Specifies to only retain the first n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. 
Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int retainLast (producer) Specifies to only retain the last n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int sleepForEmptyTest (producer) Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero. long copyOnExchange (producer (advanced)) Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true. true boolean 27.6. Configuring DataSet Camel will lookup in the Registry for a bean implementing the DataSet interface. So you can register your own DataSet as: <bean id="myDataSet" class="com.mycompany.MyDataSet"> <property name="size" value="100"/> </bean> 27.7. Example For example, to test that a set of messages are sent to a queue and then consumed from the queue without losing any messages: // send the dataset to a queue from("dataset:foo").to("activemq:SomeQueue"); // now lets test that the messages are consumed correctly from("activemq:SomeQueue").to("dataset:foo"); The above would look in the Registry to find the foo DataSet instance which is used to create the messages. Then you create a DataSet implementation, such as using the SimpleDataSet as described below, configuring things like how big the data set is and what the messages look like etc. 27.8. DataSetSupport (abstract class) The DataSetSupport abstract class is a nice starting point for new DataSets, and provides some useful features to derived classes. 27.8.1. Properties on DataSetSupport Property Type Default Description defaultHeaders Map<String,Object> null Specifies the default message body. For SimpleDataSet it is a constant payload; though if you want to create custom payloads per message, create your own derivation of DataSetSupport . outputTransformer org.apache.camel.Processor null size long 10 Specifies how many messages to send/consume. reportCount long -1 Specifies the number of messages to be received before reporting progress. Useful for showing progress of a large load test. 
If < 0, then size / 5, if is 0 then size , else set to reportCount value. 27.9. SimpleDataSet The SimpleDataSet extends DataSetSupport , and adds a default body. 27.9.1. Additional Properties on SimpleDataSet Property Type Default Description defaultBody Object <hello>world!</hello> Specifies the default message body. By default, the SimpleDataSet produces the same constant payload for each exchange. If you want to customize the payload for each exchange, create a Camel Processor and configure the SimpleDataSet to use it by setting the outputTransformer property. 27.10. ListDataSet The List`DataSet` extends DataSetSupport , and adds a list of default bodies. 27.10.1. Additional Properties on ListDataSet Property Type Default Description defaultBodies List<Object> empty LinkedList<Object> Specifies the default message body. By default, the ListDataSet selects a constant payload from the list of defaultBodies using the CamelDataSetIndex . If you want to customize the payload, create a Camel Processor and configure the ListDataSet to use it by setting the outputTransformer property. size long the size of the defaultBodies list Specifies how many messages to send/consume. This value can be different from the size of the defaultBodies list. If the value is less than the size of the defaultBodies list, some of the list elements will not be used. If the value is greater than the size of the defaultBodies list, the payload for the exchange will be selected using the modulus of the CamelDataSetIndex and the size of the defaultBodies list (i.e. CamelDataSetIndex % defaultBodies.size() ) 27.11. FileDataSet The FileDataSet extends ListDataSet , and adds support for loading the bodies from a file. 27.11.1. Additional Properties on FileDataSet Property Type Default Description sourceFile File null Specifies the source file for payloads delimiter String \z Specifies the delimiter pattern used by a java.util.Scanner to split the file into multiple payloads. 27.12. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.dataset-test.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.dataset-test.enabled Whether to enable auto configuration of the dataset-test component. This is enabled by default. Boolean camel.component.dataset-test.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.dataset-test.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.dataset-test.log To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false Boolean camel.component.dataset.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.dataset.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.dataset.enabled Whether to enable auto configuration of the dataset component. This is enabled by default. Boolean camel.component.dataset.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.dataset.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.dataset.log To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-dataset-starter</artifactId> </dependency>",
"dataset:name[?options]",
"dataset:name",
"<bean id=\"myDataSet\" class=\"com.mycompany.MyDataSet\"> <property name=\"size\" value=\"100\"/> </bean>",
"// send the dataset to a queue from(\"dataset:foo\").to(\"activemq:SomeQueue\"); // now lets test that the messages are consumed correctly from(\"activemq:SomeQueue\").to(\"dataset:foo\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-dataset-component-starter |
Chapter 1. Using manifests for a disconnected Satellite Server | Chapter 1. Using manifests for a disconnected Satellite Server Only users on a disconnected Satellite Server create and manage subscription manifests from the Customer Portal. Users on a connected Satellite Server create and manage their subscription manifests in the Manifests section of the Red Hat Hybrid Cloud Console. For information about creating and managing subscription manifests for a connected Satellite Server, see Creating and managing a manifest for a connected Satellite Server . | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_disconnected_satellite_server/using_manifests_con |
4.4. Virtual Memory: The Details | 4.4. Virtual Memory: The Details First, we must introduce a new concept: virtual address space . Virtual address space is the maximum amount of address space available to an application. The virtual address space varies according to the system's architecture and operating system. Virtual address space depends on the architecture because it is the architecture that defines how many bits are available for addressing purposes. Virtual address space also depends on the operating system because the manner in which the operating system was implemented may introduce additional limits over and above those imposed by the architecture. The word "virtual" in virtual address space means this is the total number of uniquely-addressable memory locations available to an application, but not the amount of physical memory either installed in the system, or dedicated to the application at any given time. In the case of our example application, its virtual address space is 15000 bytes. To implement virtual memory, it is necessary for the computer system to have special memory management hardware. This hardware is often known as an MMU (Memory Management Unit). Without an MMU, when the CPU accesses RAM, the actual RAM locations never change -- memory address 123 is always the same physical location within RAM. However, with an MMU, memory addresses go through a translation step prior to each memory access. This means that memory address 123 might be directed to physical address 82043 at one time, and physical address 20468 another time. As it turns out, the overhead of individually tracking the virtual to physical translations for billions of bytes of memory would be too great. Instead, the MMU divides RAM into pages -- contiguous sections of memory of a set size that are handled by the MMU as single entities. Keeping track of these pages and their address translations might sound like an unnecessary and confusing additional step. However, it is crucial to implementing virtual memory. For that reason, consider the following point. Taking our hypothetical application with the 15000 byte virtual address space, assume that the application's first instruction accesses data stored at address 12374. However, also assume that our computer only has 12288 bytes of physical RAM. What happens when the CPU attempts to access address 12374? What happens is known as a page fault . 4.4.1. Page Faults A page fault is the sequence of events occurring when a program attempts to access data (or code) that is in its address space, but is not currently located in the system's RAM. The operating system must handle page faults by somehow making the accessed data memory resident, allowing the program to continue operation as if the page fault had never occurred. In the case of our hypothetical application, the CPU first presents the desired address (12374) to the MMU. However, the MMU has no translation for this address. So, it interrupts the CPU and causes software, known as a page fault handler, to be executed. The page fault handler then determines what must be done to resolve this page fault. 
It can: Find where the desired page resides on disk and read it in (this is normally the case if the page fault is for a page of code) Determine that the desired page is already in RAM (but not allocated to the current process) and reconfigure the MMU to point to it Point to a special page containing only zeros, and allocate a new page for the process only if the process ever attempts to write to the special page (this is called a copy on write page, and is often used for pages containing zero-initialized data) Get the desired page from somewhere else (which is discussed in more detail later) While the first three actions are relatively straightforward, the last one is not. For that, we need to cover some additional topics. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-memory-virt-details |
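Page fault activity is easy to observe on a running Linux system, which makes the mechanism described above more concrete. The following is an informal sketch, not part of the original chapter; any command can stand in for the profiled example.

/usr/bin/time -v ls /usr > /dev/null
# GNU time's verbose report includes "Minor (reclaiming a frame) page faults" and "Major (requiring I/O) page faults"; major faults are the ones that had to bring a page in from disk.
ps -o min_flt,maj_flt,comm -p $$
# Shows the cumulative minor and major page fault counts for the current shell process.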