Chapter 24. Bottom-Up Service Development | Chapter 24. Bottom-Up Service Development Abstract There are many instances where you have Java code that already implements a set of functionality that you want to expose as part of a service-oriented application. You may also simply want to avoid using WSDL to define your interface. Using JAX-WS annotations, you can add the information required to service-enable a Java class. You can also create a Service Endpoint Interface (SEI) that can be used in place of a WSDL contract. If you want a WSDL contract, Apache CXF provides tools to generate a contract from annotated Java code. 24.1. Introduction to JAX-WS Service Development To create a service starting from Java you must do the following: Create a Service Endpoint Interface (SEI) that defines the methods you want to expose as a service (see Section 24.2, "Creating the SEI"). Note You can work directly from a Java class, but working from an interface is the recommended approach. Interfaces are better suited for sharing with the developers who are responsible for developing the applications consuming your service. The interface is smaller and does not provide any of the service's implementation details. Add the required annotations to your code (see Section 24.3, "Annotating the Code"). Generate the WSDL contract for your service (see Section 24.4, "Generating WSDL"). Note If you intend to use the SEI as the service's contract, it is not necessary to generate a WSDL contract. Publish the service as a service provider (see Chapter 31, Publishing a Service). 24.2. Creating the SEI Overview The service endpoint interface (SEI) is the piece of Java code that is shared between a service implementation and the consumers that make requests on that service. The SEI defines the methods implemented by the service and provides details about how the service will be exposed as an endpoint. When starting with a WSDL contract, the SEI is generated by the code generators. However, when starting from Java, it is the developer's responsibility to create the SEI. There are two basic patterns for creating an SEI: Green field development - In this pattern, you are developing a new service without any existing Java code or WSDL. It is best to start by creating the SEI. You can then distribute the SEI to any developers who are responsible for implementing the service providers and consumers that use the SEI. Note The recommended way to do green field service development is to start by creating a WSDL contract that defines the service and its interfaces. See Chapter 26, A Starting Point WSDL Contract . Service enablement - In this pattern, you typically have an existing set of functionality that is implemented as a Java class, and you want to service-enable it. This means that you must do two things: Create an SEI that contains only the operations that are going to be exposed as part of the service. Modify the existing Java class so that it implements the SEI. Note Although you can add the JAX-WS annotations to a Java class, it is not recommended. Writing the interface The SEI is a standard Java interface. It defines a set of methods that a class implements. It can also define a number of member fields and constants to which the implementing class has access. In the case of an SEI, the methods defined are intended to be mapped to operations exposed by a service. The SEI corresponds to a wsdl:portType element, and the methods defined by the SEI correspond to wsdl:operation elements in the wsdl:portType element.
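To make this mapping concrete, the following minimal sketch illustrates the correspondence; the package, interface, method, and return type names here are purely illustrative and are not taken from the examples in this chapter.

package com.example.payments; // hypothetical package

// When a contract is generated from this SEI, the interface maps to a
// wsdl:portType and each of its methods maps to a wsdl:operation
// inside that port type.
public interface PaymentReporter {

    // Would map to a wsdl:operation named "getStatus" in the generated port type.
    String getStatus(String paymentId);
}

The same correspondence applies to the quoteReporter SEI used in Example 24.1, "Simple SEI" and to the contract generated from it in Example 24.9, "Generated WSDL from an SEI".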
Note JAX-WS defines an annotation that allows you to specify methods that are not exposed as part of a service. However, the best practice is to leave those methods out of the SEI. Example 24.1, "Simple SEI" shows a simple SEI for a stock updating service. Example 24.1. Simple SEI Implementing the interface Because the SEI is a standard Java interface, the class that implements it is a standard Java class. If you start with a Java class you must modify it to implement the interface. If you start with the SEI, the implementation class implements the SEI. Example 24.2, "Simple Implementation Class" shows a class for implementing the interface in Example 24.1, "Simple SEI" . Example 24.2. Simple Implementation Class 24.3. Annotating the Code 24.3.1. Overview of JAX-WS Annotations The JAX-WS annotations specify the metadata used to map the SEI to a fully specified service definition. Among the information provided in the annotations are the following: The target namespace for the service. The name of the class used to hold the request message The name of the class used to hold the response message If an operation is a one way operation The binding style the service uses The name of the class used for any custom exceptions The namespaces under which the types used by the service are defined Note Most of the annotations have sensible defaults and it is not necessary to provide values for them. However, the more information you provide in the annotations, the better your service definition is specified. A well-specified service definition increases the likelihood that all parts of a distributed application will work together. 24.3.2. Required Annotations Overview In order to create a service from Java code you are only required to add one annotation to your code. You must add the @WebService annotation on both the SEI and the implementation class. The @WebService annotation The @WebService annotation is defined by the javax.jws.WebService interface and it is placed on an interface or a class that is intended to be used as a service. @WebService has the properties described in Table 24.1, " @WebService Properties" Table 24.1. @WebService Properties Property Description name Specifies the name of the service interface. This property is mapped to the name attribute of the wsdl:portType element that defines the service's interface in a WSDL contract. The default is to append PortType to the name of the implementation class. [a] targetNamespace Specifies the target namespace where the service is defined. If this property is not specified, the target namespace is derived from the package name. serviceName Specifies the name of the published service. This property is mapped to the name attribute of the wsdl:service element that defines the published service. The default is to use the name of the service's implementation class. wsdlLocation Specifies the URL where the service's WSDL contract is stored. This must be specified using a relative URL. The default is the URL where the service is deployed. endpointInterface Specifies the full name of the SEI that the implementation class implements. This property is only specified when the attribute is used on a service implementation class. portName Specifies the name of the endpoint at which the service is published. This property is mapped to the name attribute of the wsdl:port element that specifies the endpoint details for a published service. The default is the append Port to the name of the service's implementation class. 
[a] When you generate WSDL from an SEI, the interface's name is used in place of the implementation class's name. Note It is not necessary to provide values for any of the @WebService annotation's properties. However, we recommend that you provide as much information as you can. Annotating the SEI The SEI requires that you add the @WebService annotation. Because the SEI is the contract that defines the service, you should specify as much detail as possible about the service in the @WebService annotation's properties. Example 24.3, "Interface with the @WebService Annotation" shows the interface defined in Example 24.1, "Simple SEI" with the @WebService annotation. Example 24.3. Interface with the @WebService Annotation The @WebService annotation in Example 24.3, "Interface with the @WebService Annotation" does the following: Specifies that the value of the name attribute of the wsdl:portType element defining the service interface is quoteUpdater . Specifies that the target namespace of the service is http://demos.redhat.com . Specifies that the value of the name attribute of the wsdl:service element defining the published service is updateQuoteService . Specifies that the service will publish its WSDL contract at http://demos.redhat.com/quoteExampleService?wsdl . Specifies that the value of the name attribute of the wsdl:port element defining the endpoint exposing the service is updateQuotePort . Annotating the service implementation In addition to annotating the SEI with the @WebService annotation, you must also annotate the service implementation class with the @WebService annotation. When adding the annotation to the service implementation class, you only need to specify the endpointInterface property. As shown in Example 24.4, "Annotated Service Implementation Class", the property must be set to the full name of the SEI. Example 24.4. Annotated Service Implementation Class 24.3.3. Optional Annotations Abstract While the @WebService annotation is sufficient for service-enabling a Java interface or a Java class, it does not fully describe how the service will be exposed as a service provider. The JAX-WS programming model uses a number of optional annotations for adding details about your service, such as the binding it uses, to the Java code. You add these annotations to the service's SEI. The more details you provide in the SEI, the easier it is for developers to implement applications that can use the functionality it defines. It also makes the WSDL documents generated by the tools more specific. Overview Defining the Binding Properties with Annotations If you are using a SOAP binding for your service, you can use JAX-WS annotations to specify a number of the binding's properties. These properties correspond directly to the properties you can specify in a service's WSDL contract. Some of the settings, such as the parameter style, can restrict how you implement a method. These settings can also affect which annotations can be used when annotating method parameters. The @SOAPBinding annotation The @SOAPBinding annotation is defined by the javax.jws.soap.SOAPBinding interface. It provides details about the SOAP binding used by the service when it is deployed. If the @SOAPBinding annotation is not specified, a service is published using a wrapped doc/literal SOAP binding. You can put the @SOAPBinding annotation on the SEI and any of the SEI's methods. When it is used on a method, the settings of the method's @SOAPBinding annotation take precedence.
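As an illustration of this precedence rule, here is a minimal sketch (all names are hypothetical, and the Price type is assumed to exist) that sets a wrapped binding on the interface and overrides it with a bare binding on one method.

package com.example.demo; // hypothetical package

import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.ParameterStyle;
import javax.jws.soap.SOAPBinding.Style;

@WebService(name = "priceReporter")
@SOAPBinding(style = Style.DOCUMENT, parameterStyle = ParameterStyle.WRAPPED)
public interface PriceReporter {

    // Uses the interface-level wrapped doc/literal binding.
    Price getPrice(String itemId);

    // The method-level annotation takes precedence, so this operation
    // uses bare doc/literal parameters.
    @SOAPBinding(style = Style.DOCUMENT, parameterStyle = ParameterStyle.BARE)
    Price getLatestPrice(String itemId);
}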
Table 24.2, " @SOAPBinding Properties" shows the properties for the @SOAPBinding annotation. Table 24.2. @SOAPBinding Properties Property Values Description style Style.DOCUMENT (default) Style.RPC Specifies the style of the SOAP message. If RPC style is specified, each message part within the SOAP body is a parameter or return value and appears inside a wrapper element within the soap:body element. The message parts within the wrapper element correspond to operation parameters and must appear in the same order as the parameters in the operation. If DOCUMENT style is specified, the contents of the SOAP body must be a valid XML document, but its form is not as tightly constrained. use Use.LITERAL (default) Use.ENCODED [a] Specifies how the data of the SOAP message is streamed. parameterStyle [b] ParameterStyle.BARE ParameterStyle.WRAPPED (default) Specifies how the method parameters, which correspond to message parts in a WSDL contract, are placed into the SOAP message body. If BARE is specified, each parameter is placed into the message body as a child element of the message root. If WRAPPED is specified, all of the input parameters are wrapped into a single element on a request message and all of the output parameters are wrapped into a single element in the response message. [a] Use.ENCODED is not currently supported. [b] If you set the style to RPC you must use the WRAPPED parameter style. Document bare style parameters Document bare style is the most direct mapping between Java code and the resulting XML representation of the service. When using this style, the schema types are generated directly from the input and output parameters defined in the operation's parameter list. You specify you want to use bare document\literal style by using the @SOAPBinding annotation with its style property set to Style.DOCUMENT, and its parameterStyle property set to ParameterStyle.BARE. To ensure that an operation does not violate the restrictions of using document style when using bare parameters, your operations must adhere to the following conditions: The operation must have no more than one input or input/output parameter. If the operation has a return type other than void , it must not have any output or input/output parameters. If the operation has a return type of void , it must have no more than one output or input/output parameter. Note Any parameters that are placed in the SOAP header using the @WebParam annotation or the @WebResult annotation are not counted against the number of allowed parameters. Document wrapped parameters Document wrapped style allows a more RPC like mapping between the Java code and the resulting XML representation of the service. When using this style, the parameters in the method's parameter list are wrapped into a single element by the binding. The disadvantage of this is that it introduces an extra-layer of indirection between the Java implementation and how the messages are placed on the wire. To specify that you want to use wrapped document\literal style use the @SOAPBinding annotation with its style property set to Style.DOCUMENT, and its parameterStyle property set to ParameterStyle.WRAPPED. You have some control over how the wrappers are generated by using the the section called "The @RequestWrapper annotation" annotation and the the section called "The @ResponseWrapper annotation" annotation. Example Example 24.5, "Specifying a Document Bare SOAP Binding with the SOAP Binding Annotation" shows an SEI that uses document bare SOAP messages. Example 24.5. 
Specifying a Document Bare SOAP Binding with the SOAP Binding Annotation Overview Defining Operation Properties with Annotations When the runtime maps your Java method definitions into XML operation definitions, it provides details such as: What the exchanged messages look like in XML If the message can be optimized as a one-way message The namespaces where the messages are defined The @WebMethod annotation The @WebMethod annotation is defined by the javax.jws.WebMethod interface. It is placed on the methods in the SEI. The @WebMethod annotation provides the information that is normally represented in the wsdl:operation element describing the operation to which the method is associated. Table 24.3, "@WebMethod Properties" describes the properties of the @WebMethod annotation. Table 24.3. @WebMethod Properties Property Description operationName Specifies the value of the associated wsdl:operation element's name . The default value is the name of the method. action Specifies the value of the soapAction attribute of the soap:operation element generated for the method. The default value is an empty string. exclude Specifies if the method should be excluded from the service interface. The default is false. The @RequestWrapper annotation The @RequestWrapper annotation is defined by the javax.xml.ws.RequestWrapper interface. It is placed on the methods in the SEI. The @RequestWrapper annotation specifies the Java class implementing the wrapper bean for the method parameters of the request message starting a message exchange. It also specifies the element names and namespaces used by the runtime when marshalling and unmarshalling the request messages. Table 24.4, "@RequestWrapper Properties" describes the properties of the @RequestWrapper annotation. Table 24.4. @RequestWrapper Properties Property Description localName Specifies the local name of the wrapper element in the XML representation of the request message. The default value is either the name of the method or the value of the @WebMethod annotation's operationName property (see the section called "The @WebMethod annotation"). targetNamespace Specifies the namespace under which the XML wrapper element is defined. The default value is the target namespace of the SEI. className Specifies the full name of the Java class that implements the wrapper element. Note Only the className property is required. Important If the method is also annotated with the @SOAPBinding annotation, and its parameterStyle property is set to ParameterStyle.BARE , this annotation is ignored. The @ResponseWrapper annotation The @ResponseWrapper annotation is defined by the javax.xml.ws.ResponseWrapper interface. It is placed on the methods in the SEI. The @ResponseWrapper annotation specifies the Java class implementing the wrapper bean for the method parameters in the response message of the message exchange. It also specifies the element names and namespaces used by the runtime when marshalling and unmarshalling the response messages. Table 24.5, "@ResponseWrapper Properties" describes the properties of the @ResponseWrapper annotation. Table 24.5. @ResponseWrapper Properties Property Description localName Specifies the local name of the wrapper element in the XML representation of the response message. The default value is either the name of the method with Response appended, or the value of the @WebMethod annotation's operationName property with Response appended (see the section called "The @WebMethod annotation"). targetNamespace Specifies the namespace where the XML wrapper element is defined.
The default value is the target namespace of the SEI. className Specifies the full name of the Java class that implements the wrapper element. Note Only the className property is required. Important If the method is also annotated with the @SOAPBinding annotation and its parameterStyle property is set to ParameterStyle.BARE , this annotation is ignored. The @WebFault annotation The @WebFault annotation is defined by the javax.xml.ws.WebFault interface. It is placed on exceptions that are thrown by your SEI. The @WebFault annotation is used to map the Java exception to a wsdl:fault element. This information is used to marshall the exceptions into a representation that can be processed by both the service and its consumers. Table 24.6, " @WebFault Properties" describes the properties of the @WebFault annotation. Table 24.6. @WebFault Properties Property Description name Specifies the local name of the fault element. targetNamespace Specifies the namespace under which the fault element is defined. The default value is the target namespace of the SEI. faultName Specifies the full name of the Java class that implements the exception. Important The name property is required. The @Oneway annotation The @Oneway annotation is defined by the javax.jws.Oneway interface. It is placed on the methods in the SEI that will not require a response from the service. The @Oneway annotation tells the run time that it can optimize the execution of the method by not waiting for a response and by not reserving any resources to process a response. This annotation can only be used on methods that meet the following criteria: They return void They have no parameters that implement the Holder interface They do not throw any exceptions that can be passed back to a consumer Example Example 24.6, "SEI with Annotated Methods" shows an SEI with its methods annotated. Example 24.6. SEI with Annotated Methods Overview Defining Parameter Properties with Annotations The method parameters in the SEI correspond to the wsdl:message elements and their wsdl:part elements. JAX-WS provides annotations that allow you to describe the wsdl:part elements that are generated for the method parameters. The @WebParam annotation The @WebParam annotation is defined by the javax.jws.WebParam interface. It is placed on the parameters of the methods defined in the SEI. The @WebParam annotation allows you to specify the direction of the parameter, if the parameter will be placed in the SOAP header, and other properties of the generated wsdl:part . Table 24.7, " @WebParam Properties" describes the properties of the @WebParam annotation. Table 24.7. @WebParam Properties Property Values Description name Specifies the name of the parameter as it appears in the generated WSDL document. For RPC bindings, this is the name of the wsdl:part representing the parameter. For document bindings, this is the local name of the XML element representing the parameter. Per the JAX-WS specification, the default is arg N , where N is replaced with the zero-based argument index (i.e., arg0, arg1, etc.). targetNamespace Specifies the namespace for the parameter. It is only used with document bindings where the parameter maps to an XML element. The default is to use the service's namespace. mode Mode.IN (default) [a] Mode.OUT Mode.INOUT Specifies the direction of the parameter. header false (default) true Specifies if the parameter is passed as part of the SOAP header. partName Specifies the value of the name attribute of the wsdl:part element for the parameter. 
This property is used for document style SOAP bindings. [a] Any parameter that implements the Holder interface is mapped to Mode.INOUT by default. The @WebResult annotation The @WebResult annotation is defined by the javax.jws.WebResult interface. It is placed on the methods defined in the SEI. The @WebResult annotation allows you to specify the properties of the wsdl:part that is generated for the method's return value. Table 24.8, " @WebResult Properties" describes the properties of the @WebResult annotation. Table 24.8. @WebResult Properties Property Description name Specifies the name of the return value as it appears in the generated WSDL document. For RPC bindings, this is the name of the wsdl:part representing the return value. For document bindings, this is the local name of the XML element representing the return value. The default value is return. targetNamespace Specifies the namespace for the return value. It is only used with document bindings where the return value maps to an XML element. The default is to use the service's namespace. header Specifies if the return value is passed as part of the SOAP header. partName Specifies the value of the name attribute of the wsdl:part element for the return value. This property is used for document style SOAP bindings. Example Example 24.7, "Fully Annotated SEI" shows an SEI that is fully annotated. Example 24.7. Fully Annotated SEI 24.3.4. Apache CXF Annotations 24.3.4.1. WSDL Documentation @WSDLDocumentation annotation The @WSDLDocumentation annotation is defined by the org.apache.cxf.annotations.WSDLDocumentation interface. It can be placed on the SEI or the SEI methods. This annotation enables you to add documentation, which will then appear within wsdl:documentation elements after the SEI is converted to WSDL. By default, the documentation elements appear inside the port type, but you can specify the placement property to make the documentation appear at other locations in the WSDL file. Section 24.3.4.2, "@WSDLDocumentation properties" shows the properties supported by the @WSDLDocumentation annotation. 24.3.4.2. @WSDLDocumentation properties Property Description value (Required) A string containing the documentation text. placement (Optional) Specifies where in the WSDL file this documentation is to appear. For the list of possible placement values, see the section called "Placement in the WSDL contract" . faultClass (Optional) If the placement is set to be FAULT_MESSAGE , PORT_TYPE_OPERATION_FAULT , or BINDING_OPERATION_FAULT , you must also set this property to the Java class that represents the fault. @WSDLDocumentationCollection annotation The @WSDLDocumentationCollection annotation is defined by the org.apache.cxf.annotations.WSDLDocumentationCollection interface. It can be placed on the SEI or the SEI methods. This annotation is used to insert multiple documentation elements at a single placement location or at various placement locations. Placement in the WSDL contract To specify where the documentation should appear in the WSDL contract, you can specify the placement property, which is of type WSDLDocumentation.Placement . 
The placement can have one of the following values: WSDLDocumentation.Placement.BINDING WSDLDocumentation.Placement.BINDING_OPERATION WSDLDocumentation.Placement.BINDING_OPERATION_FAULT WSDLDocumentation.Placement.BINDING_OPERATION_INPUT WSDLDocumentation.Placement.BINDING_OPERATION_OUTPUT WSDLDocumentation.Placement.DEFAULT WSDLDocumentation.Placement.FAULT_MESSAGE WSDLDocumentation.Placement.INPUT_MESSAGE WSDLDocumentation.Placement.OUTPUT_MESSAGE WSDLDocumentation.Placement.PORT_TYPE WSDLDocumentation.Placement.PORT_TYPE_OPERATION WSDLDocumentation.Placement.PORT_TYPE_OPERATION_FAULT WSDLDocumentation.Placement.PORT_TYPE_OPERATION_INPUT WSDLDocumentation.Placement.PORT_TYPE_OPERATION_OUTPUT WSDLDocumentation.Placement.SERVICE WSDLDocumentation.Placement.SERVICE_PORT WSDLDocumentation.Placement.TOP Example of @WSDLDocumentation Section 24.3.4.3, "Using @WSDLDocumentation" shows how to add a @WSDLDocumentation annotation to the SEI and to one of its methods. 24.3.4.3. Using @WSDLDocumentation When WSDL, shown in Section 24.3.4.4, "WSDL generated with documentation" , is generated from the SEI in Section 24.3.4.3, "Using @WSDLDocumentation" , the default placements of the documentation elements are, respectively, PORT_TYPE and PORT_TYPE_OPERATION . 24.3.4.4. WSDL generated with documentation Example of @WSDLDocumentationCollection Section 24.3.4.5, "Using @WSDLDocumentationCollection" shows how to add a @WSDLDocumentationCollection annotation to an SEI. 24.3.4.5. Using @WSDLDocumentationCollection 24.3.4.6. Schema Validation of Messages @SchemaValidation annotation The @SchemaValidation annotation is defined by the org.apache.cxf.annotations.SchemaValidation interface. It can be placed on the SEI and on individual SEI methods. This annotation turns on schema validation of the XML messages sent to this endpoint. This can be useful for testing purposes, when you suspect there is a problem with the format of incoming XML messages. By default, validation is disabled, because it has a significant impact on performance. Schema validation type The schema validation behaviour is controlled by the type parameter, whose value is an enumeration of org.apache.cxf.annotations.SchemaValidation.SchemaValidationType type. Section 24.3.4.7, "Schema Validation Type Values" shows the list of available validation types. 24.3.4.7. Schema Validation Type Values Type Description IN Apply schema validation to incoming messages on client and server. OUT Apply schema validation to outgoing messages on client and server. BOTH Apply schema validation to both incoming and outgoing messages on client and server. NONE All schema validation is disabled. REQUEST Apply schema validation to Request messages-that is, causing validation to be applied to outgoing client messages and to incoming server messages. RESPONSE Apply schema validation to Response messages-that is, causing validation to be applied to incoming client messages, and outgoing server messages. Example The following example shows how to enable schema validation of messages for endpoints based on the MyService SEI. Note how the annotation can be applied to the SEI as a whole, as well as to individual methods in the SEI. 24.3.4.8. Specifying the Data Binding @DataBinding annotation The @DataBinding annotation is defined by the org.apache.cxf.annotations.DataBinding interface. It is placed on the SEI. This annotation is used to associate a data binding with the SEI, replacing the default JAXB data binding. 
The value of the @DataBinding annotation must be the class that provides the data binding, ClassName .class . Supported data bindings The following data bindings are currently supported by Apache CXF: org.apache.cxf.jaxb.JAXBDataBinding (Default) The standard JAXB data binding. org.apache.cxf.sdo.SDODataBinding The Service Data Objects (SDO) data binding is based on the Apache Tuscany SDO implementation. If you want to use this data binding in the context of a Maven build, you need to add a dependency on the cxf-rt-databinding-sdo artifact. org.apache.cxf.aegis.databinding.AegisDatabinding If you want to use this data binding in the context of a Maven build, you need to add a dependency on the cxf-rt-databinding-aegis artifact. org.apache.cxf.xmlbeans.XmlBeansDataBinding If you want to use this data binding in the context of a Maven build, you need to add a dependency on the cxf-rt-databinding-xmlbeans artifact. org.apache.cxf.databinding.source.SourceDataBinding This data binding belongs to the Apache CXF core. org.apache.cxf.databinding.stax.StaxDataBinding This data binding belongs to the Apache CXF core. Example Section 24.3.4.9, "Setting the data binding" shows how to associate the SDO binding with the HelloWorld SEI 24.3.4.9. Setting the data binding 24.3.4.10. Compressing Messages @GZIP annotation The @GZIP annotation is defined by the org.apache.cxf.annotations.GZIP interface. It is placed on the SEI. Enables GZIP compression of messages. GZIP is a negotiated enhancement. That is, an initial request from a client will not be gzipped, but an Accept header will be added and, if the server supports GZIP compression, the response will be gzipped and any subsequent requests will be also. Section 24.3.4.11, "@GZIP Properties" shows the optional properties supported by the @GZIP annotation. 24.3.4.11. @GZIP Properties Property Description threshold Messages smaller than the size specified by this property are not gzipped. Default is -1 (no limit). @FastInfoset The @FastInfoset annotation is defined by the org.apache.cxf.annotations.FastInfoset interface. It is placed on the SEI. Enables the use of FastInfoset format for messages. FastInfoset is a binary encoding format for XML, which aims to optimize both the message size and the processing performance of XML messages. For more details, see the following Sun article on Fast Infoset . FastInfoset is a negotiated enhancement. That is, an initial request from a client will not be in FastInfoset format, but an Accept header will be added and, if the server supports FastInfoset, the response will be in FastInfoset and any subsequent requests will be also. Section 24.3.4.12, "@FastInfoset Properties" shows the optional properties supported by the @FastInfoset annotation. 24.3.4.12. @FastInfoset Properties Property Description force A boolean property that forces the use of FastInfoset format, instead of negotiating. When true , force the use of FastInfoset format; otherwise, negotiate. Default is false . Example of @GZIP Section 24.3.4.13, "Enabling GZIP" shows how to enable GZIP compression for the HelloWorld SEI. 24.3.4.13. Enabling GZIP Exampe of @FastInfoset Section 24.3.4.14, "Enabling FastInfoset" shows how to enable the FastInfoset format for the HelloWorld SEI. 24.3.4.14. Enabling FastInfoset 24.3.4.15. Enable Logging on an Endpoint @Logging annotation The @Logging annotation is defined by the org.apache.cxf.annotations.Logging interface. It is placed on the SEI. This annotation enables logging for all endpoints associated with the SEI. 
Section 24.3.4.16, "@Logging Properties" shows the optional properties you can set in this annotation. 24.3.4.16. @Logging Properties Property Description limit Specifies the size limit, beyond which the message is truncated in the logs. Default is 64K. inLocation Specifies the location to log incoming messages. Can be either <stderr> , <stdout> , <logger> , or a filename. Default is <logger> . outLocation Specifies the location to log outgoing messages. Can be either <stderr> , <stdout> , <logger> , or a filename. Default is <logger> . Example Section 24.3.4.17, "Logging configuration using annotations" shows how to enable logging for the HelloWorld SEI, where incoming messages are sent to <stdout> and outgoing messages are sent to <logger> . 24.3.4.17. Logging configuration using annotations 24.3.4.18. Adding Properties and Policies to an Endpoint Abstract Both properties and policies can be used to associate configuration data with an endpoint. The essential difference between them is that properties are a Apache CXF specific configuration mechanism whereas policies are a standard WSDL configuration mechanism. Policies typically originate from WS specifications and standards and they are normally set by defining wsdl:policy elements that appear in the WSDL contract. By contrast, properties are Apache CXF-specific and they are normally set by defining jaxws:properties elements in the Apache CXF Spring configuration file. It is also possible, however, to define property settings and WSDL policy settings in Java using annotations, as described here. 24.3.4.19. Adding properties @EndpointProperty annotation The @EndpointProperty annotation is defined by the org.apache.cxf.annotations.EndpointProperty interface. It is placed on the SEI. This annotation adds Apache CXF-specific configuration settings to an endpoint. Endpoint properties can also be specified in a Spring configuration file. For example, to configure WS-Security on an endpoint, you could add endpoint properties using the jaxws:properties element in a Spring configuration file as follows: Alternatively, you could specify the preceding configuration settings in Java by adding @EndpointProperty annotations to the SEI, as shown in Section 24.3.4.20, "Configuring WS-Security Using @EndpointProperty Annotations" . 24.3.4.20. Configuring WS-Security Using @EndpointProperty Annotations @EndpointProperties annotation The @EndpointProperties annotation is defined by the org.apache.cxf.annotations.EndpointProperties interface. It is placed on the SEI. This annotation provides a way of grouping multiple @EndpointProperty annotations into a list. Using @EndpointProperties , it is possible to re-write Section 24.3.4.20, "Configuring WS-Security Using @EndpointProperty Annotations" as shown in Section 24.3.4.21, "Configuring WS-Security Using an @EndpointProperties Annotation" . 24.3.4.21. Configuring WS-Security Using an @EndpointProperties Annotation 24.3.4.22. Adding policies @Policy annotation The @Policy annotation is defined by the org.apache.cxf.annotations.Policy interface. It can be placed on the SEI or the SEI methods. This annotation is used to associate a WSDL policy with an SEI or an SEI method. The policy is specified by providing a URI that references an XML file containing a standard wsdl:policy element. If a WSDL contract is to be generated from the SEI (for example, using the java2ws command-line tool), you can specify whether or not you want to include this policy in the WSDL. 
Section 24.3.4.23, "@Policy Properties" shows the properties supported by the @Policy annotation. 24.3.4.23. @Policy Properties Property Description uri (Required) The location of the file containing the policy definition. includeInWSDL (Optional) Whether to include the policy in the generated contract, when generating WSDL. Default is true . placement (Optional) Specifies where in the WSDL file this documentation is to appear. For the list of possible placement values, see the section called "Placement in the WSDL contract" . faultClass (Optional) If the placement is set to be BINDING_OPERATION_FAULT or PORT_TYPE_OPERATION_FAULT , you must also set this property to specify which fault this policy applies to. The value is the Java class that represents the fault. @Policies annotation The @Policies annotation is defined by the org.apache.cxf.annotations.Policies interface. It can be placed on the SEI or thse SEI methods. This annotation provides a way of grouping multiple @Policy annotations into a list. Placement in the WSDL contract To specify where the policy should appear in the WSDL contract, you can specify the placement property, which is of type Policy.Placement . The placement can have one of the following values: Example of @Policy The following example shows how to associate WSDL policies with the HelloWorld SEI and how to associate a policy with the sayHi method. The policies themselves are stored in XML files in the file system, under the annotationpolicies directory. Example of @Policies You can use the @Policies annotation to group multiple @Policy annotations into a list, as shown in the following example: 24.4. Generating WSDL Using Maven Once your code is annotated, you can generate a WSDL contract for your service using the java2ws Maven plug-in's -wsdl option. For a detailed listing of options for the java2ws Maven plug-in see Section 44.3, "java2ws" . Example 24.8, "Generating WSDL from Java" shows how to set up the java2ws Maven plug-in to generate WSDL. Example 24.8. Generating WSDL from Java Note Replace the value of className with the qualified className. Example Example 24.9, "Generated WSDL from an SEI " shows the WSDL contract that is generated for the SEI shown in Example 24.7, "Fully Annotated SEI" . Example 24.9. Generated WSDL from an SEI [1] Board is an assumed class whose implementation is left to the reader. | [
"package com.fusesource.demo; public interface quoteReporter { public Quote getQuote(String ticker); }",
"package com.fusesource.demo; import java.util.*; public class stockQuoteReporter implements quoteReporter { public Quote getQuote(String ticker) { Quote retVal = new Quote(); retVal.setID(ticker); retVal.setVal(Board.check(ticker)); [1] Date retDate = new Date(); retVal.setTime(retDate.toString()); return(retVal); } }",
"package com.fusesource.demo; import javax.jws.*; @WebService(name=\"quoteUpdater\", targetNamespace=\"http://demos.redhat.com\", serviceName=\"updateQuoteService\", wsdlLocation=\"http://demos.redhat.com/quoteExampleService?wsdl\", portName=\"updateQuotePort\") public interface quoteReporter { public Quote getQuote(String ticker); }",
"package org.eric.demo; import javax.jws.*; @WebService(endpointInterface=\"com.fusesource.demo.quoteReporter\") public class stockQuoteReporter implements quoteReporter { public Quote getQuote(String ticker) { } }",
"package org.eric.demo; import javax.jws.*; import javax.jws.soap.*; import javax.jws.soap.SOAPBinding.*; @WebService(name=\"quoteReporter\") @SOAPBinding(parameterStyle=ParameterStyle.BARE) public interface quoteReporter { }",
"package com.fusesource.demo; import javax.jws.*; import javax.xml.ws.*; @WebService(name=\"quoteReporter\") public interface quoteReporter { @WebMethod(operationName=\"getStockQuote\") @RequestWrapper(targetNamespace=\"http://demo.redhat.com/types\", className=\"java.lang.String\") @ResponseWrapper(targetNamespace=\"http://demo.redhat.com/types\", className=\"org.eric.demo.Quote\") public Quote getQuote(String ticker); }",
"package com.fusesource.demo; import javax.jws.*; import javax.xml.ws.*; import javax.jws.soap.*; import javax.jws.soap.SOAPBinding.*; import javax.jws.WebParam.*; @WebService(targetNamespace=\"http://demo.redhat.com\", name=\"quoteReporter\") @SOAPBinding(style=Style.RPC, use=Use.LITERAL) public interface quoteReporter { @WebMethod(operationName=\"getStockQuote\") @RequestWrapper(targetNamespace=\"http://demo.redhat.com/types\", className=\"java.lang.String\") @ResponseWrapper(targetNamespace=\"http://demo.redhat.com/types\", className=\"org.eric.demo.Quote\") @WebResult(targetNamespace=\"http://demo.redhat.com/types\", name=\"updatedQuote\") public Quote getQuote( @WebParam(targetNamespace=\"http://demo.redhat.com/types\", name=\"stockTicker\", mode=Mode.IN) String ticker ); }",
"@WebService @WSDLDocumentation(\"A very simple example of an SEI\") public interface HelloWorld { @WSDLDocumentation(\"A traditional form of greeting\") String sayHi(@WebParam(name = \"text\") String text); }",
"<wsdl:definitions ... > <wsdl:portType name=\"HelloWorld\"> <wsdl:documentation>A very simple example of an SEI</wsdl:documentation> <wsdl:operation name=\"sayHi\"> <wsdl:documentation>A traditional form of greeting</wsdl:documentation> <wsdl:input name=\"sayHi\" message=\"tns:sayHi\"> </wsdl:input> <wsdl:output name=\"sayHiResponse\" message=\"tns:sayHiResponse\"> </wsdl:output> </wsdl:operation> </wsdl:portType> </wsdl:definitions>",
"@WebService @WSDLDocumentationCollection( { @WSDLDocumentation(\"A very simple example of an SEI\"), @WSDLDocumentation(value = \"My top level documentation\", placement = WSDLDocumentation.Placement.TOP), @WSDLDocumentation(value = \"Binding documentation\", placement = WSDLDocumentation.Placement.BINDING) } ) public interface HelloWorld { @WSDLDocumentation(\"A traditional form of Geeky greeting\") String sayHi(@WebParam(name = \"text\") String text); }",
"@WebService @SchemaValidation(type = SchemaValidationType.BOTH) public interface MyService { Foo validateBoth(Bar data); @SchemaValidation(type = SchemaValidationType.NONE) Foo validateNone(Bar data); @SchemaValidation(type = SchemaValidationType.IN) Foo validateIn(Bar data); @SchemaValidation(type = SchemaValidationType.OUT) Foo validateOut(Bar data); @SchemaValidation(type = SchemaValidationType.REQUEST) Foo validateRequest(Bar data); @SchemaValidation(type = SchemaValidationType.RESPONSE) Foo validateResponse(Bar data); }",
"@WebService @DataBinding(org.apache.cxf.sdo.SDODataBinding.class) public interface HelloWorld { String sayHi(@WebParam(name = \"text\") String text); }",
"@WebService @GZIP public interface HelloWorld { String sayHi(@WebParam(name = \"text\") String text); }",
"@WebService @FastInfoset public interface HelloWorld { String sayHi(@WebParam(name = \"text\") String text); }",
"@WebService @Logging(limit=16000, inLocation=\"<stdout>\") public interface HelloWorld { String sayHi(@WebParam(name = \"text\") String text); }",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" ... > <jaxws:endpoint id=\"MyService\" address=\"https://localhost:9001/MyService\" serviceName=\"interop:MyService\" endpointName=\"interop:MyServiceEndpoint\" implementor=\"com.foo.MyService\"> <jaxws:properties> <entry key=\"ws-security.callback-handler\" value=\"interop.client.UTPasswordCallback\"/> <entry key=\"ws-security.signature.properties\" value=\"etc/keystore.properties\"/> <entry key=\"ws-security.encryption.properties\" value=\"etc/truststore.properties\"/> <entry key=\"ws-security.encryption.username\" value=\"useReqSigCert\"/> </jaxws:properties> </jaxws:endpoint> </beans>",
"@WebService @EndpointProperty(name=\"ws-security.callback-handler\" value=\"interop.client.UTPasswordCallback\") @EndpointProperty(name=\"ws-security.signature.properties\" value=\"etc/keystore.properties\") @EndpointProperty(name=\"ws-security.encryption.properties\" value=\"etc/truststore.properties\") @EndpointProperty(name=\"ws-security.encryption.username\" value=\"useReqSigCert\") public interface HelloWorld { String sayHi(@WebParam(name = \"text\") String text); }",
"@WebService @EndpointProperties( { @EndpointProperty(name=\"ws-security.callback-handler\" value=\"interop.client.UTPasswordCallback\"), @EndpointProperty(name=\"ws-security.signature.properties\" value=\"etc/keystore.properties\"), @EndpointProperty(name=\"ws-security.encryption.properties\" value=\"etc/truststore.properties\"), @EndpointProperty(name=\"ws-security.encryption.username\" value=\"useReqSigCert\") }) public interface HelloWorld { String sayHi(@WebParam(name = \"text\") String text); }",
"Policy.Placement.BINDING Policy.Placement.BINDING_OPERATION Policy.Placement.BINDING_OPERATION_FAULT Policy.Placement.BINDING_OPERATION_INPUT Policy.Placement.BINDING_OPERATION_OUTPUT Policy.Placement.DEFAULT Policy.Placement.PORT_TYPE Policy.Placement.PORT_TYPE_OPERATION Policy.Placement.PORT_TYPE_OPERATION_FAULT Policy.Placement.PORT_TYPE_OPERATION_INPUT Policy.Placement.PORT_TYPE_OPERATION_OUTPUT Policy.Placement.SERVICE Policy.Placement.SERVICE_PORT",
"@WebService @Policy(uri = \"annotationpolicies/TestImplPolicy.xml\", placement = Policy.Placement.SERVICE_PORT), @Policy(uri = \"annotationpolicies/TestPortTypePolicy.xml\", placement = Policy.Placement.PORT_TYPE) public interface HelloWorld { @Policy(uri = \"annotationpolicies/TestOperationPTPolicy.xml\", placement = Policy.Placement.PORT_TYPE_OPERATION), String sayHi(@WebParam(name = \"text\") String text); }",
"@WebService @Policies({ @Policy(uri = \"annotationpolicies/TestImplPolicy.xml\", placement = Policy.Placement.SERVICE_PORT), @Policy(uri = \"annotationpolicies/TestPortTypePolicy.xml\", placement = Policy.Placement.PORT_TYPE) }) public interface HelloWorld { @Policy(uri = \"annotationpolicies/TestOperationPTPolicy.xml\", placement = Policy.Placement.PORT_TYPE_OPERATION), String sayHi(@WebParam(name = \"text\") String text); }",
"<plugin> <groupId>org.apache.cxf</groupId> <artifactId>cxf-java2ws-plugin</artifactId> <version>USD{cxf.version}</version> <executions> <execution> <id>process-classes</id> <phase>process-classes</phase> <configuration> <className> className </className> <genWsdl>true</genWsdl> </configuration> <goals> <goal>java2ws</goal> </goals> </execution> </executions> </plugin>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsdl:definitions targetNamespace=\"http://demo.eric.org/\" xmlns:tns=\"http://demo.eric.org/\" xmlns:ns1=\"\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:ns2=\"http://demo.eric.org/types\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\"> <wsdl:types> <xsd:schema> <xs:complexType name=\"quote\"> <xs:sequence> <xs:element name=\"ID\" type=\"xs:string\" minOccurs=\"0\"/> <xs:element name=\"time\" type=\"xs:string\" minOccurs=\"0\"/> <xs:element name=\"val\" type=\"xs:float\"/> </xs:sequence> </xs:complexType> </xsd:schema> </wsdl:types> <wsdl:message name=\"getStockQuote\"> <wsdl:part name=\"stockTicker\" type=\"xsd:string\"> </wsdl:part> </wsdl:message> <wsdl:message name=\"getStockQuoteResponse\"> <wsdl:part name=\"updatedQuote\" type=\"tns:quote\"> </wsdl:part> </wsdl:message> <wsdl:portType name=\"quoteReporter\"> <wsdl:operation name=\"getStockQuote\"> <wsdl:input name=\"getQuote\" message=\"tns:getStockQuote\"> </wsdl:input> <wsdl:output name=\"getQuoteResponse\" message=\"tns:getStockQuoteResponse\"> </wsdl:output> </wsdl:operation> </wsdl:portType> <wsdl:binding name=\"quoteReporterBinding\" type=\"tns:quoteReporter\"> <soap:binding style=\"rpc\" transport=\"http://schemas.xmlsoap.org/soap/http\" /> <wsdl:operation name=\"getStockQuote\"> <soap:operation style=\"rpc\" /> <wsdl:input name=\"getQuote\"> <soap:body use=\"literal\" /> </wsdl:input> <wsdl:output name=\"getQuoteResponse\"> <soap:body use=\"literal\"/> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name=\"quoteReporterService\"> <wsdl:port name=\"quoteReporterPort\" binding=\"tns:quoteReporterBinding\"> <soap:address location=\"http://localhost:9000/quoteReporterService\" /> </wsdl:port> </wsdl:service> </wsdl:definitions>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwsservicedevjavafirst |
Chapter 8. Sources | Chapter 8. Sources The updated Red Hat Ceph Storage source code packages are available at the following location: For Red Hat Enterprise Linux 9: https://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/ | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/release_notes/sources |
Chapter 100. Mock | Chapter 100. Mock Only producer is supported Testing of distributed and asynchronous processing is notoriously difficult. The Mock , Test and Dataset endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel's large range of Components together with the powerful Bean Integration. The Mock component provides a powerful declarative testing mechanism, which is similar to jMock in that it allows declarative expectations to be created on any Mock endpoint before a test begins. Then the test is run, which typically fires messages to one or more endpoints, and finally the expectations can be asserted in a test case to ensure the system worked as expected. This allows you to test various things like: The correct number of messages are received on each endpoint, The correct payloads are received, in the right order, Messages arrive on an endpoint in order, using some Expression to create an order testing function, Messages arrive match some kind of Predicate such as that specific headers have certain values, or that messages match some predicate, such as by evaluating an XPath or XQuery Expression. Note There is also the Test endpoint which is a Mock endpoint, but which uses a second endpoint to provide the list of expected message bodies and automatically sets up the Mock endpoint assertions. In other words, it's a Mock endpoint that automatically sets up its assertions from some sample messages in a File or database , for example. Note Mock endpoints keep received Exchanges in memory indefinitely. Remember that Mock is designed for testing. When you add Mock endpoints to a route, each Exchange sent to the endpoint will be stored (to allow for later validation) in memory until explicitly reset or the JVM is restarted. If you are sending high volume and/or large messages, this may cause excessive memory use. If your goal is to test deployable routes inline, consider using NotifyBuilder or AdviceWith in your tests instead of adding Mock endpoints to routes directly. There are two new options retainFirst, and retainLast that can be used to limit the number of messages the Mock endpoints keep in memory. 100.1. Dependencies When using mock with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mock-starter</artifactId> </dependency> 100.2. URI format Where someName can be any string that uniquely identifies the endpoint. 100.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 100.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 100.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. 
These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 100.4. Component Options The Mock component supports 4 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean exchangeFormatter (advanced) Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. ExchangeFormatter 100.5. Endpoint Options The Mock endpoint is configured using URI syntax: with the following path and query parameters: 100.5.1. Path Parameters (1 parameters) Name Description Default Type name (producer) Required Name of mock endpoint. String 100.5.2. Query Parameters (12 parameters) Name Description Default Type assertPeriod (producer) Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used for example to assert that exactly a number of messages arrives. For example if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more message arrives. To ensure that exactly 5 messages arrives, then you would need to wait a little period to ensure no further message arrives. This is what you can use this method for. By default this period is disabled. long expectedCount (producer) Specifies the expected number of message exchanges that should be received by this endpoint. 
Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. If you want to assert that exactly n'th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. -1 int failFast (producer) Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean reportGroup (producer) A number that is used to turn on throughput logging based on groups of the size. int resultMinimumWaitTime (producer) Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long resultWaitTime (producer) Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long retainFirst (producer) Specifies to only retain the first n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int retainLast (producer) Specifies to only retain the last n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. 
Important: When using this limitation, the getReceivedCounter() will still return the actual number of received Exchanges. For example, if we have received 5000 Exchanges and have configured to retain only the last 20 Exchanges, then getReceivedCounter() will still return 5000 but only the last 20 Exchanges are in the getExchanges() and getReceivedExchanges() methods. When using this method, some of the other expectation methods are not supported; for example, expectedBodiesReceived(Object... ) sets an expectation on the first number of bodies received. You can configure both the setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int sleepForEmptyTest (producer) Allows a sleep period to be specified, used to wait and check that this endpoint really is empty when expectedMessageCount(int) is called with zero. long copyOnExchange (producer (advanced)) Sets whether to make a deep copy of the incoming Exchange when it is received at this mock endpoint. It is true by default. true boolean 100.6. Simple Example Here is a simple example of a Mock endpoint in use. First, the endpoint is resolved on the context. Then we set an expectation, and then, after the test has run, we assert that our expectations have been met: MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class); // set expectations resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied(); You typically call the assertIsSatisfied() method after running a test to verify that the expectations were met. By default, Camel waits up to 10 seconds when assertIsSatisfied() is invoked. This can be configured with the setResultWaitTime(millis) method, or through the resultWaitTime URI option as shown in the sketch at the end of this chapter. 100.7. Using assertPeriod When the assertion is satisfied, Camel stops waiting and continues from the assertIsSatisfied method. That means if a new message arrives on the mock endpoint, just a bit later, that arrival will not affect the outcome of the assertion. Suppose you do want to test that no new messages arrive for a period afterwards; you can do that by setting the setAssertPeriod method, for example: MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class); resultEndpoint.setAssertPeriod(5000); resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied(); 100.8. Setting expectations You can see from the Javadoc of MockEndpoint the various helper methods you can use to set expectations. The main methods are as follows: Method Description expectedMessageCount(int) To define the expected message count on the endpoint. expectedMinimumMessageCount(int) To define the minimum number of expected messages on the endpoint. expectedBodiesReceived(... ) To define the expected bodies that should be received (in order). expectedHeaderReceived(... ) To define the expected header that should be received. expectsAscending(Expression) To add an expectation that messages are received in order, using the given Expression to compare messages. expectsDescending(Expression) To add an expectation that messages are received in order, using the given Expression to compare messages. expectsNoDuplicates(Expression) To add an expectation that no duplicate messages are received; using an Expression to calculate a unique identifier for each message.
This could be something like the JMSMessageID if using JMS, or some unique reference number within the message. Here is another example: resultEndpoint.expectedBodiesReceived("firstMessageBody", "secondMessageBody", "thirdMessageBody"); 100.9. Adding expectations to specific messages In addition, you can use the message(int messageIndex) method to add assertions about a specific message that is received. For example, to add expectations of the headers or body of the first message (using zero-based indexing like java.util.List), you can use the following code: resultEndpoint.message(0).header("foo").isEqualTo("bar"); There are some examples of the Mock endpoint in use in the camel-core processor tests. 100.10. Mocking existing endpoints Camel allows you to automatically mock existing endpoints in your Camel routes. Note How it works The endpoints are still in action. What happens differently is that a Mock endpoint is injected and receives the message first, and then delegates the message to the target endpoint. You can view this as a kind of intercept and delegate, or endpoint listener. Suppose you have the route below: Route @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").routeId("start") .to("direct:foo").to("log:foo").to("mock:result"); from("direct:foo").routeId("foo") .transform(constant("Bye World")); } }; } You can then use the adviceWith feature in Camel to mock all the endpoints in a given route from your unit test, as shown below: adviceWith mocking all endpoints @Test public void testAdvisedMockEndpoints() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, "start", a -> // mock all endpoints a.mockEndpoints()); getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // all the endpoints was mocked assertNotNull(context.hasEndpoint("mock:direct:start")); assertNotNull(context.hasEndpoint("mock:direct:foo")); assertNotNull(context.hasEndpoint("mock:log:foo")); } Notice that the mock endpoints are given the URI mock:<endpoint>, for example mock:direct:foo. Camel logs the endpoints being mocked at INFO level: Note Mocked endpoints are without parameters Endpoints which are mocked will have their parameters stripped off. For example the endpoint log:foo?showAll=true will be mocked to the following endpoint mock:log:foo. Notice the parameters have been removed. It is also possible to mock only certain endpoints by using a pattern.
For example, to mock only the log endpoints, do as shown: adviceWith mocking only log endpoints using a pattern @Test public void testAdvisedMockEndpointsWithPattern() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, "start", a -> // mock only log endpoints a.mockEndpoints("log*")); // now we can refer to log:foo as a mock and set our expectations getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // only the log:foo endpoint was mocked assertNotNull(context.hasEndpoint("mock:log:foo")); assertNull(context.hasEndpoint("mock:direct:start")); assertNull(context.hasEndpoint("mock:direct:foo")); } The supported pattern can be a wildcard or a regular expression. See more details about this at Intercept, as it is the same matching function used by Camel. Note Keep in mind that mocking endpoints causes the messages to be copied when they arrive on the mock. That means Camel will use more memory. This may not be suitable when you send a lot of messages. 100.11. Mocking existing endpoints using the camel-test component Instead of using adviceWith to instruct Camel to mock endpoints, you can easily enable this behavior when using the camel-test Test Kit. The same route can be tested as follows. Notice that we return "*" from the isMockEndpoints method, which tells Camel to mock all endpoints. If you want to mock only the log endpoints, you can return "log*" instead. isMockEndpoints using camel-test kit public class IsMockEndpointsJUnit4Test extends CamelTestSupport { @Override public String isMockEndpoints() { // override this method and return the pattern for which endpoints to mock. // use * to indicate all return "*"; } @Test public void testMockAllEndpoints() throws Exception { // notice we have automatic mocked all endpoints and the name of the endpoints is "mock:uri" getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // all the endpoints was mocked assertNotNull(context.hasEndpoint("mock:direct:start")); assertNotNull(context.hasEndpoint("mock:direct:foo")); assertNotNull(context.hasEndpoint("mock:log:foo")); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("direct:foo").to("log:foo").to("mock:result"); from("direct:foo").transform(constant("Bye World")); } }; } } 100.12.
Mocking existing endpoints with XML DSL If you do not use the camel-test component for unit testing (as shown above), you can use a different approach when using XML files for routes. The solution is to create a new XML file used by the unit test and then include the intended XML file which has the route you want to test. Suppose we have the route in the camel-route.xml file: camel-route.xml <!-- this camel route is in the camel-route.xml file --> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="direct:foo"/> <to uri="log:foo"/> <to uri="mock:result"/> </route> <route> <from uri="direct:foo"/> <transform> <constant>Bye World</constant> </transform> </route> </camelContext> Then we create a new XML file as follows, where we include the camel-route.xml file and define a Spring bean with the class org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy, which tells Camel to mock all endpoints: test-camel-route.xml <!-- the Camel route is defined in another XML file --> <import resource="camel-route.xml"/> <!-- bean which enables mocking all endpoints --> <bean id="mockAllEndpoints" class="org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy"/> Then in your unit test you load the new XML file (test-camel-route.xml) instead of camel-route.xml. To mock only the log endpoints, you can define the pattern in the constructor for the bean: <bean id="mockAllEndpoints" class="org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy"> <constructor-arg index="0" value="log*"/> </bean> 100.13. Mocking endpoints and skip sending to original endpoint Sometimes you want to easily mock and skip sending to certain endpoints, so that the message is detoured and sent to the mock endpoint only. You can use the mockEndpointsAndSkip method using AdviceWith. The example below skips sending to the two endpoints "direct:foo" and "direct:bar". adviceWith mock and skip sending to endpoints @Test public void testAdvisedMockEndpointsWithSkip() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context.getRouteDefinitions().get(0), context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock sending to direct:foo and direct:bar and skip send to it mockEndpointsAndSkip("direct:foo", "direct:bar"); } }); getMockEndpoint("mock:result").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedMessageCount(1); getMockEndpoint("mock:direct:bar").expectedMessageCount(1); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to // the seda endpoint SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } The same example using the Test Kit isMockEndpointsAndSkip using camel-test kit public class IsMockEndpointsAndSkipJUnit4Test extends CamelTestSupport { @Override public String isMockEndpointsAndSkip() { // override this method and return the pattern for which endpoints to mock, // and skip sending to the original endpoint.
return "direct:foo"; } @Test public void testMockEndpointAndSkip() throws Exception { // notice we have automatic mocked the direct:foo endpoints and the name of the endpoints is "mock:uri" getMockEndpoint("mock:result").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedMessageCount(1); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("direct:foo").to("mock:result"); from("direct:foo").transform(constant("Bye World")).to("seda:foo"); } }; } } 100.14. Limiting the number of messages to keep A Mock endpoint will by default keep a copy of every Exchange that it receives. So if you test with a lot of messages, it will consume memory. The two options retainFirst and retainLast can be used to keep only the first and/or last N Exchanges. For example, in the code below we only want to retain a copy of the first 5 and last 5 Exchanges the mock receives. MockEndpoint mock = getMockEndpoint("mock:data"); mock.setRetainFirst(5); mock.setRetainLast(5); mock.expectedMessageCount(2000); mock.assertIsSatisfied(); Using this has some limitations. The getExchanges() and getReceivedExchanges() methods on the MockEndpoint will return only the retained copies of the Exchanges. So in the example above, the list will contain 10 Exchanges: the first five and the last five. The retainFirst and retainLast options also have limitations on which expectation methods you can use. For example, the expectedXXX methods that work on message bodies, headers, etc. will only operate on the retained messages. In the example above they can test only the expectations on the 10 retained messages. 100.15. Testing with arrival times The Mock endpoint stores the arrival time of the message as a property on the Exchange: Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class); You can use this information to know when the message arrived on the mock. It also provides the foundation for knowing the time interval between the previous and the next message arriving on the mock. You can use this to set expectations using the arrives DSL on the Mock endpoint. For example, to say that the first message should arrive between 0-2 seconds before the next, you can do: mock.message(0).arrives().noLaterThan(2).seconds().beforeNext(); You can also define that the 2nd message (0-based index) should arrive no later than 0-2 seconds after the previous: mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious(); You can also use between to set a lower bound. For example, suppose that it should be between 1-4 seconds: mock.message(1).arrives().between(1, 4).seconds().afterPrevious(); You can also set the expectation on all messages, for example to say that the gap between them should be at most 1 second: mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext(); Note Time units In the example above we use seconds as the time unit, but Camel offers milliseconds and minutes as well. 100.16. Spring Boot Auto-Configuration The component supports 5 options, which are listed below.
Name Description Default Type camel.component.mock.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mock.enabled Whether to enable auto configuration of the mock component. This is enabled by default. Boolean camel.component.mock.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.mock.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mock.log To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mock-starter</artifactId> </dependency>",
"mock:someName[?options]",
"mock:name",
"MockEndpoint resultEndpoint = context.getEndpoint(\"mock:foo\", MockEndpoint.class); // set expectations resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied();",
"MockEndpoint resultEndpoint = context.getEndpoint(\"mock:foo\", MockEndpoint.class); resultEndpoint.setAssertPeriod(5000); resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied();",
"resultEndpoint.expectedBodiesReceived(\"firstMessageBody\", \"secondMessageBody\", \"thirdMessageBody\");",
"resultEndpoint.message(0).header(\"foo\").isEqualTo(\"bar\");",
"@Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").routeId(\"start\") .to(\"direct:foo\").to(\"log:foo\").to(\"mock:result\"); from(\"direct:foo\").routeId(\"foo\") .transform(constant(\"Bye World\")); } }; }",
"@Test public void testAdvisedMockEndpoints() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, \"start\", a -> // mock all endpoints a.mockEndpoints()); getMockEndpoint(\"mock:direct:start\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // all the endpoints was mocked assertNotNull(context.hasEndpoint(\"mock:direct:start\")); assertNotNull(context.hasEndpoint(\"mock:direct:foo\")); assertNotNull(context.hasEndpoint(\"mock:log:foo\")); }",
"INFO Adviced endpoint [direct://foo] with mock endpoint [mock:direct:foo]",
"@Test public void testAdvisedMockEndpointsWithPattern() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, \"start\", a -> // mock only log endpoints a.mockEndpoints(\"log*\")); // now we can refer to log:foo as a mock and set our expectations getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // only the log:foo endpoint was mocked assertNotNull(context.hasEndpoint(\"mock:log:foo\")); assertNull(context.hasEndpoint(\"mock:direct:start\")); assertNull(context.hasEndpoint(\"mock:direct:foo\")); }",
"public class IsMockEndpointsJUnit4Test extends CamelTestSupport { @Override public String isMockEndpoints() { // override this method and return the pattern for which endpoints to mock. // use * to indicate all return \"*\"; } @Test public void testMockAllEndpoints() throws Exception { // notice we have automatic mocked all endpoints and the name of the endpoints is \"mock:uri\" getMockEndpoint(\"mock:direct:start\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // all the endpoints was mocked assertNotNull(context.hasEndpoint(\"mock:direct:start\")); assertNotNull(context.hasEndpoint(\"mock:direct:foo\")); assertNotNull(context.hasEndpoint(\"mock:log:foo\")); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"direct:foo\").to(\"log:foo\").to(\"mock:result\"); from(\"direct:foo\").transform(constant(\"Bye World\")); } }; } }",
"<!-- this camel route is in the camel-route.xml file --> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"direct:foo\"/> <to uri=\"log:foo\"/> <to uri=\"mock:result\"/> </route> <route> <from uri=\"direct:foo\"/> <transform> <constant>Bye World</constant> </transform> </route> </camelContext>",
"<!-- the Camel route is defined in another XML file --> <import resource=\"camel-route.xml\"/> <!-- bean which enables mocking all endpoints --> <bean id=\"mockAllEndpoints\" class=\"org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy\"/>",
"<bean id=\"mockAllEndpoints\" class=\"org.apache.camel.impl.InterceptSendToMockEndpointStrategy\"> <constructor-arg index=\"0\" value=\"log*\"/> </bean>",
"@Test public void testAdvisedMockEndpointsWithSkip() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context.getRouteDefinitions().get(0), context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock sending to direct:foo and direct:bar and skip send to it mockEndpointsAndSkip(\"direct:foo\", \"direct:bar\"); } }); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedMessageCount(1); getMockEndpoint(\"mock:direct:bar\").expectedMessageCount(1); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to // the seda endpoint SedaEndpoint seda = context.getEndpoint(\"seda:foo\", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); }",
"public class IsMockEndpointsAndSkipJUnit4Test extends CamelTestSupport { @Override public String isMockEndpointsAndSkip() { // override this method and return the pattern for which endpoints to mock, // and skip sending to the original endpoint. return \"direct:foo\"; } @Test public void testMockEndpointAndSkip() throws Exception { // notice we have automatic mocked the direct:foo endpoints and the name of the endpoints is \"mock:uri\" getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedMessageCount(1); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint(\"seda:foo\", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"direct:foo\").to(\"mock:result\"); from(\"direct:foo\").transform(constant(\"Bye World\")).to(\"seda:foo\"); } }; } }",
"MockEndpoint mock = getMockEndpoint(\"mock:data\"); mock.setRetainFirst(5); mock.setRetainLast(5); mock.expectedMessageCount(2000); mock.assertIsSatisfied();",
"Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class);",
"mock.message(0).arrives().noLaterThan(2).seconds().beforeNext();",
"mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious();",
"mock.message(1).arrives().between(1, 4).seconds().afterPrevious();",
"mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext();"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mock-component-starter |
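A supplementary sketch for the Mock component chapter above (not part of the original text): the options listed in sections 100.4 and 100.5 can also be set directly as URI query parameters instead of calling the setter methods. The endpoint name mock:data and the option values below are illustrative assumptions only.
MockEndpoint mock = context.getEndpoint("mock:data?retainFirst=10&retainLast=10&resultWaitTime=20000", MockEndpoint.class);
// keep only the first and last 10 Exchanges, and let assertIsSatisfied() wait up to 20 seconds
mock.expectedMessageCount(2000);
// send some messages ...
mock.assertIsSatisfied();
This is equivalent to calling setRetainFirst(10), setRetainLast(10), and setResultWaitTime(20000) on the endpoint.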
Chapter 57. File Systems | Chapter 57. File Systems NetApp storage appliances serving NFSv4 are advised to check their configuration Note that features can be enabled or disabled on a per-minor version basis when using NetApp storage appliances that serve NFSv4. It is recommended to verify the configuration to ensure that the appropriate features are enabled as desired, for example by using the following Data ONTAP command: (BZ# 1450447 ) | [
"vserver nfs show -vserver <vserver-name> -fields v4.0-acl,v4.0-read-delegation,v4.0-write-delegation,v4.0-referrals,v4.0-migration,v4.1-referrals,v4.1-migration,v4.1-acl,v4.1-read-delegation,v4.1-write-delegation"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_file_systems |
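As an optional cross-check from the Red Hat Enterprise Linux client side (a supplementary sketch, not part of the advisory above), you can confirm which NFS minor version was actually negotiated for a mount, and therefore which per-minor-version settings on the appliance apply:
nfsstat -m
grep ' nfs4 ' /proc/mounts
Look for the vers= option (for example vers=4.0 or vers=4.1) in the reported mount flags.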
function::user_long | function::user_long Name function::user_long - Retrieves a long value stored in user space. Synopsis Arguments addr The user space address to retrieve the long from. General Syntax user_long:long(addr:long) Description Returns the long value from a given user space address. Returns zero when user space data is not accessible. Note that the size of the long depends on the architecture of the current user space task (for those architectures that support both 64/32 bit compat tasks). | [
"function user_long:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-long |
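For orientation, the following is a minimal SystemTap sketch of calling user_long from a probe. Only the user_long signature comes from this page; the probed binary ./a.out, the function read_counter, and its pointer parameter $ptr are hypothetical names used purely for illustration.
probe process("./a.out").function("read_counter")
{
  # $ptr is assumed to be a pointer parameter holding a user-space address
  printf("long at %p = %d\n", $ptr, user_long($ptr))
}
If the address is not accessible, user_long returns zero, as described above.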
Chapter 4. Security and Authentication of HawtIO | Chapter 4. Security and Authentication of HawtIO HawtIO enables authentication out of the box depending on the runtimes/containers it runs with. To use HawtIO with your application, you must either set up authentication for the runtime or disable HawtIO authentication. 4.1. Configuration properties The following table lists the security-related configuration properties for the HawtIO core system. Name Default Description hawtio.authenticationContainerDiscoveryClasses io.hawt.web.tomcat.TomcatAuthenticationContainerDiscovery List of used AuthenticationContainerDiscovery implementations separated by a comma. By default, there is just TomcatAuthenticationContainerDiscovery, which is used to authenticate users on Tomcat from the tomcat-users.xml file. Feel free to remove it if you want to authenticate users on Tomcat from the configured JAAS login module, or feel free to add more classes of your own. hawtio.authenticationContainerTomcatDigestAlgorithm NONE When using the Tomcat tomcat-users.xml file, passwords can be hashed instead of plain text. Use this to specify the digest algorithm; valid values are NONE, MD5, SHA, SHA-256, SHA-384, and SHA-512. hawtio.authenticationEnabled true Whether or not security is enabled. hawtio.keycloakClientConfig classpath:keycloak.json Keycloak configuration file used for the front end. It is mandatory if Keycloak integration is enabled. hawtio.keycloakEnabled false Whether to enable or disable Keycloak integration. hawtio.noCredentials401 false Whether to return HTTP status 401 when authentication is enabled, but no credentials have been provided. Returning 401 will cause the browser popup window to prompt for credentials. By default this option is false, returning HTTP status 403 instead. hawtio.realm hawtio The security realm used to log in. hawtio.rolePrincipalClasses Fully qualified principal class name(s). A comma can separate multiple classes. hawtio.roles admin, manager, viewer The user roles that are required to log in to the console. A comma can separate multiple roles to allow. Set to * or an empty value to disable role checking when HawtIO authenticates a user. hawtio.tomcatUserFileLocation conf/tomcat-users.xml Specify an alternative location for the tomcat-users.xml file, e.g. /production/userlocation/. 4.2. Quarkus HawtIO is secured with the authentication mechanisms that Quarkus and Keycloak provide. If you want to disable HawtIO authentication for Quarkus, add the following configuration to application.properties : quarkus.hawtio.authenticationEnabled = false 4.2.1. Quarkus authentication mechanisms HawtIO is just a web application in terms of Quarkus, so the various mechanisms Quarkus provides are used to authenticate HawtIO in the same way it authenticates a web application. Here we show how you can use the properties-based authentication with HawtIO for demonstration purposes. Important The properties-based authentication is not recommended for use in production. This mechanism is for development and testing purposes only. To use the properties-based authentication with HawtIO, add the following dependency to pom.xml : <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-elytron-security-properties-file</artifactId> </dependency> You can then define users in application.properties to enable the authentication. For example, defining a user hawtio with password s3cr3t!
and role admin would look like the following: quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.hawtio = s3cr3t! quarkus.security.users.embedded.roles.hawtio = admin Example: See Quarkus example for a working example of the properties-based authentication. 4.2.2. Quarkus with Keycloak See Keycloak Integration - Quarkus . 4.3. Spring Boot In addition to the standard JAAS authentication, HawtIO on Spring Boot can be secured through Spring Security or Keycloak . If you want to disable HawtIO authentication for Spring Boot, add the following configuration to application.properties : hawtio.authenticationEnabled = false 4.3.1. Spring Security To use Spring Security with HawtIO: Add org.springframework.boot:spring-boot-starter-security to the dependencies in pom.xml : <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> Spring Security configuration in src/main/resources/application.properties should look like the following: spring.security.user.name = hawtio spring.security.user.password = s3cr3t! spring.security.user.roles = admin,viewer A security config class has to be defined to set up how to secure the application with Spring Security: @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests().anyRequest().authenticated() .and() .formLogin() .and() .httpBasic() .and() .csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()); return http.build(); } } Example: See springboot-security example for a working example. 4.3.1.1. Connecting to a remote application with Spring Security If you try to connect to a remote Spring Boot application with Spring Security enabled, make sure the Spring Security configuration allows access from the HawtIO console. Most likely, the default CSRF protection prohibits remote access to the Jolokia endpoint and thus causes authentication failures at the HawtIO console. Warning Be aware that it will expose your application to the risk of CSRF attacks. The easiest solution is to disable CSRF protection for the Jolokia endpoint at the remote application as follows. import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint; import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { ... // Disable CSRF protection for the Jolokia endpoint http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class)); return http.build(); } } To secure the Jolokia endpoint even without Spring Security's CSRF protection, you need to provide a jolokia-access.xml file under src/main/resources/ like the following (snippet) so that only trusted nodes can access it: <restrict> ... <cors> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> <strict-checking /> </cors> </restrict> 4.3.2. Spring Boot with Keycloak See Keycloak Integration - Spring Boot . | [
"quarkus.hawtio.authenticationEnabled = false",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-elytron-security-properties-file</artifactId> </dependency>",
"quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.hawtio = s3cr3t! quarkus.security.users.embedded.roles.hawtio = admin",
"hawtio.authenticationEnabled = false",
"<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency>",
"spring.security.user.name = hawtio spring.security.user.password = s3cr3t! spring.security.user.roles = admin,viewer",
"@EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests().anyRequest().authenticated() .and() .formLogin() .and() .httpBasic() .and() .csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()); return http.build(); } }",
"import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint; import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { // Disable CSRF protection for the Jolokia endpoint http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class)); return http.build(); } }",
"<restrict> <cors> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> <strict-checking /> </cors> </restrict>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/hawtio_diagnostic_console_guide/security-and-authentication-of-hawtio |
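To tie the configuration table at the start of this chapter to the runtime examples above, the following is a hedged sketch of setting a few hawtio.* properties in a Spring Boot application.properties file; the realm name and role list are assumptions, not recommendations:
# keep authentication on, name the login realm, and restrict console access to selected roles
hawtio.authenticationEnabled = true
hawtio.realm = hawtio
hawtio.roles = admin,viewer
Depending on the runtime, the same keys may also be accepted as JVM system properties.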
Chapter 6. AWS Lambda Sink | Chapter 6. AWS Lambda Sink Send a payload to an AWS Lambda function 6.1. Configuration Options The following table summarizes the configuration options available for the aws-lambda-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string function * Function Name The Lambda Function name string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string Note Fields marked with an asterisk (*) are mandatory. 6.2. Dependencies At runtime, the aws-lambda-sink Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:aws2-lambda 6.3. Usage This section describes how you can use the aws-lambda-sink . 6.3.1. Knative Sink You can use the aws-lambda-sink Kamelet as a Knative sink by binding it to a Knative object. aws-lambda-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: "The Access Key" function: "The Function Name" region: "eu-west-1" secretKey: "The Secret Key" 6.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 6.3.1.2. Procedure for using the cluster CLI Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-lambda-sink-binding.yaml 6.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 6.3.2. Kafka Sink You can use the aws-lambda-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-lambda-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: "The Access Key" function: "The Function Name" region: "eu-west-1" secretKey: "The Secret Key" 6.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 6.3.2.2. Procedure for using the cluster CLI Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-lambda-sink-binding.yaml 6.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 6.4. 
Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-lambda-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: \"The Access Key\" function: \"The Function Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-lambda-sink-binding.yaml",
"kamel bind channel:mychannel aws-lambda-sink -p \"sink.accessKey=The Access Key\" -p \"sink.function=The Function Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: \"The Access Key\" function: \"The Function Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"",
"apply -f aws-lambda-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-lambda-sink -p \"sink.accessKey=The Access Key\" -p \"sink.function=The Function Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/aws-lambda-sink |
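After creating either binding, you can optionally confirm that the resource exists and inspect its status. This verification step is a supplementary sketch and not part of the Kamelet reference above; the binding name matches the metadata.name used in the examples:
oc get kameletbinding aws-lambda-sink-binding
oc describe kameletbinding aws-lambda-sink-binding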
19.6.3. Related Books | 19.6.3. Related Books Sendmail Milters: A Guide for Fighting Spam by Bryan Costales and Marcia Flynt; Addison-Wesley - A good Sendmail guide that can help you customize your mail filters. Sendmail by Bryan Costales with Eric Allman et al.; O'Reilly & Associates - A good Sendmail reference written with the assistance of the original creator of Delivermail and Sendmail. Removing the Spam: Email Processing and Filtering by Geoff Mulligan; Addison-Wesley Publishing Company - A volume that looks at various methods used by email administrators using established tools, such as Sendmail and Procmail, to manage spam problems. Internet Email Protocols: A Developer's Guide by Kevin Johnson; Addison-Wesley Publishing Company - Provides a very thorough review of major email protocols and the security they provide. Managing IMAP by Dianna Mullet and Kevin Mullet; O'Reilly & Associates - Details the steps required to configure an IMAP server. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-email-related-books |
3.4. Discovering and Joining Identity Domains | 3.4. Discovering and Joining Identity Domains The realm discover command returns complete domain configuration and a list of packages that must be installed for the system to be enrolled in the domain. The realm join command then sets up the local machine for use with a specified domain by configuring both the local system services and the entries in the identity domain. The process run by realm join follows these steps: Running a discovery scan for the specified domain. Automatic installation of the packages required to join the system to the domain. This includes SSSD and the PAM home directory job packages. Note that the automatic installation of packages requires the PackageKit suite to be running. Note If PackageKit is disabled, the system prompts you for the missing packages, and you will be required to install them manually using the yum utility. Joining the domain by creating an account entry for the system in the directory. Creating the /etc/krb5.keytab host keytab file. Configuring the domain in SSSD and restarting the service. Enabling domain users for the system services in PAM configuration and the /etc/nsswitch.conf file. Discovering Domains When run without any options, the realm discover command displays information about the default DNS domain, which is the domain assigned through the Dynamic Host Configuration Protocol (DHCP): It is also possible to run a discovery for a specific domain. To do this, run realm discover and add the name of the domain you want to discover: The realmd system will then use DNS SRV lookups to find the domain controllers in this domain automatically. Note The realm discover command requires NetworkManager to be running; in particular, it depends on the D-Bus interface of NetworkManager. If your system does not use NetworkManager, always specify the domain name in the realm discover command. The realmd system can discover both Active Directory and Identity Management domains. If both domains exist in your environment, you can limit the discovery results to a specific type of server using the --server-software option. For example: One of the attributes returned in the discovery search is login-policy , which shows if domain users are allowed to log in as soon as the join is complete. If logins are not allowed by default, you can allow them manually by using the realm permit command. For details, see Section 3.7, "Managing Login Permissions for Domain Users" . For more information about the realm discover command, see the realm (8) man page. Joining a Domain Important Note that Active Directory domains require unique computer names to be used. Both NetBIOS computer name and its DNS host name should be uniquely defined and correspond to each other. To join the system to an identity domain, use the realm join command and specify the domain name: By default, the join is performed as the domain administrator. For AD, the administrator account is called Administrator ; for IdM, it is called admin . To connect as a different user, use the -U option: The command first attempts to connect without credentials, but it prompts for a password if required. If Kerberos is properly configured on a Linux system, joining can also be performed with a Kerberos ticket for authentication. To select a Kerberos principal, use the -U option. The realm join command accepts several other configuration options. For more information about the realm join command, see the realm (8) man page. Example 3.1. 
Example Procedure for Enrolling a System into a Domain Run the realm discover command to display information about the domain. Run the realm join command and pass the domain name to the command. Provide the administrator password if the system prompts for it. Note that when discovering or joining a domain, realmd checks for the DNS SRV record: _ldap._tcp.domain.example.com. for Identity Management records _ldap._tcp.dc._msdcs.domain.example.com. for Active Directory records The record is created by default when AD is configured, which enables it to be found by the service discovery. Testing the System Configuration after Joining a Domain To test whether the system was successfully enrolled into a domain, verify that you can log in as a user from the domain and that the user information is displayed correctly: Run the id user @ domain_name command to display information about a user from the domain. Using the ssh utility, log in as the same user. Verify that the pwd utility prints the user's home directory. Verify that the id utility prints the same information as the id user @ domain_name command from the first step. The kinit utility is also useful when testing whether the domain join was successful. Note that to use the utility, the krb5-workstation package must be installed. | [
"realm discover ad.example.com type: kerberos realm-name: AD.EXAMPLE.COM domain-name: ad.example.com configured: no server-software: active-directory client-software: sssd required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common",
"realm discover ad.example.com",
"realm discover --server-software=active-directory",
"realm join ad.example.com realm: Joined ad.example.com domain",
"realm join ad.example.com -U user",
"kinit user # realm join ad.example.com -U user",
"realm discover ad.example.com ad.example.com type: kerberos realm-name: AD.EXAMPLE.COM domain-name: ad.example.com configured: no server-software: active-directory client-software: sssd",
"realm join ad.example.com Password for Administrator: password",
"id user @ ad.example.com uid=1348601103([email protected]) gid=1348600513(domain [email protected]) groups=1348600513(domain [email protected])",
"ssh -l user @ ad.example.com linux-client.ad.example.com [email protected]@linux-client.ad.example.com's password: Creating home directory for [email protected].",
"pwd /home/ad.example.com/user",
"id uid=1348601103([email protected]) gid=1348600513(domain [email protected]) groups=1348600513(domain [email protected]) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/realmd-domain |
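To complement the kinit note above, a minimal check looks like the following; the user and realm are the same example values used in this section, and obtaining a ticket without errors indicates that the join and the Kerberos configuration are working:
kinit [email protected]
klist
The klist output should show a valid ticket-granting ticket for the AD.EXAMPLE.COM realm.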
7.57. gcc | 7.57. gcc 7.57.1. RHBA-2015:1339 - gcc bug fix and enhancement update Updated gcc packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The gcc packages provide compilers for C, C++, Java, Fortran, Objective C, and Ada 95 GNU, as well as related support libraries. Bug Fixes BZ# 1190640 Previously, due to a bug in the stdarg functions optimization, the compiler could produce incorrect code. The problem occurred only when the va_list variable escaped a PHI node. This bug has been fixed, and the compiler now generates correct code. BZ# 1150606 Previously, when the vectorization optimization was enabled, the compiler could extract a scalar component of a vector with element types whose precision did not match the precision of their mode. Consequently, GCC could terminate unexpectedly while trying to vectorize code that was using bit-fields. With this update, the compiler no longer vectorizes such code, and the code now compiles correctly. BZ# 1177458 Previously, the compiler did not properly handle incorrect usage of the PCH (Precompiled Headers) feature. When a PCH file was not included as the first include, the compiler terminated unexpectedly with a segmentation fault. The compiler has been fixed not to use such incorrect includes, and it no longer crashes in this scenario. BZ# 1134560 In previous versions of the GNU Fortran compiler, the type specifiers for Cray pointees were incorrectly overwritten by the type specifiers of components with the same name. Consequently, compiling failed with an error message. This bug has been fixed, and the Cray pointers are now handled correctly. Enhancement BZ# 1148120 The gcc hotpatch attribute implements support for online patching of multithreaded code on System z binaries. With this update, it is possible to select specific functions for hotpatching using a "function attribute" and to enable hotpatching for all functions using the "-mhotpatch=" command-line option. As enabled hotpatching has a negative impact on software size and performance, it is recommended to use hotpatching for specific functions and not to enable hotpatch support in general. Users of gcc are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-gcc
Chapter 2. Installing Red Hat Developer Hub in an air-gapped environment with the Operator | Chapter 2. Installing Red Hat Developer Hub in an air-gapped environment with the Operator On an OpenShift Container Platform cluster operating on a restricted network, public resources are not available. However, deploying the Red Hat Developer Hub Operator and running Developer Hub requires the following public resources: Operator images (bundle, operator, catalog) Operands images (RHDH, PostgreSQL) To make these resources available, replace them with their equivalent resources in a mirror registry accessible to the OpenShift Container Platform cluster. You can use a helper script that mirrors the necessary images and provides the necessary configuration to ensure those images will be used when installing the Red Hat Developer Hub Operator and creating Developer Hub instances. Note This script requires a target mirror registry which you should already have installed if your OpenShift Container Platform cluster is ready to operate on a restricted network. However, if you are preparing your cluster for disconnected usage, you can use the script to deploy a mirror registry in the cluster and use it for the mirroring process. Prerequisites You have an active OpenShift CLI ( oc ) session with administrative permissions to the OpenShift Container Platform cluster. See Getting started with the OpenShift CLI . You have an active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See Red Hat Container Registry Authentication . The opm CLI tool is installed. See Installing the opm CLI . The jq package is installed. See Download jq . Podman is installed. See Podman Installation Instructions . Skopeo version 1.14 or higher is installed. See Installing Skopeo . If you already have a mirror registry for your cluster, an active Skopeo session with administrative access to this registry is required. See Authenticating to a registry and Mirroring images for a disconnected installation . Note The internal OpenShift Container Platform cluster image registry cannot be used as a target mirror registry. See About the mirror registry . If you prefer to create your own mirror registry, see Creating a mirror registry with mirror registry for Red Hat OpenShift . If you do not already have a mirror registry, you can use the helper script to create one for you and you need the following additional prerequisites: The cURL package is installed. For Red Hat Enterprise Linux, the curl command is available by installing the curl package. To use curl for other platforms, see the cURL website . The htpasswd command is available. For Red Hat Enterprise Linux, the htpasswd command is available by installing the httpd-tools package. Procedure Download and run the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh ( source ). 
curl -sSLO https://raw.githubusercontent.com/redhat-developer/rhdh-operator/release-1.3/.rhdh/scripts/prepare-restricted-environment.sh # if you do not already have a target mirror registry # and want the script to create one for you # use the following example: bash prepare-restricted-environment.sh \ --prod_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.17" \ --prod_operator_package_name "rhdh" \ --prod_operator_bundle_name "rhdh-operator" \ --prod_operator_version "v1.3.5" # if you already have a target mirror registry # use the following example: bash prepare-restricted-environment.sh \ --prod_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.17" \ --prod_operator_package_name "rhdh" \ --prod_operator_bundle_name "rhdh-operator" \ --prod_operator_version "v1.3.5" \ --use_existing_mirror_registry "my_registry" Note The script can take several minutes to complete as it copies multiple images to the mirror registry. | [
"curl -sSLO https://raw.githubusercontent.com/redhat-developer/rhdh-operator/release-1.3/.rhdh/scripts/prepare-restricted-environment.sh if you do not already have a target mirror registry and want the script to create one for you use the following example: bash prepare-restricted-environment.sh --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.17\" --prod_operator_package_name \"rhdh\" --prod_operator_bundle_name \"rhdh-operator\" --prod_operator_version \"v1.3.5\" if you already have a target mirror registry use the following example: bash prepare-restricted-environment.sh --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.17\" --prod_operator_package_name \"rhdh\" --prod_operator_bundle_name \"rhdh-operator\" --prod_operator_version \"v1.3.5\" --use_existing_mirror_registry \"my_registry\""
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_in_an_air-gapped_environment/proc-install-rhdh-airgapped-environment-ocp-operator_title-install-rhdh-air-grapped |
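Before running the script, it is worth confirming that the tooling listed in the prerequisites is actually authenticated, because the script pulls the Operator and operand images from registry.redhat.io and pushes them to your mirror registry. The following commands are only an illustrative sketch of that check, not part of the documented procedure; they assume you authenticate with your Red Hat Customer Portal credentials and that your OpenShift CLI session has administrative rights:

podman login registry.redhat.io          # authenticate Podman to the Red Hat registry
skopeo login registry.redhat.io          # authenticate Skopeo, which performs the image copies
oc whoami                                # confirm the active OpenShift CLI session
oc auth can-i '*' '*' --all-namespaces   # rough check for administrative permissions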
24.5.4. Using the lscpu Command | 24.5.4. Using the lscpu Command The lscpu command allows you to list information about CPUs that are present in the system, including the number of CPUs, their architecture, vendor, family, model, CPU caches, etc. To do so, type the following at a shell prompt: lscpu For example: For a complete list of available command-line options, see the lscpu (1) manual page. | [
"~]USD lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 23 Stepping: 7 CPU MHz: 1998.000 BogoMIPS: 4999.98 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 3072K NUMA node0 CPU(s): 0-3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-sysinfo-hardware-lscpu |
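The output above is the human-readable default. When per-CPU details are needed for scripting, lscpu also offers machine-readable modes; the options available depend on the installed util-linux version, so treat the following as an illustration rather than a guaranteed interface:

lscpu -p=CPU,CORE,SOCKET,NODE   # CSV output, one line per logical CPU
lscpu --extended                # tabular per-CPU view, including online/offline state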
Chapter 4. Red Hat OpenShift Cluster Manager | Chapter 4. Red Hat OpenShift Cluster Manager Red Hat OpenShift Cluster Manager is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. OpenShift Cluster Manager guides you to install OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It is also responsible for managing both OpenShift Container Platform clusters after self-installation as well as your ROSA and OpenShift Dedicated clusters. You can use OpenShift Cluster Manager to do the following actions: Create new clusters View cluster details and metrics Manage your clusters with tasks such as scaling, changing node labels, networking, authentication Manage access control Monitor clusters Schedule upgrades 4.1. Accessing Red Hat OpenShift Cluster Manager You can access OpenShift Cluster Manager with your configured OpenShift account. Prerequisites You have an account that is part of an OpenShift organization. If you are creating a cluster, your organization has specified quota. Procedure Log in to OpenShift Cluster Manager using your login credentials. 4.2. General actions On the top right of the cluster page, there are some actions that a user can perform on the entire cluster: Open console launches a web console so that the cluster owner can issue commands to the cluster. Actions drop-down menu allows the cluster owner to rename the display name of the cluster, change the amount of load balancers and persistent storage on the cluster, if applicable, manually set the node count, and delete the cluster. Refresh icon forces a refresh of the cluster. 4.3. Cluster tabs Selecting an active, installed cluster shows tabs associated with that cluster. The following tabs display after the cluster's installation completes: Overview Access control Add-ons Networking Insights Advisor Machine pools Support Settings 4.3.1. Overview tab The Overview tab provides information about how your cluster was configured: Cluster ID is the unique identification for the created cluster. This ID can be used when issuing commands to the cluster from the command line. Type shows the OpenShift version that the cluster is using. Region is the server region. Provider shows which cloud provider that the cluster was built upon. Availability shows which type of availability zone that the cluster uses, either single or multizone. Version is the OpenShift version that is installed on the cluster. If there is an update available, you can update from this field. Created at shows the date and time that the cluster was created. Owner identifies who created the cluster and has owner rights. Subscription type shows the subscription model that was selected on creation. Infrastructure type is the type of account that the cluster uses. Status displays the current status of the cluster. Total vCPU shows the total available virtual CPU for this cluster. Total memory shows the total available memory for this cluster. Load balancers Persistent storage displays the amount of storage that is available on this cluster. Nodes shows the actual and desired nodes on the cluster. These numbers might not match due to cluster scaling. Network field shows the address and prefixes for network connectivity. Resource usage section of the tab displays the resources in use with a graph. 
Advisor recommendations section gives insight in relation to security, performance, availability, and stability. This section requires the use of remote health functionality. See Using Insights to identify issues with your cluster in the Additional resources section. Cluster history section shows everything that has been done with the cluster including creation and when a new version is identified. 4.3.2. Access control tab The Access control tab allows the cluster owner to set up an identity provider, grant elevated permissions, and grant roles to other users. Prerequisites You must be the cluster owner or have the correct permissions to grant roles on the cluster. Procedure Select the Grant role button. Enter the Red Hat account login for the user that you wish to grant a role on the cluster. Select the Grant role button on the dialog box. The dialog box closes, and the selected user shows the "Cluster Editor" access. 4.3.3. Add-ons tab The Add-ons tab displays all of the optional add-ons that can be added to the cluster. Select the desired add-on, and then select Install below the description for the add-on that displays. 4.3.4. Insights Advisor tab The Insights Advisor tab uses the Remote Health functionality of the OpenShift Container Platform to identify and mitigate risks to security, performance, availability, and stability. See Using Insights to identify issues with your cluster in the OpenShift Container Platform documentation. 4.3.5. Machine pools tab The Machine pools tab allows the cluster owner to create new machine pools, if there is enough available quota, or edit an existing machine pool. Selecting the More options > Scale opens the "Edit node count" dialog. In this dialog, you can change the node count per availability zone. If autoscaling is enabled, you can also set the range for autoscaling. 4.3.6. Support tab In the Support tab, you can add notification contacts for individuals that should receive cluster notifications. The username or email address that you provide must relate to a user account in the Red Hat organization where the cluster is deployed. Also from this tab, you can open a support case to request technical support for your cluster. 4.3.7. Settings tab The Settings tab provides a few options for the cluster owner: Monitoring , which is enabled by default, allows for reporting done on user-defined actions. See Understanding the monitoring stack . Update strategy allows you to determine if the cluster automatically updates on a certain day of the week at a specified time or if all updates are scheduled manually. Node draining sets the duration that protected workloads are respected during updates. When this duration has passed, the node is forcibly removed. Update status shows the current version and if there are any updates available. 4.4. Additional resources For the complete documentation for OpenShift Cluster Manager, see OpenShift Cluster Manager documentation . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/architecture/ocm-overview-ocp |
2.3. keepalived Scheduling Overview | 2.3. keepalived Scheduling Overview Using Keepalived provides a great deal of flexibility in distributing traffic across real servers, in part due to the variety of scheduling algorithms supported. Load balancing is superior to less flexible methods, such as Round-Robin DNS where the hierarchical nature of DNS and the caching by client machines can lead to load imbalances. Additionally, the low-level filtering employed by the LVS router has advantages over application-level request forwarding because balancing loads at the network packet level causes minimal computational overhead and allows for greater scalability. Using assigned weights gives arbitrary priorities to individual machines. Using this form of scheduling, it is possible to create a group of real servers using a variety of hardware and software combinations and the active router can evenly load each real server. The scheduling mechanism for Keepalived is provided by a collection of kernel patches called IP Virtual Server or IPVS modules. These modules enable layer 4 ( L4 ) transport layer switching, which is designed to work well with multiple servers on a single IP address. To track and route packets to the real servers efficiently, IPVS builds an IPVS table in the kernel. This table is used by the active LVS router to redirect requests from a virtual server address to and returning from real servers in the pool. 2.3.1. Keepalived Scheduling Algorithms The structure that the IPVS table takes depends on the scheduling algorithm that the administrator chooses for any given virtual server. To allow for maximum flexibility in the types of services you can cluster and how these services are scheduled, Keepalived supports the following scheduling algorithms listed below. Round-Robin Scheduling Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular due to the fact that it is network-connection based and not host-based. Load Balancer round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. Weighted Round-Robin Scheduling Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information. Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests. Least-Connection Distributes more requests to real servers with fewer active connections. Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers have different capabilities, weighted least-connection scheduling is a better choice. Weighted Least-Connections Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. 
The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity. Locality-Based Least-Connection Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes the packets for an IP address to the server for that address unless that server is above its capacity and has a server in its half load, in which case it assigns the IP address to the least loaded real server. Locality-Based Least-Connection Scheduling with Replication Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication. Destination Hash Scheduling Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster. Source Hash Scheduling Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls. Shortest Expected Delay Distributes connection requests to the server that has the shortest delay expected based on number of connections on a given server divided by its assigned weight. Never Queue A two-pronged scheduler that first finds and sends connection requests to a server that is idling, or has no connections. If there are no idling servers, the scheduler defaults to the server that has the least delay in the same manner as Shortest Expected Delay . 2.3.2. Server Weight and Scheduling The administrator of Load Balancer can assign a weight to each node in the real server pool. This weight is an integer value which is factored into any weight-aware scheduling algorithms (such as weighted least-connections) and helps the LVS router more evenly load hardware with different capabilities. Weights work as a ratio relative to one another. For instance, if one real server has a weight of 1 and the other server has a weight of 5, then the server with a weight of 5 gets 5 connections for every 1 connection the other server gets. The default value for a real server weight is 1. Although adding weight to varying hardware configurations in a real server pool can help load-balance the cluster more efficiently, it can cause temporary imbalances when a real server is introduced to the real server pool and the virtual server is scheduled using weighted least-connections. For example, suppose there are three servers in the real server pool. Servers A and B are weighted at 1 and the third, server C, is weighted at 2. If server C goes down for any reason, servers A and B evenly distributes the abandoned load. 
However, once server C comes back online, the LVS router sees it has zero connections and floods the server with all incoming requests until it is on par with servers A and B. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-scheduling-VSA |
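Keepalived programs these schedulers and weights into the kernel IPVS table, so the result can be inspected, and experimented with, using the ipvsadm utility. The commands below are a sketch with made-up addresses (192.168.0.10 as the virtual server, 10.0.0.2 as a real server); in a production Keepalived deployment the equivalent settings belong in keepalived.conf rather than being entered by hand:

ipvsadm -L -n                                          # list the current IPVS table without resolving names
ipvsadm -A -t 192.168.0.10:80 -s wlc                   # define a virtual service using weighted least-connections
ipvsadm -a -t 192.168.0.10:80 -r 10.0.0.2:80 -m -w 5   # add a real server with weight 5 using NAT forwarding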
Chapter 6. View OpenShift Data Foundation Topology | Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_vmware_vsphere/viewing-odf-topology_rhodf |
Chapter 9. Removing the kubeadmin user | Chapter 9. Removing the kubeadmin user 9.1. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 9.2. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: $ oc delete secrets kubeadmin -n kube-system | [
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/removing-kubeadmin |
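Because the deletion cannot be undone, confirm that another user already holds cluster-admin before removing the kubeadmin secret. A minimal sketch, where <username> is a placeholder for the administrator you created through your identity provider:

oc adm policy add-cluster-role-to-user cluster-admin <username>   # grant cluster-admin to the new user
oc get clusterrolebindings -o wide | grep cluster-admin           # verify the binding before deleting kubeadmin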
Configuring your Red Hat build of Quarkus applications by using a YAML file | Configuring your Red Hat build of Quarkus applications by using a YAML file Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | [
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-config-yaml</artifactId> </dependency>",
"./mvnw quarkus:add-extension -Dextensions=\"quarkus-config-yaml\"",
"Properties that configure the JDBC data source driver of your PostgreSQL data source quarkus: datasource: db-kind: postgresql jdbc: url: jdbc:postgresql://localhost:5432/quarkus_test username: quarkus_test password: quarkus_test Property that configures the URL of the endpoint to which the REST client sends requests quarkus: rest-client: org.acme.rest.client.ExtensionsService: url: https://stage.code.quarkus.io/api Property that configures the log message level for your application For configuration property names that use quotes, do not split the string inside the quotes quarkus: log: category: \"io.quarkus.category\": level: INFO",
"\"%dev\": quarkus: datasource: db-kind: postgresql jdbc: url: jdbc:postgresql://localhost:5432/quarkus_test username: quarkus_test password: quarkus_test",
"mach: 3 x: factor: 2.23694 display: mach: USD{mach} unit: name: \"mph\" factor: USD{x.factor}",
"├── config │ └── application.yaml ├── my-app-runner",
"quarkus: http: cors: ~: true methods: GET,PUT,POST",
"./mvnw quarkus:dev"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html-single/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_yaml_file/index |
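Properties defined in application.yaml follow the standard Quarkus configuration ordering, so system properties and environment variables override them at runtime without editing the file. A small sketch, assuming the application was packaged with the default fast-jar layout and that quarkus.http.port is the property you want to change:

java -Dquarkus.http.port=8081 -jar target/quarkus-app/quarkus-run.jar   # system property overrides the YAML value
QUARKUS_HTTP_PORT=8081 java -jar target/quarkus-app/quarkus-run.jar     # the same override as an environment variable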
Chapter 20. Storage | Chapter 20. Storage Targetd plug-in from the libStorageMgmt API, see the section called "Storage Array Management with libStorageMgmt API" LSI Syncro CS HA-DAS adapters, see the section called "Support for LSI Syncro" DIF/DIX, see the section called "DIF/DIX Support" | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-tp-storage |
function::usymfileline | function::usymfileline Name function::usymfileline - Return the file name and line number of an address. Synopsis Arguments addr The address to translate. Description Returns the file name and the (approximate) line number of the given address, if known. If the file name or the line number cannot be found, the hex string representation of the address will be returned. | [
"usymfileline:string(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-usymfileline |
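The function is normally called from a user-space probe so that the address belongs to the traced process. The one-liner below is only a sketch: it assumes debuginfo for /bin/ls is installed and uses uaddr() to obtain the current user-space address:

stap -e 'probe process("/bin/ls").function("main") { printf("%s\n", usymfileline(uaddr())); exit() }' -c /bin/ls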
Chapter 3. Red Hat build of OpenJDK features | Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 21 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from earlier Red Hat build of OpenJDK 21 releases. Note For all the other changes and security fixes, see OpenJDK 21.0.6 Released . Red Hat build of OpenJDK enhancements Red Hat build of OpenJDK 21 provides enhancements to features originally created in earlier releases of Red Hat build of OpenJDK. Option for jar command to avoid overwriting files when extracting an archive In earlier Red Hat build of OpenJDK releases, when the jar tool extracted files from an archive, the jar tool overwrote any existing files with the same name in the target directory. Red Hat build of OpenJDK 21.0.6 adds a new ‐k (or ‐‐keep-old-files ) option that you can use to ensure that the jar tool does not overwrite existing files. You can specify this new option in either short or long format. For example: jar xkf myfile.jar jar --extract ‐‐keep-old-files ‐‐file myfile.jar Note In Red Hat build of OpenJDK 21.0.6, the jar tool retains the old behavior by default. If you do not explicitly specify the ‐k (or ‐‐keep-old-files ) option, the jar tool automatically overwrites any existing files with the same name. See JDK-8335912 (JDK Bug System) and JDK bug system reference ID: JDK-8337499. IANA time zone database updated to version 2024b In Red Hat build of OpenJDK 21.0.6, the in-tree copy of the Internet Assigned Numbers Authority (IANA) time zone database is updated to version 2024b. This update is primarily concerned with improving historical data for Mexico, Mongolia, and Portugal. This update to the IANA database also includes the following changes: Asia/Choibalsan is an alias for Asia/Ulaanbaatar . The Middle European Time (MET) time zone is equal to Central European Time (CET). Some legacy time-zone IDs are mapped to geographical names rather than fixed offsets: Eastern Standard Time (EST) is mapped to America/Panama rather than -5:00 . Mountain Standard Time (MST) is mapped to America/Phoenix rather than -7:00 . Hawaii Standard Time (HST) is mapped to Pacific/Honolulu rather than -10:00 . Red Hat build of OpenJDK overrides the change in the legacy time-zone ID mappings by retaining the existing fixed-offset mapping. See JDK-8339637 (JDK Bug System) . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.6/rn_openjdk-2106-features_openjdk |
Chapter 18. Reference | Chapter 18. Reference 18.1. Data Grid Server 8.5.2 Readme Information about Data Grid Server 14.0.21.Final-redhat-00001 distribution. 18.1.1. Requirements Data Grid Server requires JDK 11 or later. 18.1.2. Starting servers Use the server script to run Data Grid Server instances. Unix / Linux Windows Tip Include the --help or -h option to view command arguments. 18.1.3. Stopping servers Use the shutdown command with the CLI to perform a graceful shutdown. Alternatively, enter Ctrl-C from the terminal to interrupt the server process or kill it via the TERM signal. 18.1.4. Configuration Server configuration extends Data Grid configuration with the following server-specific elements: cache-container Defines cache containers for managing cache lifecycles. endpoints Enables and configures endpoint connectors for client protocols. security Configures endpoint security realms. socket-bindings Maps endpoint connectors to interfaces and ports. The default configuration file is USDRHDG_HOME/server/conf/infinispan.xml . infinispan.xml Provides configuration to run Data Grid Server using default cache container with statistics and authorization enabled. Demonstrates how to set up authentication and authorization using security realms. Data Grid provides other ready-to-use configuration files that are primarily for development and testing purposes. USDRHDG_HOME/server/conf/ provides the following configuration files: infinispan-dev-mode.xml Configures Data Grid Server specifically for cross-site replication with IP multicast discovery. The configuration provides BASIC authentication to connect to the Hot Rod and REST endpoints. The configuration is designed for development mode and should not be used in production environments. infinispan-local.xml Configures Data Grid Server without clustering capabilities. infinispan-xsite.xml Configures cross-site replication on a single host and uses IP multicast for discovery. infinispan-memcached.xml Configures Data Grid Server to behave like a default Memcached server, listening on port 11221 and without authentication. infinispan-resp.xml Configures Data Grid Server to behave like a default Redis server, listening on port 6379 and without authentication. log4j2.xml Configures Data Grid Server logging. Use different configuration files with the -c argument, as in the following example that starts a server without clustering capabilities: Unix / Linux Windows 18.1.5. Bind address Data Grid Server binds to the loopback IP address localhost on your network by default. Use the -b argument to set a different IP address, as in the following example that binds to all network interfaces: Unix / Linux Windows 18.1.6. Bind port Data Grid Server listens on port 11222 by default. Use the -p argument to set an alternative port: Unix / Linux Windows 18.1.7. Clustering address Data Grid Server configuration defines cluster transport so multiple instances on the same network discover each other and automatically form clusters. Use the -k argument to change the IP address for cluster traffic: Unix / Linux Windows 18.1.8. Cluster stacks JGroups stacks configure the protocols for cluster transport. Data Grid Server uses the tcp stack by default. Use alternative cluster stacks with the -j argument, as in the following example that uses UDP for cluster transport: Unix / Linux Windows 18.1.9. Authentication Data Grid Server requires authentication. Create a username and password with the CLI as follows: Unix / Linux Windows 18.1.10. 
Server home directory Data Grid Server uses infinispan.server.home.path to locate the contents of the server distribution on the host filesystem. The server home directory, referred to as USDRHDG_HOME , contains the following folders: Folder Description /bin Contains scripts to start servers and CLI. /boot Contains JAR files to boot servers. /docs Provides configuration examples, schemas, component licenses, and other resources. /lib Contains JAR files that servers require internally. Do not place custom JAR files in this folder. /server Provides a root folder for Data Grid Server instances. /static Contains static resources for Data Grid Console. 18.1.11. Server root directory Data Grid Server uses infinispan.server.root.path to locate configuration files and data for Data Grid Server instances. You can create multiple server root folders in the same directory or in different directories and then specify the locations with the -s or --server-root argument, as in the following example: Unix / Linux Windows Each server root directory contains the following folders: Folder Description System property override /server/conf Contains server configuration files. infinispan.server.config.path /server/data Contains data files organized by container name. infinispan.server.data.path /server/lib Contains server extension files. This directory is scanned recursively and used as a classpath. infinispan.server.lib.path Separate multiple paths with the following delimiters: : on Unix / Linux ; on Windows /server/log Contains server log files. infinispan.server.log.path 18.1.12. Logging Configure Data Grid Server logging with the log4j2.xml file in the server/conf folder. Use the --logging-config=<path_to_logfile> argument to use custom paths, as follows: Unix / Linux Tip To ensure custom paths take effect, do not use the ~ shortcut. Windows | [
"USDRHDG_HOME/bin/server.sh",
"USDRHDG_HOME\\bin\\server.bat",
"USDRHDG_HOME/bin/server.sh -c infinispan-local.xml",
"USDRHDG_HOME\\bin\\server.bat -c infinispan-local.xml",
"USDRHDG_HOME/bin/server.sh -b 0.0.0.0",
"USDRHDG_HOME\\bin\\server.bat -b 0.0.0.0",
"USDRHDG_HOME/bin/server.sh -p 30000",
"USDRHDG_HOME\\bin\\server.bat -p 30000",
"USDRHDG_HOME/bin/server.sh -k 192.168.1.100",
"USDRHDG_HOME\\bin\\server.bat -k 192.168.1.100",
"USDRHDG_HOME/bin/server.sh -j udp",
"USDRHDG_HOME\\bin\\server.bat -j udp",
"USDRHDG_HOME/bin/cli.sh user create username -p \"qwer1234!\"",
"USDRHDG_HOME\\bin\\cli.bat user create username -p \"qwer1234!\"",
"├── bin ├── boot ├── docs ├── lib ├── server └── static",
"USDRHDG_HOME/bin/server.sh -s server2",
"USDRHDG_HOME\\bin\\server.bat -s server2",
"├── server │ ├── conf │ ├── data │ ├── lib │ └── log",
"USDRHDG_HOME/bin/server.sh --logging-config=/path/to/log4j2.xml",
"USDRHDG_HOME\\bin\\server.bat --logging-config=path\\to\\log4j2.xml"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/server_reference |
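The arguments described above can be combined in a single invocation. An illustrative example, assuming a Unix host and a previously created server root named server2:

$RHDG_HOME/bin/server.sh -c infinispan-xsite.xml -b 0.0.0.0 -p 11222 -s server2   # alternate configuration, all interfaces, default port, custom server root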
32.5.2. Check the memory dump | 32.5.2. Check the memory dump When the memory dump is finished and Red Hat Enterprise Linux is rebooted, you can check the memory dump with the crash command by opening the special device file. The following example shows how to check the memory dump which is saved on /dev/sdb1. Example 32.9. Checking Dump Integrity | [
"crash /usr/lib/debug/lib/modules/2.6.32-358.el6.x86_64/vmlinux /dev/sdb1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-ppc-sadump-check-memory-dump |
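After crash opens the dump, analysis continues at its interactive prompt. The subcommands below are a common starting point rather than a required sequence; they assume the session from Example 32.9 opened successfully:

crash> sys    # summarize system data, including the panic message
crash> log    # print the kernel log buffer captured in the dump
crash> bt     # backtrace of the task that was active when the dump was taken
crash> ps     # list the processes recorded in the dump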
7.261. util-linux-ng | 7.261. util-linux-ng 7.261.1. RHSA-2013:0517 - Low: util-linux-ng security, bug fix and enhancement update Updated util-linux-ng packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The util-linux-ng packages contain a large variety of low-level system utilities that are necessary for a Linux operating system to function. Security Fix CVE-2013-0157 An information disclosure flaw was found in the way the mount command reported errors. A local attacker could use this flaw to determine the existence of files and directories they do not have access to. Bug Fixes BZ# 790728 Previously, the blkid utility ignored swap area UUIDs if the first byte was zero. As a consequence, the swap areas could not be addressed by UUIDs; for example, from the /etc/fstab file. The libblkd library has been fixed and now swap partitions are labeled with a valid UUID value if the first byte is zero. BZ# 818621 Previously, the lsblk utility opened block devices to check if the device was in read-only mode, although the information was available in the /sys file system. This resulted in unexpected SELinux alerts and unnecessary open() calls. Now, the lsblk utility does not perform unnecessary opening operations and no longer reads the information from the /sys file system. BZ#736245 On a non-uniform CPU configuration, for example on a system with two sockets with a different number of cores, the lscpu command failed unexpectedly with a segmentation fault and a core dump was generated. After this update, when executing the lscpu command on such a configuration, the correct result is printed and no core dump is generated. BZ#837935 On a system with a large number of active processors, the lscpu command failed unexpectedly with a segmentation fault and a core dump was generated. This bug is now fixed and the lscpu command now works as expected on this configuration. BZ#819945 Executing the hwclock --systz command to reset the system time based on the current time zone caused the clock to be incorrectly adjusted by one hour. This was because hwclock did not adjust the system time during boot according to the "warp clock" semantic described in the settimeofday(2) man page. With this update, hwclock correctly sets the system time when required. BZ#845477 When SElinux options were specified both in the /etc/fstab file and on the command line, mounting failed and the kernel logged the following error upon running dmesg : The handling of SElinux options has been changed so that options on the command line now replace options given in the /etc/fstab file and as a result, devices can be mounted successfully. BZ#845971 Due to a change in the search order of the mount utility, while reading the /etc/fstab file, the mount command returned a device before a directory. With this update, the search order has been modified and mount now works as expected. BZ#858009 Previously, any new login or logout sequence by a telnet client caused the /var/run/utmp file to increase by one record on the telnetd machine. As a consequence, the /var/run/utmp file grew without a limit. 
As a result of trying to search though a huge /var/run/utmp file, the machine running telnetd could experience more severe side-effects over time. For example, the telnetd process could become unresponsive or the overall system performance could degrade. The telnetd now creates a proper record in /var/run/utmp before starting the logging process. As a result, the /var/run/utmp does not grow without a limit on each new login or logout sequence of a telnet session. BZ#730891, BZ# 783514 , BZ#809139, BZ#820183, BZ# 839281 Man pages of several utilities included in the package have been updated to fix minor mistakes and add entries for previously undocumented functionalities. Enhancements BZ#719927 A new --compare option for hwclock to compare the offset between system time and hardware clock has been added due to a discontinued distribution of adjtimex in Red Hat Enterprise Linux 6.0 and later, which had previously provided this option. BZ#809449 The lsblk command now supports a new option, --inverse , used to print dependencies between block devices in reverse order. This feature is required to properly reboot or shut down systems with a configured cluster. BZ#823008 The lscpu utility, which displays detailed information about the available CPUs, has been updated to include numerous new features. Also, a new utility, chcpu , has been added, which allows the user to change the CPU state (online or offline, standby or active, and other states), disable and enable CPUs, and configure specified CPUs. For more information about these utilities, refer to the lscpu(1) and chcpu(8) man pages. Users of util-linux-ng are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | [
"SELinux: duplicate or incompatible mount options"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/util-linux-ng |
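The enhancements listed above are visible directly on the command line. A few illustrative invocations, assuming a Red Hat Enterprise Linux 6 system with the updated packages and root privileges:

hwclock --compare           # repeatedly compare the hardware clock against system time
lsblk --inverse /dev/sdb1   # print dependencies from a partition up to its parent devices
chcpu --disable 2           # take CPU 2 offline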
Automating system administration by using RHEL System Roles in RHEL 7.9 | Automating system administration by using RHEL System Roles in RHEL 7.9 Red Hat Enterprise Linux 7 Consistent and repeatable configuration of RHEL deployments across multiple hosts with Red Hat Ansible Automation Platform playbooks Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/index |
Chapter 12. Creating and executing DMN and BPMN models using Maven | Chapter 12. Creating and executing DMN and BPMN models using Maven You can use Maven archetypes to develop DMN and BPMN models in VS Code using the Red Hat Process Automation Manager VS Code extension instead of Business Central. You can then integrate your archetypes with your Red Hat Process Automation Manager decision and process services in Business Central as needed. This method of developing DMN and BPMN models is helpful for building new business applications using the Red Hat Process Automation Manager VS Code extension. Procedure In a command terminal, navigate to a local folder where you want to store the new Red Hat Process Automation Manager project. Enter the following command to use a Maven archetype to generate a project within a defined folder: Generating a project using Maven archetype This command generates a Maven project with required dependencies and generates required directories and files to build your business application. You can use the Git version control system (recommended) when developing a project. If you want to generate multiple projects in the same directory, specify the artifactId and groupId of the generated business application by adding -DgroupId=<groupid> -DartifactId=<artifactId> to the command. In your VS Code IDE, click File , select Open Folder , and navigate to the folder that is generated using the command. Before creating the first asset, set a package for your business application, for example, org.kie.businessapp , and create respective directories in the following paths: PROJECT_HOME/src/main/java PROJECT_HOME/src/main/resources PROJECT_HOME/src/test/resources For example, you can create PROJECT_HOME/src/main/java/org/kie/businessapp for the org.kie.businessapp package. Use VS Code to create assets for your business application. You can create the assets supported by the Red Hat Process Automation Manager VS Code extension in the following ways: To create a business process, create a new file with .bpmn or .bpmn2 in PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as Process.bpmn . To create a DMN model, create a new file with .dmn in PROJECT_HOME/src/main/resources/org/kie/businessapp directory, such as AgeDecision.dmn . To create a test scenario simulation model, create a new file with .scesim in PROJECT_HOME/src/test/resources/org/kie/businessapp directory, such as TestAgeScenario.scesim . After you create the assets in your Maven archetype, navigate to the root directory (which contains pom.xml ) of the project in the command line and run the following command to build the knowledge JAR (KJAR) of your project: If the build fails, address any problems described in the command line error messages and try again to validate the project until the build is successful. However, if the build is successful, you can find the artifact of your business application in the PROJECT_HOME/target directory. Note Use the mvn clean install command often to validate your project after each major change during development. You can deploy the generated knowledge JAR (KJAR) of your business application on a running KIE Server using the REST API. For more information about using the REST API, see Interacting with Red Hat Process Automation Manager using KIE APIs . | [
"mvn archetype:generate -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.67.0.Final-redhat-00024",
"mvn clean install"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/proc-dmn-bpmn-maven-create_getting-started-process-services |
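The final step mentions deploying the KJAR to a running KIE Server over its REST API. The request below is only a rough sketch of that call: the host, credentials, container name, and GAV coordinates are placeholders, and the exact endpoint and payload should be checked against the KIE APIs document referenced above:

curl -u <user>:<password> -H "Content-Type: application/json" -X PUT \
  -d '{"container-id":"businessapp","release-id":{"group-id":"org.kie.businessapp","artifact-id":"businessapp","version":"1.0-SNAPSHOT"}}' \
  http://<kie-server-host>:8080/kie-server/services/rest/server/containers/businessapp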
4.244. python-dmidecode | 4.244. python-dmidecode 4.244.1. RHBA-2011:1589 - python-dmidecode bug fix update An updated python-dmidecode package that fixes various bugs is now available for Red Hat Enterprise Linux 6. The python-dmidecode package provides a python extension module that uses the code-base of the dmidecode utility and presents the data as python data structures or as XML data using the libxml2 library. The python-dmidecode package has been upgraded to upstream version 3.10.13, which provides a number of bug fixes over the previous version. (BZ# 621567 ) Bug Fixes BZ# 627901 When trying to identify the processor type by performing a string comparison, Python terminated with a segmentation fault. This was caused by DMI tables which did not report the CPU processor information as a string and returned a NULL value instead. This update adds additional checks for NULL values before doing the string comparison. BZ# 646429 Previously, when calling the memcpy() function on an IBM System z machine that was under heavy memory load, a SIGILL signal was triggered. As a consequence, the complete Python interpreter core dumped. A signal handler was added to properly handle heavy memory loads. BZ# 667363 Prior to this update, when running the rhn_register utility, providing a valid user name and password, and clicking the Forward button, the tool terminated unexpectedly with a segmentation fault. This was caused by the dmi_processor_id() function not checking whether the version pointer was NULL. This update adds additional checks for NULL values, fixing the problem. All users of python-dmidecode are advised to upgrade to this updated package, which resolves these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-dmidecode
4.334. usbutils | 4.334. usbutils 4.334.1. RHBA-2011:1646 - usbutils bug fix and enhancement update An updated usbutils package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The usbutils package contains utilities for inspecting devices connected to a USB bus. The usbutils package has been upgraded to upstream version 003, which adds support for USB3 devices. Note: warning messages about short transfer on control endpoint and stalled endpoint can, under certain circumstances, be logged after the upgrade. This is the standard behavior of the xHCI driver and these messages can be safely ignored. This update also provides a number of bug fixes and enhancements over the previous version. (BZ# 725973 , BZ# 725096 ) Bug Fixes BZ# 725982 Previously, when running the "lsusb -t" command in a KVM guest, the lsusb utility terminated with a segmentation fault. The utility has been modified and now lists USB devices correctly. BZ# 730671 The FILES item in the lsusb(8) manual page displayed an incorrect path to the usb.ids file. This path has been changed to the correct /usr/share/hwdata/usb.ids path. All users of usbutils are advised to upgrade to this updated usbutils package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/usbutils
23.8. Enforcing a Specific Authentication Indicator When Obtaining a Ticket from the KDC | 23.8. Enforcing a Specific Authentication Indicator When Obtaining a Ticket from the KDC To enforce a specific authentication indicator on: A host object, execute: A Kerberos service, execute: To set multiple authentication indicators, specify the --auth-ind parameter multiple times. Warning Setting an authentication indicator to the HTTP/ IdM_master service causes the IdM master to fail. Additionally, the utilities provided by IdM do not enable you to restore the master. Example 23.2. Enforcing the pkinit Indicator on a Specific Host The following command configures that only the users authenticated through a smart card can obtain a service ticket for the host.idm.example.com host: The setting above ensures that the ticket-granting ticket (TGT) of a user requesting a Kerberos ticket, contains the pkinit authentication indicator. | [
"ipa host-mod host_name --auth-ind= indicator",
"ipa service-mod service / host_name --auth-ind= indicator",
"ipa host-mod host.idm.example.com --auth-ind=pkinit"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/enforcing-a-specific-authentication-indicator-when-obtaining-a-ticket-from-the-kdc |
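Because --auth-ind can be given more than once, requiring several authentication strengths for one entry is a single command. An illustrative example that reuses the host from Example 23.2 and assumes otp is also an acceptable indicator in your environment:

ipa host-mod host.idm.example.com --auth-ind=pkinit --auth-ind=otp   # accept tickets backed by smart card or two-factor authentication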
Chapter 1. Adding a custom application configuration file to Red Hat OpenShift Container Platform | Chapter 1. Adding a custom application configuration file to Red Hat OpenShift Container Platform To access the Red Hat Developer Hub, you must add a custom application configuration file to Red Hat OpenShift Container Platform. In OpenShift Container Platform, you can use the following content as a base template to create a ConfigMap named app-config-rhdh : kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: app-config-rhdh.yaml: | app: title: Red Hat Developer Hub You can add the custom application configuration file to OpenShift Container Platform in one of the following ways: The Red Hat Developer Hub Operator The Red Hat Developer Hub Helm chart 1.1. Adding a custom application configuration file to OpenShift Container Platform using the Helm chart You can use the Red Hat Developer Hub Helm chart to add a custom application configuration file to your OpenShift Container Platform instance. Prerequisites You have created an Red Hat OpenShift Container Platform account. Procedure From the OpenShift Container Platform web console, select the ConfigMaps tab. Click Create ConfigMap . From Create ConfigMap page, select the YAML view option in Configure via and make changes to the file, if needed. Click Create . Go to the Helm tab to see the list of Helm releases. Click the overflow menu on the Helm release that you want to use and select Upgrade . Use either the Form view or YAML view to edit the Helm configuration. Using Form view Expand Root Schema Backstage chart schema Backstage parameters Extra app configuration files to inline into command arguments . Click the Add Extra app configuration files to inline into command arguments link. Enter the value in the following fields: configMapRef : app-config-rhdh filename : app-config-rhdh.yaml Click Upgrade . Using YAML view Set the value of the upstream.backstage.extraAppConfig.configMapRef and upstream.backstage.extraAppConfig.filename parameters as follows: # ... other Red Hat Developer Hub Helm Chart configurations upstream: backstage: extraAppConfig: - configMapRef: app-config-rhdh filename: app-config-rhdh.yaml # ... other Red Hat Developer Hub Helm Chart configurations Click Upgrade . 1.2. Adding a custom application configuration file to OpenShift Container Platform using the Operator A custom application configuration file is a ConfigMap object that you can use to change the configuration of your Red Hat Developer Hub instance. If you are deploying your Developer Hub instance on Red Hat OpenShift Container Platform, you can use the Red Hat Developer Hub Operator to add a custom application configuration file to your OpenShift Container Platform instance by creating the ConfigMap object and referencing it in the Developer Hub custom resource (CR). The custom application configuration file contains a sensitive environment variable, named BACKEND_SECRET . This variable contains a mandatory backend authentication key that Developer Hub uses to reference an environment variable defined in an OpenShift Container Platform secret. You must create a secret, named 'secrets-rhdh', and reference it in the Developer Hub CR. Note You are responsible for protecting your Red Hat Developer Hub installation from external and unauthorized access. Manage the backend authentication key like any other secret. 
Meet strong password requirements, do not expose it in any configuration files, and only inject it into configuration files as an environment variable. Prerequisites You have an active Red Hat OpenShift Container Platform account. Your administrator has installed the Red Hat Developer Hub Operator in OpenShift Container Platform. For more information, see Installing the Red Hat Developer Hub Operator . You have created the Red Hat Developer Hub CR in OpenShift Container Platform. Procedure From the Developer perspective in the OpenShift Container Platform web console, select the Topology view, and click the Open URL icon on the Developer Hub pod to identify your Developer Hub external URL: <RHDH_URL> . From the Developer perspective in the OpenShift Container Platform web console, select the ConfigMaps view. Click Create ConfigMap . Select the YAML view option in Configure via and use the following example as a base template to create a ConfigMap object, such as app-config-rhdh.yaml : kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: "app-config-rhdh.yaml": | app: title: Red Hat Developer Hub baseUrl: <RHDH_URL> 1 backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: "USD{BACKEND_SECRET}" 2 baseUrl: <RHDH_URL> 3 cors: origin: <RHDH_URL> 4 1 Set the external URL of your Red Hat Developer Hub instance. 2 Use an environment variable exposing an OpenShift Container Platform secret to define the mandatory Developer Hub backend authentication key. 3 Set the external URL of your Red Hat Developer Hub instance. 4 Set the external URL of your Red Hat Developer Hub instance. Click Create . Select the Secrets view. Click Create Key/value Secret . Create a secret named secrets-rhdh . Add a key named BACKEND_SECRET and a base64 encoded string as a value. Use a unique value for each Red Hat Developer Hub instance. For example, you can use the following command to generate a key from your terminal: node -p 'require("crypto").randomBytes(24).toString("base64")' Click Create . Select the Topology view. Click the overflow menu for the Red Hat Developer Hub instance that you want to use and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance. In the CR, enter the name of the custom application configuration config map as the value for the spec.application.appConfig.configMaps field, and enter the name of your secret as the value for the spec.application.extraEnvs.secrets field. For example: apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: developer-hub spec: application: appConfig: mountPath: /opt/app-root/src configMaps: - name: app-config-rhdh extraEnvs: secrets: - name: secrets-rhdh extraFiles: mountPath: /opt/app-root/src replicas: 1 route: enabled: true database: enableLocalDb: true Click Save . Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start. Click the Open URL icon to use the Red Hat Developer Hub platform with the configuration changes. Additional resources For more information about roles and responsibilities in Developer Hub, see Role-Based Access Control (RBAC) in Red Hat Developer Hub . | [
"kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: app-config-rhdh.yaml: | app: title: Red Hat Developer Hub",
"... other Red Hat Developer Hub Helm Chart configurations upstream: backstage: extraAppConfig: - configMapRef: app-config-rhdh filename: app-config-rhdh.yaml ... other Red Hat Developer Hub Helm Chart configurations",
"kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: <RHDH_URL> 1 backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: \"USD{BACKEND_SECRET}\" 2 baseUrl: <RHDH_URL> 3 cors: origin: <RHDH_URL> 4",
"node -p 'require(\"crypto\").randomBytes(24).toString(\"base64\")'",
"apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: developer-hub spec: application: appConfig: mountPath: /opt/app-root/src configMaps: - name: app-config-rhdh extraEnvs: secrets: - name: secrets-rhdh extraFiles: mountPath: /opt/app-root/src replicas: 1 route: enabled: true database: enableLocalDb: true"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/administration_guide_for_red_hat_developer_hub/assembly-add-custom-app-file-openshift_admin-rhdh |
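If you prefer the command line to the web console, the same ConfigMap and Secret can be created with oc. This is a sketch under the assumption that app-config-rhdh.yaml exists in the current directory and that <namespace> is a placeholder for the project where the Operator runs Developer Hub:

oc create configmap app-config-rhdh --from-file=app-config-rhdh.yaml -n <namespace>
oc create secret generic secrets-rhdh -n <namespace> \
  --from-literal=BACKEND_SECRET="$(node -p 'require("crypto").randomBytes(24).toString("base64")')"

The ConfigMap key is taken from the file name, which matches the app-config-rhdh.yaml key referenced by the custom resource above.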
Chapter 1. OpenShift Dedicated cluster upgrades | Chapter 1. OpenShift Dedicated cluster upgrades You can schedule automatic or manual upgrade policies to update the version of your OpenShift Dedicated clusters. Upgrading OpenShift Dedicated clusters can be done through Red Hat OpenShift Cluster Manager or OpenShift Cluster Manager CLI. Red Hat Site Reliability Engineers (SREs) monitor upgrade progress and remedy any issues encountered. 1.1. Life cycle policies and planning To plan an upgrade, review the OpenShift Dedicated update life cycle guide in the "Additional resources" section. The life cycle page includes release definitions, support and upgrade requirements, installation policy information, and life cycle dates. You can use update channels to decide which Red Hat OpenShift Container Platform minor version to update your clusters to. OpenShift Dedicated supports updates only through the stable channel. To learn more about OpenShift update channels and releases, see Understanding update channels and releases . Additional resources For more information about the OpenShift Dedicated life cycle policy, see OpenShift Dedicated update life cycle . 1.2. Understanding OpenShift Dedicated cluster upgrades When upgrades are made available for your OpenShift Dedicated cluster, you can upgrade to the newest version through Red Hat OpenShift Cluster Manager or OpenShift Cluster Manager CLI. You can set your upgrade policies on existing clusters or during cluster creation, and upgrades can be scheduled to occur automatically or manually. Important Before upgrading a Workload Identity Federation (WIF)-enabled OpenShift Dedicated on Google Cloud Platform (GCP) cluster, you must update the wif-config. For more information, see "Cluster upgrades with Workload Identity Federation (WIF)". Red Hat Site Reliability Engineers (SRE) will provide a curated list of available versions for your OpenShift Dedicated clusters. For each cluster you will be able to review the full list of available releases, as well as the corresponding release notes. OpenShift Cluster Manager will enable installation of clusters at the latest supported versions, and upgrades can be canceled at any time. You can also set a grace period for how long PodDisruptionBudget protected workloads are respected during upgrades. After this grace period, any workloads protected by PodDisruptionBudget that have not been successfully drained from a node, will be forcibly deleted. Note All Kubernetes objects and PVs in each OpenShift Dedicated cluster are backed up as part of the OpenShift Dedicated service. Application and application data backups are not a part of the OpenShift Dedicated service. Ensure you have a backup policy in place for your applications and application data prior to scheduling upgrades. Note When following a scheduled upgrade policy, there might be a delay of an hour or more before the upgrade process begins, even if it is an immediate upgrade. Additionally, the duration of the upgrade might vary based on your workload configuration. 1.2.1. Recurring upgrades Upgrades can be scheduled to occur automatically on a day and time specified by the cluster owner or administrator. Upgrades occur on a weekly basis, unless an upgrade is unavailable for that week. If you select recurring updates for your cluster, you must provide an administrator's acknowledgment. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment. 
Note Recurring upgrade policies are optional and if they are not set, the upgrade policies default to individual. 1.2.2. Individual upgrades If you opt for individual upgrades, you are responsible for updating your cluster. If you select an update version that requires approval, you must provide an administrator's acknowledgment. If your cluster version becomes outdated, it will transition to a limited support status. For more information on OpenShift life cycle policies, see OpenShift Dedicated update life cycle . 1.2.3. Upgrade notifications From OpenShift Cluster Manager console you can view your cluster's history from the Overview tab. The Upgrade states can be viewed in the service log under the Cluster history heading. Every change of state also triggers an email notification to the cluster owner and subscribed users. You will receive email notifications for the following events: An upgrade has been scheduled. An upgrade has started. An upgrade has completed. An upgrade has been canceled. Note For recurring upgrades, you will also receive email notifications before the upgrade occurs based on the following cadence: 2 week notice 1 week notice 1 day notice 1.2.4. Cluster upgrades with Workload Identity Federation (WIF) Before upgrading an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with WIF authentication type to a newer y-stream version, you must update the WIF configuration to that version as well. Failure to do so before attempting to upgrade the cluster version will result in an error. For more information on how to update a WIF configuration, see the Additional resources section. Note The update path to a brand new release of OpenShift Dedicated is not available in the stable channel until 45 to 90 days after the initial GA of a newer y-stream version. Additional resources For more information about the service log and adding cluster notification contacts, see Accessing cluster notifications in Red Hat Hybrid Cloud Console . For more information on how to update a WIF configuration, see Updating a WIF configuration . 1.3. Scheduling recurring upgrades for your cluster You can use OpenShift Cluster Manager to schedule recurring, automatic upgrades for z-stream patch versions for your OpenShift Dedicated cluster. Based on upstream changes, there might be times when no updates are released. Therefore, no upgrade occurs for that week. Procedure From OpenShift Cluster Manager , select your cluster from the clusters list. Click the Upgrade settings tab to access the upgrade operator. To schedule recurring upgrades, select Recurring updates . Provide an administrator's acknowledgment and click Approve and continue . OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment. Important Before upgrading a Workload Identity Federation (WIF)-enabled OpenShift Dedicated on Google Cloud Platform (GCP) cluster, you must update the wif-config. For more information, see "Cluster upgrades with Workload Identity Federation (WIF)". Specify the day of the week and the time you want your cluster to upgrade. Click Save . Optional: Set a grace period for Node draining by selecting a designated amount of time from the drop down list. A 1 hour grace period is set by default. To edit an existing recurring upgrade policy, edit the preferred day or start time from the Upgrade Settings tab. Click Save . To cancel a recurring upgrade policy, switch the upgrade method to individual from the Upgrade Settings tab. 
Click Save . On the Upgrade settings tab, the Upgrade status box indicates that an upgrade is scheduled. The date and time of the scheduled update is listed. 1.4. Scheduling individual upgrades for your cluster You can use OpenShift Cluster Manager to manually upgrade your OpenShift Dedicated cluster one time. Procedure From OpenShift Cluster Manager , select your cluster from the clusters list. Click the Upgrade settings tab to access the upgrade operator. You can also update your cluster from the Overview tab by clicking Update to the cluster version under the Details heading. To schedule an individual upgrade, select Individual updates . Click Update in the Update Status box. Select the version you want to upgrade your cluster to. Recommended cluster upgrades appear in the UI. To learn more about each available upgrade version, click View release notes . If you select an update version that requires approval, provide an administrator's acknowledgment and click Approve and continue . Important Before upgrading a Workload Identity Federation (WIF)-enabled OpenShift Dedicated on Google Cloud Platform (GCP) cluster, you must update the wif-config. For more information, see "Cluster upgrades with Workload Identity Federation (WIF)". Click . To schedule your upgrade: Click Upgrade now to upgrade within the hour. Click Schedule a different time and specify the date and time that you want the cluster to upgrade. Click . Review the upgrade policy and click Confirm upgrade . A confirmation appears when the cluster upgrade has been scheduled. Click Close . Optional: Set a grace period for Node draining by selecting a designated amount of time from the drop down list. A 1 hour grace period is set by default. From the Overview tab, to the cluster version, the UI notates that the upgrade has been scheduled. Click View details to view the upgrade details. If you need to cancel the scheduled upgrade, you can click Cancel this upgrade from the View Details pop-up. The same upgrade details are available on the Upgrade settings tab under the Upgrade status box. If you need to cancel the scheduled upgrade, you can click Cancel this upgrade from the Upgrade status box. Warning In the event that a CVE or other critical issue to OpenShift Dedicated is found, all clusters are upgraded within 48 hours of the fix being released. You are notified when the fix is available and informed that the cluster will be automatically upgraded at your latest preferred start time before the 48 hour window closes. You can also upgrade manually at any time before the recurring upgrade starts. | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/upgrading/osd-upgrades |
Chapter 7. Conclusion | Chapter 7. Conclusion Congratulations. In this tutorial, you learned how to incorporate data science, artificial intelligence, and machine learning into an OpenShift development workflow. You used an example fraud detection model and completed the following tasks: Explored a pre-trained fraud detection model by using a Jupyter notebook. Deployed the model by using OpenShift AI model serving. Refined and trained the model by using automated pipelines. Learned how to train the model by using Ray, a distributed computing framework. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/openshift_ai_tutorial_-_fraud_detection_example/conclusion-tutorial |
Chapter 10. Using config maps with applications | Chapter 10. Using config maps with applications Config maps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The following sections define config maps and how to create and use them. For information on creating config maps, see Creating and using config maps . 10.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 10.2. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 10.2.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 10.2.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. When this pod is run, the output from the echo command run in the test-container container is as follows: 10.2.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. 
Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: | [
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/config-maps |
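The consumption examples in this chapter assume that the special-config and env-config config maps already exist in the project. As a quick, minimal sketch using the same names and keys as the examples above, you could create them from literal values with the oc CLI and then inspect what was stored:

# Config map consumed by the SPECIAL_LEVEL_KEY / SPECIAL_TYPE_KEY examples
oc create configmap special-config \
  --from-literal=special.how=very \
  --from-literal=special.type=charm

# Config map pulled in wholesale through the envFrom stanza
oc create configmap env-config --from-literal=log_level=INFO

# Verify the stored key-value pairs
oc get configmap special-config -o yaml
oc describe configmap env-config

A config map can also be built from a file with --from-file=<key>=<path>, which is the form that naturally produces the one-file-per-key layout used in the volume examples.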
7.3. Colocation of Resources | 7.3. Colocation of Resources A colocation constraint determines that the location of one resource depends on the location of another resource. There is an important side effect of creating a colocation constraint between two resources: it affects the order in which resources are assigned to a node. This is because you cannot place resource A relative to resource B unless you know where resource B is. So when you are creating colocation constraints, it is important to consider whether you should colocate resource A with resource B or resource B with resource A. Another thing to keep in mind when creating colocation constraints is that, assuming resource A is colocated with resource B, the cluster will also take into account resource A's preferences when deciding which node to choose for resource B. The following command creates a colocation constraint. For information on master and slave resources, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . Table 7.4, "Properties of a Colocation Constraint" . summarizes the properties and options for configuring colocation constraints. Table 7.4. Properties of a Colocation Constraint Field Description source_resource The colocation source. If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all. target_resource The colocation target. The cluster will decide where to put this resource first and then decide where to put the source resource. score Positive values indicate the resource should run on the same node. Negative values indicate the resources should not run on the same node. A value of + INFINITY , the default value, indicates that the source_resource must run on the same node as the target_resource . A value of - INFINITY indicates that the source_resource must not run on the same node as the target_resource . 7.3.1. Mandatory Placement Mandatory placement occurs any time the constraint's score is +INFINITY or -INFINITY . In such cases, if the constraint cannot be satisfied, then the source_resource is not permitted to run. For score=INFINITY , this includes cases where the target_resource is not active. If you need myresource1 to always run on the same machine as myresource2 , you would add the following constraint: Because INFINITY was used, if myresource2 cannot run on any of the cluster nodes (for whatever reason) then myresource1 will not be allowed to run. Alternatively, you may want to configure the opposite, a cluster in which myresource1 cannot run on the same machine as myresource2 . In this case use score=-INFINITY Again, by specifying -INFINITY , the constraint is binding. So if the only place left to run is where myresource2 already is, then myresource1 may not run anywhere. 7.3.2. Advisory Placement If mandatory placement is about "must" and "must not", then advisory placement is the "I would prefer if" alternative. For constraints with scores greater than -INFINITY and less than INFINITY , the cluster will try to accommodate your wishes but may ignore them if the alternative is to stop some of the cluster resources. Advisory colocation constraints can combine with other elements of the configuration to behave as if they were mandatory. 7.3.3. Colocating Sets of Resources If your configuration requires that you create a set of resources that is colocated and started in order, you can configure a resource group that contains those resources, as described in Section 6.5, "Resource Groups" . 
There are some situations, however, where configuring the resources that need to be colocated as a resource group is not appropriate: You may need to colocate a set of resources but the resources do not necessarily need to start in order. You may have a resource C that must be colocated with either resource A or B, whichever has started, but there is no relationship between A and B. You may have resources C and D that must be colocated with both resources A and B, but there is no relationship between A and B or between C and D. In these situations, you can create a colocation constraint on a set or sets of resources with the pcs constraint colocation set command. You can set the following options for a set of resources with the pcs constraint colocation set command. sequential , which can be set to true or false to indicate whether the members of the set must be colocated with each other. Setting sequential to false allows the members of this set to be colocated with another set listed later in the constraint, regardless of which members of this set are active. Therefore, this option makes sense only if another set is listed after this one in the constraint; otherwise, the constraint has no effect. role , which can be set to Stopped , Started , Master , or Slave . For information on multistate resources, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . You can set the following constraint options for a set of resources following the setoptions parameter of the pcs constraint colocation set command. kind , to indicate how to enforce the constraint. For information on this option, see Table 7.3, "Properties of an Order Constraint" . symmetrical , to indicate the order in which to stop the resources. If true, which is the default, stop the resources in the reverse order. Default value: true . id , to provide a name for the constraint you are defining. When listing members of a set, each member is colocated with the one before it. For example, "set A B" means "B is colocated with A". However, when listing multiple sets, each set is colocated with the one after it. For example, "set C D sequential=false set A B" means "set C D (where C and D have no relation between each other) is colocated with set A B (where B is colocated with A)". The following command creates a colocation constraint on a set or sets of resources. 7.3.4. Removing Colocation Constraints Use the following command to remove colocation constraints with source_resource . | [
"pcs constraint colocation add [master|slave] source_resource with [master|slave] target_resource [ score ] [ options ]",
"pcs constraint colocation add myresource1 with myresource2 score=INFINITY",
"pcs constraint colocation add myresource1 with myresource2 score=-INFINITY",
"pcs constraint colocation set resource1 resource2 [ resourceN ]... [ options ] [set resourceX resourceY ... [ options ]] [setoptions [ constraint_options ]]",
"pcs constraint colocation remove source_resource target_resource"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-colocationconstraints-haar |
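Pulling the pieces of this section together, the following is a small sketch of the pcs commands you might run for mandatory, advisory, and set-based colocation. The resource names are placeholders, and the score shown for the set constraint is an illustrative choice rather than a required value:

# Mandatory: myresource1 must run on the same node as myresource2
pcs constraint colocation add myresource1 with myresource2 score=INFINITY

# Advisory: prefer to keep myresource3 with myresource2, but let the cluster
# separate them if that is the only way to keep resources running
pcs constraint colocation add myresource3 with myresource2 score=500

# Set-based: C and D (no relationship to each other) are colocated with set A B,
# where B is colocated with A
pcs constraint colocation set C D sequential=false set A B setoptions score=INFINITY

# Review the configured colocation constraints, and remove one if no longer needed
pcs constraint colocation show
pcs constraint colocation remove myresource3 myresource2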
Chapter 3. Connecting to Knative with Kamelets | Chapter 3. Connecting to Knative with Kamelets You can connect Kamelets to Knative destinations (channels or brokers). Red Hat OpenShift Serverless is based on the open source Knative project , which provides portability and consistency across hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. OpenShift Serverless includes support for the Knative Eventing and Knative Serving components. Red Hat OpenShift Serverless, Knative Eventing, and Knative Serving enable you to use an event-driven architecture with serverless applications, decoupling the relationship between event producers and consumers by using a publish-subscribe or event-streaming model. Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and consumers. These events conform to the CloudEvents specifications , which enables creating, parsing, sending, and receiving events in any programming language. You can use Kamelets to send CloudEvents to Knative and send them from Knative to event consumers. Kamelets can translate messages to CloudEvents and you can use them to apply any pre-processing and post-processing of the data within CloudEvents. 3.1. Overview of connecting to Knative with Kamelets If you use a Knative stream-processing framework, you can use Kamelets to connect services and applications to a Knative destination (channel or broker). Figure 3.1 illustrates the flow of connecting source and sink Kamelets to a Knative destination. Figure 3.1: Data flow with Kamelets and a Knative channel Here is an overview of the basic steps for using Kamelets and Kamelet Bindings to connect applications and services to a Knative destination: Set up Knative: Prepare your OpenShift cluster by installing the Camel K and OpenShift Serverless operators. Install the required Knative Serving and Eventing components. Create a Knative channel or broker. Determine which services or applications you want to connect to your Knative channel or broker. View the Kamelet Catalog to find the Kamelets for the source and sink components that you want to add to your integration. Also, determine the required configuration parameters for each Kamelet that you want to use. Create Kamelet Bindings: Create a Kamelet Binding that connects a source Kamelet to a Knative channel (or broker). Create a Kamelet Binding that connects the Knative channel (or broker) to a sink Kamelet. Optionally, manipulate the data that is passing between the Knative channel (or broker) and the data source or sink by adding one or more action Kamelets as intermediary steps within a Kamelet Binding. Optionally, define how to handle errors within a Kamelet Binding. Apply the Kamelet Bindings as resources to the project. The Camel K operator generates a separate Camel integration for each Kamelet Binding. When you configure a Kamelet Binding to use a Knative channel or a broker as the source of events, the Camel K operator materializes the corresponding integration as a Knative Serving service, to leverage the auto-scaling capabilities offered by Knative. 3.2. Setting up Knative Setting up Knative involves installing the required OpenShift operators and creating a Knative channel. 3.2.1. 
Preparing your OpenShift cluster To use Kamelets and OpenShift Serverless, install the following operators, components, and CLI tools: Red Hat Integration - Camel K operator and CLI tool - The operator installs and manages Camel K - a lightweight integration framework that runs natively in the cloud on OpenShift. The kamel CLI tool allows you to access all Camel K features. See the installation instructions in Installing Camel K . OpenShift Serverless operator - Provides a collection of APIs that enables containers, microservices, and functions to run "serverless". Serverless applications can scale up and down (to zero) on demand and be triggered by a number of event sources. When you install the OpenShift Serverless operator, it automatically creates the knative-serving namespace (for installing the Knative Serving component) and the knative-eventing namespace (required for installing the Knative Eventing component). Knative Eventing component Knative Serving component Knative CLI tool ( kn ) - Allows you to create Knative resources from the command line or from within Shell scripts. 3.2.1.1. Installing OpenShift Serverless You can install the OpenShift Serverless Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. The OpenShift Serverless Operator supports both Knative Serving and Knative Eventing features. For more details, see installing OpenShift Serverless Operator . Prerequisites You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. For details on how to install the OpenShift CLI, see Installing the OpenShift CLI . Procedure In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges. In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, enter Serverless to find the OpenShift Serverless Operator . Read the information about the Operator and then click Install to display the Operator subscription page. Select the default subscription settings: Update Channel > Select the channel that matches your OpenShift version, for example, 4.14 Installation Mode > All namespaces on the cluster Approval Strategy > Automatic Note The Approval Strategy > Manual setting is also available if required by your environment. Click Install , and wait a few moments until the Operator is ready for use. Install the required Knative components using the steps in the OpenShift documentation: Installing Knative Serving Installing Knative Eventing (Optional) Download and install the OpenShift Serverless CLI tool: From the Help menu (?) at the top of the OpenShift web console, select Command line tools . Scroll down to the kn - OpenShift Serverless - Command Line Interface section. Click the link to download the binary for your local operating system (Linux, Mac, Windows) Unzip and install the CLI in your system path. To verify that you can access the kn CLI, open a command window and then type the following: kn --help This command shows information about OpenShift Serverless CLI commands. For more details, see the OpenShift Serverless CLI documentation . Additional resources Installing OpenShift Serverless in the OpenShift documentation 3.2.2. 
Creating a Knative channel A Knative channel is a custom resource that forwards events. After events have been sent to a channel from an event source or producer, these events can be sent to multiple Knative services, or other sinks, by using a subscription. This example uses an InMemoryChannel channel, which you use with OpenShift Serverless for development purposes. Note that InMemoryChannel type channels have the following limitations: No event persistence is available. If a pod goes down, events on that pod are lost. InMemoryChannel channels do not implement event ordering, so two events that are received in the channel at the same time can be delivered to a subscriber in any order. If a subscriber rejects an event, there are no re-delivery attempts by default. You can configure re-delivery attempts by modifying the delivery spec in the Subscription object. Prerequisites The OpenShift Serverless operator, Knative Eventing, and Knative Serving components are installed on your OpenShift Container Platform cluster. You have installed the OpenShift Serverless CLI ( kn ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Log in to your OpenShift cluster. Open the project in which you want to create your integration application. For example: oc project camel-k-knative Create a channel by using the Knative ( kn ) CLI command kn channel create <channel_name> --type <channel_type> For example, to create a channel named mychannel : kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel To confirm that the channel now exists, type the following command to list all existing channels: kn channel list You should see your channel in the list. steps Connecting a data source to a Knative destination in a Kamelet Binding Connecting a Knative destination to a data sink in a Kamelet Binding 3.2.3. Creating a Knative broker A Knative broker is a custom resource that defines an event mesh for collecting a pool of CloudEvents. OpenShift Serverless provides a default Knative broker that you can create by using the kn CLI. You can use a broker in a Kamelet Binding, for example, when your application handles multiple event types and you do not want to create a channel for each event type. Prerequisites The OpenShift Serverless operator, Knative Eventing, and Knative Serving components are installed on your OpenShift Container Platform cluster. You have installed the OpenShift Serverless CLI ( kn ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Log in to your OpenShift cluster. Open the project in which you want to create your integration application. For example: oc project camel-k-knative Create the broker by using this Knative ( kn ) CLI command: kn broker create default To confirm that the broker now exists, type the following command to list all existing brokers: kn broker list You should see the default broker in the list. steps Connecting a data source to a Knative destination in a Kamelet Binding Connecting a Knative destination to a data sink in a Kamelet Binding 3.3. Connecting a data source to a Knative destination in a Kamelet Binding To connect a data source to a Knative destination (channel or broker), you create a Kamelet Binding as illustrated in Figure 3.2 . 
Figure 3.2 Connecting a data source to a Knative destination The Knative destination can be a Knative channel or a Knative broker. When you send data to a channel, there is only one event type for the channel. You do not need to specify any property values for the channel in a Kamelet Binding. When you send data to a broker, because the broker can handle more than one event type, you must specify a value for the type property when you reference the broker in a Kamelet Binding. Prerequisites You know the name and type of the Knative channel or broker to which you want to send events. The example in this procedure uses the InMemoryChannel channel named mychannel or the broker named default . For the broker example, the type property value is coffee for coffee events. You know which Kamelet you want to add to your Camel integration and the required instance parameters. The example Kamelet for this procedure is the coffee-source Kamelet. It has an optional parameter, period , that specifies how often to send each event. You can copy the code from Example source Kamelet to a file named coffee-source.kamelet.yaml file and then run the following command to add it as a resource to your namespace: oc apply -f coffee-source.kamelet.yaml Procedure To connect a data source to a Knative destination, create a Kamelet Binding: In an editor of your choice, create a YAML file with the following basic structure: Add a name for the Kamelet Binding. For this example, the name is coffees-to-knative because the binding connects the coffee-source Kamelet to a Knative destination. For the Kamelet Binding's source, specify a data source Kamelet (for example, the coffee-source Kamelet produces events that contain data about coffee) and configure any parameters for the Kamelet. For the Kamelet Binding's sink specify the Knative channel or broker and the required parameters. This example specifies a Knative channel as the sink: This example specifies a Knative broker as the sink: Save the YAML file (for example, coffees-to-knative.yaml ). Log into your OpenShift project. Add the Kamelet Binding as a resource to your OpenShift namespace: oc apply -f <kamelet binding filename> For example: oc apply -f coffees-to-knative.yaml The Camel K operator generates and runs a Camel K integration by using the KameletBinding resource. It might take a few minutes to build. To see the status of the KameletBinding : oc get kameletbindings To see the status of their integrations: oc get integrations To view the integration's log: kamel logs <integration> -n <project> For example: kamel logs coffees-to-knative -n my-camel-knative steps Connecting a Knative destination to a data sink in a Kamelet Binding See also Applying operations to data within a connection Handling errors within a connection 3.4. Connecting a Knative destination to a data sink in a Kamelet Binding To connect a Knative destination to a data sink, you create a Kamelet Binding as illustrated in Figure 3.3 . Figure 3.3 Connecting a Knative destination to a data sink The Knative destination can be a Knative channel or a Knative broker. When you send data from a channel, there is only one event type for the channel. You do not need to specify any property values for the channel in a Kamelet Binding. When you send data from a broker, because the broker can handle more than one event type, you must specify a value for the type property when you reference the broker in a Kamelet Binding. 
Prerequisites You know the name and type of the Knative channel or the name of the broker from which you want to receive events. For a broker, you also know the type of events that you want to receive. The example in this procedure uses the InMemoryChannel channel named mychannel or the broker named mybroker and coffee events (for the type property). These are the same example destinations that are used to receive events from the coffee source in Connecting a data source to a Knative channel in a Kamelet Binding . You know which Kamelet you want to add to your Camel integration and the required instance parameters. The example Kamelet for this procedure is the log-sink Kamelet that is provided in the Kamelet Catalog and is useful for testing and debugging. The showStreams parameter specified to show the message body of the data. Procedure To connect a Knative channel to a data sink, create a Kamelet Binding: In an editor of your choice, create a YAML file with the following basic structure: Add a name for the Kamelet Binding. For this example, the name is knative-to-log because the binding connects the Knative destination to the log-sink Kamelet. For the Kamelet Binding's source, specify the Knative channel or broker and the required parameters. This example specifies a Knative channel as the source: This example specifies a Knative broker as the source: For the Kamelet Binding's sink, specify the data consumer Kamelet (for example, the log-sink Kamelet) and configure any parameters for the Kamelet, for example: Save the YAML file (for example, knative-to-log.yaml ). Log into your OpenShift project. Add the Kamelet Binding as a resource to your OpenShift namespace: oc apply -f <kamelet binding filename> For example: oc apply -f knative-to-log.yaml The Camel K operator generates and runs a Camel K integration by using the KameletBinding resource. It might take a few minutes to build. To see the status of the KameletBinding : oc get kameletbindings To see the status of the integration: oc get integrations To view the integration's log: kamel logs <integration> -n <project> For example: kamel logs knative-to-log -n my-camel-knative In the output, you should see coffee events, for example: To stop a running integration, delete the associated Kamelet Binding resource: oc delete kameletbindings/<kameletbinding-name> For example: oc delete kameletbindings/knative-to-log See also Applying operations to data within a connection Handling errors within a connection | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink: ref: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel name: mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: coffees-to-knative spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: coffee-source properties: period: 5000 sink: ref: kind: Broker apiVersion: eventing.knative.dev/v1 name: default properties: type: coffee",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: ref: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel name: mychannel sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: ref: kind: Broker apiVersion: eventing.knative.dev/v1 name: default properties: type: coffee sink:",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: knative-to-log spec: source: ref: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel name: mychannel sink: ref: apiVersion: camel.apache.org/v1alpha1 kind: Kamelet name: log-sink properties: showStreams: true",
"[1] INFO [sink] (vert.x-worker-thread-1) {\"id\":254,\"uid\":\"8e180ef7-8924-4fc7-ab81-d6058618cc42\",\"blend_name\":\"Good-morning Star\",\"origin\":\"Santander, Colombia\",\"variety\":\"Kaffa\",\"notes\":\"delicate, creamy, lemongrass, granola, soil\",\"intensifier\":\"sharp\"} [1] INFO [sink] (vert.x-worker-thread-2) {\"id\":8169,\"uid\":\"3733c3a5-4ad9-43a3-9acc-d4cd43de6f3d\",\"blend_name\":\"Caf? Java\",\"origin\":\"Nayarit, Mexico\",\"variety\":\"Red Bourbon\",\"notes\":\"unbalanced, full, granola, bittersweet chocolate, nougat\",\"intensifier\":\"delicate\"}"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/integrating_applications_with_kamelets/connecting-to-knative-kamelets |
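As a recap of the channel-based flow in this chapter, the commands below sketch the end-to-end workflow. They assume that the coffee-source.kamelet.yaml, coffees-to-knative.yaml, and knative-to-log.yaml files described above are saved in the current directory and that you are working in the camel-k-knative project used in the examples:

# Create the in-memory channel that both Kamelet Bindings reference
kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel

# Register the example source Kamelet and apply the two bindings
oc apply -f coffee-source.kamelet.yaml
oc apply -f coffees-to-knative.yaml
oc apply -f knative-to-log.yaml

# Watch the bindings and the integrations that the Camel K operator generates
oc get kameletbindings
oc get integrations

# Tail the consumer side to see coffee events flowing through the channel
kamel logs knative-to-log -n camel-k-knative

If you use a broker instead of a channel, the same sequence applies with kn broker create default and the broker-based binding variants shown above.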
Chapter 72. KafkaClientAuthenticationScramSha256 schema reference | Chapter 72. KafkaClientAuthenticationScramSha256 schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha256 schema properties To configure SASL-based SCRAM-SHA-256 authentication, set the type property to scram-sha-256 . The SCRAM-SHA-256 authentication mechanism requires a username and password. 72.1. username Specify the username in the username property. 72.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-256 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-256 client authentication configuration for Kafka Connect authentication: type: scram-sha-256 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field 72.3. KafkaClientAuthenticationScramSha256 schema properties Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be scram-sha-256 . string username Username used for the authentication. string | [
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm",
"authentication: type: scram-sha-256 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaClientAuthenticationScramSha256-reference |
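As a concrete sketch of the workflow described above, the commands below create the password file and the Secret with the same names used in the examples. The password value is a placeholder; substitute your own before running them:

# Write the cleartext password to a file (placeholder value shown)
echo -n 'changeit' > my-connect-password-field.txt

# Create the Secret, using the file name as the key that holds the password
oc create secret generic my-connect-secret-name \
  --from-file=my-connect-password-field=./my-connect-password-field.txt

# Confirm the key exists (the value is stored base64-encoded)
oc get secret my-connect-secret-name -o yaml

# Remove the cleartext file once the Secret has been created
rm my-connect-password-field.txt

The passwordSecret.secretName and passwordSecret.password values in the authentication block then point at this Secret and key, as shown in the Kafka Connect example above.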
Chapter 14. Interoperability | Chapter 14. Interoperability This chapter discusses how to use AMQ C++ in combination with other AMQ components. For an overview of the compatibility of AMQ components, see the product introduction . 14.1. Interoperating with other AMQP clients AMQP messages are composed using the AMQP type system . This common format is one of the reasons AMQP clients in different languages are able to interoperate with each other. When sending messages, AMQ C++ automatically converts language-native types to AMQP-encoded data. When receiving messages, the reverse conversion takes place. Note More information about AMQP types is available at the interactive type reference maintained by the Apache Qpid project. Table 14.1. AMQP types AMQP type Description null An empty value boolean A true or false value char A single Unicode character string A sequence of Unicode characters binary A sequence of bytes byte A signed 8-bit integer short A signed 16-bit integer int A signed 32-bit integer long A signed 64-bit integer ubyte An unsigned 8-bit integer ushort An unsigned 16-bit integer uint An unsigned 32-bit integer ulong An unsigned 64-bit integer float A 32-bit floating point number double A 64-bit floating point number array A sequence of values of a single type list A sequence of values of variable type map A mapping from distinct keys to values uuid A universally unique identifier symbol A 7-bit ASCII string from a constrained domain timestamp An absolute point in time Table 14.2. AMQ C++ types before encoding and after decoding AMQP type AMQ C++ type before encoding AMQ C++ type after decoding null nullptr nullptr boolean bool bool char wchar_t wchar_t string std::string std::string binary proton::binary proton::binary byte int8_t int8_t short int16_t int16_t int int32_t int32_t long int64_t int64_t ubyte uint8_t uint8_t ushort uint16_t uint16_t uint uint32_t uint32_t ulong uint64_t uint64_t float float float double double double list std::vector std::vector map std::map std::map uuid proton::uuid proton::uuid symbol proton::symbol proton::symbol timestamp proton::timestamp proton::timestamp Table 14.3. AMQ C++ and other AMQ client types (1 of 2) AMQ C++ type before encoding AMQ JavaScript type AMQ .NET type nullptr null null bool boolean System.Boolean wchar_t number System.Char std::string string System.String proton::binary string System.Byte[] int8_t number System.SByte int16_t number System.Int16 int32_t number System.Int32 int64_t number System.Int64 uint8_t number System.Byte uint16_t number System.UInt16 uint32_t number System.UInt32 uint64_t number System.UInt64 float number System.Single double number System.Double std::vector Array Amqp.List std::map object Amqp.Map proton::uuid number System.Guid proton::symbol string Amqp.Symbol proton::timestamp number System.DateTime Table 14.4. AMQ C++ and other AMQ client types (2 of 2) AMQ C++ type before encoding AMQ Python type AMQ Ruby type nullptr None nil bool bool true, false wchar_t unicode String std::string unicode String proton::binary bytes String int8_t int Integer int16_t int Integer int32_t long Integer int64_t long Integer uint8_t long Integer uint16_t long Integer uint32_t long Integer uint64_t long Integer float float Float double float Float std::vector list Array std::map dict Hash proton::uuid - - proton::symbol str Symbol proton::timestamp long Time 14.2. Interoperating with AMQ JMS AMQP defines a standard mapping to the JMS messaging model. This section discusses the various aspects of that mapping. 
For more information, see the AMQ JMS Interoperability chapter. JMS message types AMQ C++ provides a single message type whose body type can vary. By contrast, the JMS API uses different message types to represent different kinds of data. The table below indicates how particular body types map to JMS message types. For more explicit control of the resulting JMS message type, you can set the x-opt-jms-msg-type message annotation. See the AMQ JMS Interoperability chapter for more information. Table 14.5. AMQ C++ and JMS message types AMQ C++ body type JMS message type std::string TextMessage nullptr TextMessage proton::binary BytesMessage Any other type ObjectMessage 14.3. Connecting to AMQ Broker AMQ Broker is designed to interoperate with AMQP 1.0 clients. Check the following to ensure the broker is configured for AMQP messaging: Port 5672 in the network firewall is open. The AMQ Broker AMQP acceptor is enabled. See Default acceptor settings . The necessary addresses are configured on the broker. See Addresses, Queues, and Topics . The broker is configured to permit access from your client, and the client is configured to send the required credentials. See Broker Security . 14.4. Connecting to AMQ Interconnect AMQ Interconnect works with any AMQP 1.0 client. Check the following to ensure the components are configured correctly: Port 5672 in the network firewall is open. The router is configured to permit access from your client, and the client is configured to send the required credentials. See Securing network connections . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/interoperability |
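When working through the checklists in sections 14.3 and 14.4, it can save time to confirm basic network reachability before debugging the client itself. The following is a minimal sketch; the hostname is a placeholder and the commands assume standard Linux utilities are available:

# Confirm that the broker or router is reachable on the standard AMQP port
nc -zv broker.example.com 5672

# On the broker or router host, confirm that something is listening on 5672
ss -ltn | grep 5672

# On hosts using firewalld, check whether the port is open
firewall-cmd --list-ports

If the port is reachable but the client still fails to connect, the remaining checklist items (acceptor or listener configuration, addresses, and credentials) are the next places to look.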
Chapter 5. Installing a cluster on Azure with customizations | Chapter 5. Installing a cluster on Azure with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Using the Azure Marketplace offering Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat. To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker or control plane nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. You must update the compute section in the install-config.yaml file with values for publisher , offer , sku , and version before deploying the cluster. You may also update the controlPlane section to deploy control plane machines with the specified image details, or the defaultMachinePlatform section to deploy both control plane and compute machines with the specified image details. Use the latest available image for control plane and compute nodes. Sample install-config.yaml file with the Azure Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 413.92.2023101700 replicas: 3 5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. 
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases.
Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control plane nodes are schedulable. For more information, see "Installing a three-node cluster on Azure". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 5.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1.
Minimum resource requirements

Machine         Operating System                vCPU [1]    Virtual RAM    Storage    Input/Output Per Second (IOPS) [2]
Bootstrap       RHCOS                           4           16 GB          100 GB     300
Control plane   RHCOS                           4           16 GB          100 GB     300
Compute         RHCOS, RHEL 8.6 and later [3]   2           8 GB           100 GB     300

One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, an instance with 2 threads per core, 2 cores, and 1 socket provides (2 x 2) x 1 = 4 vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 5.6.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 5.1.
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 5.6.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 5.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 5.6.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 
4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5.6.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 5.6.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 1 10 14 16 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 17 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.6.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 5.7. Configuring user-defined tags for Azure In OpenShift Container Platform, you can use the tags for grouping resources and for managing resource access and cost. You can define the tags on the Azure resources in the install-config.yaml file only during OpenShift Container Platform cluster creation. You cannot modify the user-defined tags after cluster creation. Support for user-defined tags is available only for the resources created in the Azure Public Cloud. User-defined tags are not supported for the OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.16. User-defined and OpenShift Container Platform specific tags are applied only to the resources created by the OpenShift Container Platform installer and its core operators such as Machine api provider azure Operator, Cluster Ingress Operator, Cluster Image Registry Operator. By default, OpenShift Container Platform installer attaches the OpenShift Container Platform tags to the Azure resources. These OpenShift Container Platform tags are not accessible for the users. You can use the .platform.azure.userTags field in the install-config.yaml file to define the list of user-defined tags as shown in the following install-config.yaml file. 
Sample install-config.yaml file additionalTrustBundlePolicy: Proxyonly 1 apiVersion: v1 baseDomain: catchall.azure.devcluster.openshift.com 2 compute: 3 - architecture: amd64 hyperthreading: Enabled 4 name: worker platform: {} replicas: 3 controlPlane: 5 architecture: amd64 hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: user 7 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: os4-common 9 cloudName: AzurePublicCloud 10 outboundType: Loadbalancer region: southindia 11 userTags: 12 createdBy: user environment: dev 1 Defines the trust bundle policy. 2 Required. The baseDomain parameter specifies the base domain of your cloud provider. The installation program prompts you for this value. 3 The configuration for the machines that comprise compute. The compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - . If you do not provide these parameters and values, the installation program provides the default value. 4 To enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 5 The configuration for the machines that comprise the control plane. The controlPlane section is a single mapping. The first line of the controlPlane section must not begin with a hyphen, - . You can use only one control plane pool. If you do not provide these parameters and values, the installation program provides the default value. 6 To enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 7 The installation program prompts you for this value. 8 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 9 Specifies the resource group for the base domain of the Azure DNS zone. 10 Specifies the name of the Azure cloud environment. You can use the cloudName field to configure the Azure SDK with the Azure API endpoints. If you do not provide value, the default value is Azure Public Cloud. 11 Required. Specifies the name of the Azure region that hosts your cluster. The installation program prompts you for this value. 12 Defines the additional keys and values that the installation program adds as tags to all Azure resources that it creates. The user-defined tags have the following limitations: A tag key can have a maximum of 128 characters. A tag key must begin with a letter, end with a letter, number or underscore, and can contain only letters, numbers, underscores, periods, and hyphens. Tag keys are case-insensitive. Tag keys cannot be name . It cannot have prefixes such as kubernetes.io , openshift.io , microsoft , azure , and windows . A tag value can have a maximum of 256 characters. You can configure a maximum of 10 tags for resource group and resources. 
For more information about Azure tags, see Azure user-defined tags 5.8. Querying user-defined tags for Azure After creating the OpenShift Container Platform cluster, you can access the list of defined tags for the Azure resources. The format of the OpenShift Container Platform tags is kubernetes.io_cluster.<cluster_id>:owned . The cluster_id parameter is the value of .status.infrastructureName present in config.openshift.io/Infrastructure . Query the tags defined for Azure resources by running the following command: USD oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}' Example output [ [ { "key": "createdBy", "value": "user" }, { "key": "environment", "value": "dev" } ] ] 5.9. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.10. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. 
If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 5.10.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 5.10.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 5.10.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 5.3. Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
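As an optional aside (not part of the documented procedure), you can check the architecture of your local environment with the standard uname command and compare it with the architecture of the release image you selected; how you inspect the release image itself depends on how you obtained it, so treat this only as a quick sanity check:

$ uname -m

Example output

x86_64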
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 5.10.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 
2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 5.10.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 5.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
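Note (optional check, not part of the documented procedure) To confirm that the oc CLI prerequisite is met before you export the kubeconfig, a client-only version query is one option; the version string shown here is purely illustrative:

$ oc version --client

Example output

Client Version: 4.16.0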
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 413.92.2023101700 replicas: 3",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"additionalTrustBundlePolicy: Proxyonly 1 apiVersion: v1 baseDomain: catchall.azure.devcluster.openshift.com 2 compute: 3 - architecture: amd64 hyperthreading: Enabled 4 name: worker platform: {} replicas: 3 controlPlane: 5 architecture: amd64 hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: user 7 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: os4-common 9 cloudName: AzurePublicCloud 10 outboundType: Loadbalancer region: southindia 11 userTags: 12 createdBy: user environment: dev",
"oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'",
"[ [ { \"key\": \"createdBy\", \"value\": \"user\" }, { \"key\": \"environment\", \"value\": \"dev\" } ] ]",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure/installing-azure-customizations |
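The manual-mode Secret shown in the commands above expects each Azure credential as a base64-encoded value. As a brief illustration (not part of the referenced document; the IDs are placeholders and the commands assume GNU coreutils base64), the encoded values can be produced as follows:

# Hypothetical values; substitute your own subscription and tenant IDs.
AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
AZURE_TENANT_ID="11111111-1111-1111-1111-111111111111"
# -n avoids encoding a trailing newline; -w0 disables line wrapping.
echo -n "${AZURE_SUBSCRIPTION_ID}" | base64 -w0   # value for azure_subscription_id
echo -n "${AZURE_TENANT_ID}" | base64 -w0         # value for azure_tenant_id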
function::ansi_reset_color | function::ansi_reset_color Name function::ansi_reset_color - Resets Select Graphic Rendition mode. Synopsis Arguments None Description Sends the ANSI escape code that resets the foreground, background, and color attributes to their default values. | [
"ansi_reset_color()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ansi-reset-color |
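A short usage sketch for this function (an illustrative addition, assuming the standard ansi tapset is available and stap can run probes on the host; ansi_set_color is used here only to have an attribute to reset):

# Emit a red message, then restore the terminal's default attributes.
stap -e 'probe begin { ansi_set_color(31); printf("warning in red\n"); ansi_reset_color(); exit() }'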
Eclipse Temurin 8.0.402 release notes | Eclipse Temurin 8.0.402 release notes Red Hat build of OpenJDK 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.402_release_notes/index |
function::errno_str | function::errno_str Name function::errno_str - Symbolic string associated with error code Synopsis Arguments err The error number received Description This function returns the symbolic string associated with the given error code, such as ENOENT for the number 2, or E#3333 for an out-of-range value such as 3333. | [
"errno_str:string(err:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-errno-str |
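An illustrative one-liner (an addition, not taken from the reference page) that prints the two mappings mentioned in the description:

# Expected output: errno 2 -> ENOENT and errno 3333 -> E#3333.
stap -e 'probe begin { printf("errno 2 -> %s\nerrno 3333 -> %s\n", errno_str(2), errno_str(3333)); exit() }'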
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_identity_management/making-open-source-more-inclusive |
16.9. Setting the Hostname | 16.9. Setting the Hostname Setup prompts you to supply a host name for this computer, either as a fully-qualified domain name (FQDN) in the format hostname . domainname or as a short host name in the format hostname . Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, specify the short host name only. Note You may give your system any name provided that the full hostname is unique. The hostname may include letters, numbers and hyphens. Figure 16.24. Setting the hostname If your Red Hat Enterprise Linux system is connected directly to the Internet, you must pay attention to additional considerations to avoid service interruptions or risk action by your upstream service provider. A full discussion of these issues is beyond the scope of this document. Note The installation program does not configure modems. Configure these devices after installation with the Network utility. The settings for your modem are specific to your particular Internet Service Provider (ISP). 16.9.1. Editing Network Connections Important When a Red Hat Enterprise Linux 6.9 installation boots for the first time, it activates any network interfaces that you configured during the installation process. However, the installer does not prompt you to configure network interfaces on some common installation paths, for example, when you install Red Hat Enterprise Linux from a DVD to a local hard drive. When you install Red Hat Enterprise Linux from a local installation source to a local storage device, be sure to configure at least one network interface manually if you require network access when the system boots for the first time. You will need to select the Connect automatically option manually when editing the connection. Note To change your network configuration after you have completed the installation, use the Network Administration Tool . Type the system-config-network command in a shell prompt to launch the Network Administration Tool . If you are not root, it prompts you for the root password to continue. The Network Administration Tool is now deprecated and will be replaced by NetworkManager during the lifetime of Red Hat Enterprise Linux 6. To configure a network connection manually, click the button Configure Network . The Network Connections dialog appears that allows you to configure wired, wireless, mobile broadband, InfiniBand, VPN, DSL, VLAN, and bonded connections for the system using the NetworkManager tool. A full description of all configurations possible with NetworkManager is beyond the scope of this guide. This section only details the most typical scenario of how to configure wired connections during installation. Configuration of other types of network is broadly similar, although the specific parameters that you must configure are necessarily different. Figure 16.25. Network Connections To add a new connection, click Add and select a connection type from the menu. To modify an existing connection, select it in the list and click Edit . In either case, a dialog box appears with a set of tabs that is appropriate to the particular connection type, as described below. To remove a connection, select it in the list and click Delete . When you have finished editing network settings, click Apply to save the new configuration. 
If you reconfigured a device that was already active during installation, you must restart the device to use the new configuration - refer to Section 9.7.1.6, "Restart a network device" . 16.9.1.1. Options common to all types of connection Certain configuration options are common to all connection types. Specify a name for the connection in the Connection name name field. Select Connect automatically to start the connection automatically when the system boots. When NetworkManager runs on an installed system, the Available to all users option controls whether a network configuration is available system-wide or not. During installation, ensure that Available to all users remains selected for any network interface that you configure. 16.9.1.2. The Wired tab Use the Wired tab to specify or change the media access control (MAC) address for the network adapter, and either set the maximum transmission unit (MTU, in bytes) that can pass through the interface. Figure 16.26. The Wired tab 16.9.1.3. The 802.1x Security tab Use the 802.1x Security tab to configure 802.1X port-based network access control (PNAC). Select Use 802.1X security for this connection to enable access control, then specify details of your network. The configuration options include: Authentication Choose one of the following methods of authentication: TLS for Transport Layer Security Tunneled TLS for Tunneled Transport Layer Security , otherwise known as TTLS, or EAP-TTLS Protected EAP (PEAP) for Protected Extensible Authentication Protocol Identity Provide the identity of this server. User certificate Browse to a personal X.509 certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). CA certificate Browse to a X.509 certificate authority certificate file encoded with Distinguished Encoding Rules (DER) or Privacy Enhanced Mail (PEM). Private key Browse to a private key file encoded with Distinguished Encoding Rules (DER), Privacy Enhanced Mail (PEM), or the Personal Information Exchange Syntax Standard (PKCS#12). Private key password The password for the private key specified in the Private key field. Select Show password to make the password visible as you type it. Figure 16.27. The 802.1x Security tab 16.9.1.4. The IPv4 Settings tab Use the IPv4 Settings tab tab to configure the IPv4 parameters for the previously selected network connection. Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Automatic (DHCP) IPv4 parameters are configured by the DHCP service on the network. Automatic (DHCP) addresses only The IPv4 address, netmask, and gateway address are configured by the DHCP service on the network, but DNS servers and search domains must be configured manually. Manual IPv4 parameters are configured manually for a static configuration. Link-Local Only A link-local address in the 169.254/16 range is assigned to the interface. Shared to other computers The system is configured to provide network access to other computers. The interface is assigned an address in the 10.42.x.1/24 range, a DHCP server and DNS server are started, and the interface is connected to the default network connection on the system with network address translation (NAT). Disabled IPv4 is disabled for this connection. 
If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv4 addressing for this connection to complete check box to allow the system to make this connection on an IPv6-enabled network if IPv4 configuration fails but IPv6 configuration succeeds. Figure 16.28. The IPv4 Settings tab 16.9.1.4.1. Editing IPv4 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv4 routes dialog appears. Figure 16.29. The Editing IPv4 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Ignore automatically obtained routes to make the interface use only the routes specified for it here. Select Use this connection only for resources on its network to restrict connections only to the local network. 16.9.1.5. The IPv6 Settings tab Use the IPv6 Settings tab tab to configure the IPv6 parameters for the previously selected network connection. Use the Method drop-down menu to specify which settings the system should attempt to obtain from a Dynamic Host Configuration Protocol (DHCP) service running on the network. Choose from the following options: Ignore IPv6 is ignored for this connection. Automatic NetworkManager uses router advertisement (RA) to create an automatic, stateless configuration. Automatic, addresses only NetworkManager uses RA to create an automatic, stateless configuration, but DNS servers and search domains are ignored and must be configured manually. Automatic, DHCP only NetworkManager does not use RA, but requests information from DHCPv6 directly to create a stateful configuration. Manual IPv6 parameters are configured manually for a static configuration. Link-Local Only A link-local address with the fe80::/10 prefix is assigned to the interface. If you selected a method that requires you to supply manual parameters, enter details of the IP address for this interface, the netmask, and the gateway in the Addresses field. Use the Add and Delete buttons to add or remove addresses. Enter a comma-separated list of DNS servers in the DNS servers field, and a comma-separated list of domains in the Search domains field for any domains that you want to include in name server lookups. Optionally, enter a name for this network connection in the DHCP client ID field. This name must be unique on the subnet. When you assign a meaningful DHCP client ID to a connection, it is easy to identify this connection when troubleshooting network problems. Deselect the Require IPv6 addressing for this connection to complete check box to allow the system to make this connection on an IPv4-enabled network if IPv6 configuration fails but IPv4 configuration succeeds. Figure 16.30. The IPv6 Settings tab 16.9.1.5.1. 
Editing IPv6 routes Red Hat Enterprise Linux configures a number of routes automatically based on the IP addresses of a device. To edit additional routes, click the Routes button. The Editing IPv6 routes dialog appears. Figure 16.31. The Editing IPv6 Routes dialog Click Add to add the IP address, netmask, gateway address, and metric for a new static route. Select Use this connection only for resources on its network to restrict connections only to the local network. 16.9.1.6. Restart a network device If you reconfigured a network that was already in use during installation, you must disconnect and reconnect the device in anaconda for the changes to take effect. Anaconda uses interface configuration (ifcfg) files to communicate with NetworkManager . A device becomes disconnected when its ifcfg file is removed, and becomes reconnected when its ifcfg file is restored, as long as ONBOOT=yes is set. Refer to the Red Hat Enterprise Linux 6.9 Deployment Guide available from https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html for more information about interface configuration files. Press Ctrl + Alt + F2 to switch to virtual terminal tty2 . Move the interface configuration file to a temporary location: where device_name is the device that you just reconfigured. For example, ifcfg-eth0 is the ifcfg file for eth0 . The device is now disconnected in anaconda . Open the interface configuration file in the vi editor: Verify that the interface configuration file contains the line ONBOOT=yes . If the file does not already contain the line, add it now and save the file. Exit the vi editor. Move the interface configuration file back to the /etc/sysconfig/network-scripts/ directory: The device is now reconnected in anaconda . Press Ctrl + Alt + F6 to return to anaconda . | [
"mv /etc/sysconfig/network-scripts/ifcfg- device_name /tmp",
"vi /tmp/ifcfg- device_name",
"mv /tmp/ifcfg- device_name /etc/sysconfig/network-scripts/"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-netconfig-ppc |
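For reference, a minimal interface configuration file of the kind edited in the procedure above might look as follows; the device name and addressing method are hypothetical, and the line that matters for reconnection in anaconda is ONBOOT=yes:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative contents only)
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes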
Chapter 3. Setting up the environment for an OpenShift installation | Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Restart firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command. 
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using different network segments or subnets for the remote worker nodes than the subnet used by the control plane and local worker nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. However, the network must be configured properly before installing OpenShift Container Platform to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. Important All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. Deploying a cluster with multiple subnets requires using virtual media. This procedure details the network configuration required to allow the remote worker nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote worker nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local worker nodes. The second subnet ( 192.168.0.0 ) contains the edge worker nodes. 
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote worker node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully by running the following command: # ip route Repeat the steps for each worker node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. Once you have configured the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet by running the following command: USD ping <remote_worker_node_ip_address> If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet by running the following command: USD ping <control_plane_node_ip_address> If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. 3.6. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.15 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.7. 
Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.8. Optional: Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will timeout. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstraposimage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?" 
USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.9. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost . Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.10. Configuring the install-config.yaml file 3.10.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . 
Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 4 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticDNS configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare-metal network. 5 See the BMC addressing sections for more options. 6 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. 
Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.10.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. 
For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. 
clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.10.3. 
BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Redfish APIs Several redfish API endpoints are called onto your BCM when using the bare-metal installer-provisioned infrastructure. Important You need to ensure that your BMC supports all of the redfish APIs before installation. 
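A quick preflight check (an illustrative sketch, not part of the product documentation; the credentials, server address, and system ID are placeholders, and -k is only appropriate when the BMC uses a self-signed certificate) is to query the system resource and confirm that reset actions are advertised before working through the API list that follows:

# Inspect the advertised ComputerSystem actions, for example #ComputerSystem.Reset and its allowable reset types.
curl -sku "$USER:$PASS" "https://$SERVER/redfish/v1/Systems/$SystemID" | jq '.Actions'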
List of redfish APIs Power on curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Power off curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Temporary boot using pxe curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}} Set BIOS boot mode using Legacy or UEFI curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}} List of redfish-virtualmedia APIs Set temporary boot device using cd or dvd curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Mount virtual media curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for redfish APIs are the same for the redfish-virtualmedia APIs. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.10.4. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. 
The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.10.5. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. 
BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.10.6. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.10.7. BMC addressing for Cisco CIMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Cisco UCS UCSX-210C-M6 hardware, Red Hat supports Cisco Integrated Management Controller (CIMC). Table 3.6. BMC address format for Cisco CIMC Protocol Address Format Redfish virtual media redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> To enable Redfish virtual media for Cisco UCS UCSX-210C-M6 hardware, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration by using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True 3.10.8. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.7. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 
wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.10.9. Optional: Setting proxy settings To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 3.10.10. Optional: Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.10.11. Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. 
Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 3.10.12. Optional: Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values.
Important After deploying the cluster, you cannot modify the networkConfig configuration setting of install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.10.13. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane. Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.10.14. Optional: Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the RAM disk and the cluster configuration files. If you do not configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). 
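For example, on a Red Hat Enterprise Linux provisioner node you can typically install the NMState CLI with the following command, assuming the nmstate package is available in your enabled repositories: USD sudo dnf install -y nmstate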
Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.10.15. Optional: Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. 
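As with the other NMState examples in this section, you can optionally validate the dual port NIC configuration before adding it to the install-config.yaml file by running USD nmstatectl gc <nmstate_yaml_file> , replacing <nmstate_yaml_file> with the name of the configuration file.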
Procedure Add the NMState configuration to the networkConfig field for hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.10.16. Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings.
Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.10.17. Optional: Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.11. Manifest configuration files 3.11.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.11.2. Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. 
Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.15.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.11.3. Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. 
Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.11.4. Optional: Deploying routers on worker nodes During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Important Deploying a cluster with only one worker node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one worker, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installer deploys two routers. If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default. Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1 . If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.11.5. Optional: Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. 
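For example, create the manifests with the same command shown in "Creating the OpenShift Container Platform manifests": USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests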
Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Bare-metal configuration 3.11.6. Optional: Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) using baseboard management controllers (BMCs) during the installation process. Note If you want to configure a hardware RAID for the node, verify that the node has a supported RAID controller. OpenShift Container Platform 4.15 does not support software RAID. Table 3.8. Hardware RAID support by vendor Vendor BMC and protocol Firmware version RAID levels Fujitsu iRMC N/A 0, 1, 5, 6, and 10 Dell iDRAC with Redfish Version 6.10.30.20 or later 0, 1, and 5 Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.15 does not support software RAID. If you added a specific RAID configuration to the spec section, this causes the node to delete the original RAID configuration in the preparing phase and perform a specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you added an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.11.7. Optional: Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. The following MachineSet manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node. 
Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Bare-metal configuration Partition naming scheme 3.12. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that a container on a system will serve. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.12.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.12.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . 
Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. 
The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-baremetal-install 3.12.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: USD echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. USD sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: USD echo "imageContentSources:" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml 3.13. Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created the OpenShift Container Platform manifests. ❏ (Optional) Deployed routers on worker nodes. ❏ (Optional) Created a disconnected registry. ❏ (Optional) Validate disconnected registry settings if in use. | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_worker_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.15",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: *\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.15.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow |
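A quick, optional sanity check of the disconnected mirror, offered only as a hedged sketch and not as part of the documented workflow above: using the same placeholder values set for the mirroring commands, the mirrored release payload can be inspected before the installer is run.

oc adm release info -a <path_to_pull_secret> <local_registry_host_name>:<local_registry_host_port>/<local_repository_name>:<release_version>-<cluster_architecture>

If this prints the release metadata from the local registry host, the imageContentSources entries appended to install-config.yaml should resolve against the mirror during installation.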
Part VIII. Testing a decision service using test scenarios | Part VIII. Testing a decision service using test scenarios As a business analyst or business rules developer, you can use test scenarios in Business Central to test a decision service before a project is deployed. You can test DMN-based and rules-based decision services to ensure these are functioning properly and as expected. Also, you can test a decision service at any time during project development. Prerequisites The space and project for the decision service have been created in Business Central. For details, see Getting started with decision services . Business rules and their associated data objects have been defined for the rules-based decision service. For details, see Designing a decision service using guided decision tables . DMN decision logic and its associated custom data types have been defined for the DMN-based decision service. For details, see Designing a decision service using DMN models . Note Having defined business rules is not a technical prerequisite for test scenarios, because the scenarios can test the defined data that constitutes the business rules. However, creating the rules first is helpful so that you can also test entire rules in test scenarios and so that the scenarios more closely match the intended decision service. For DMN-based test scenarios ensure that the DMN decision logic and its associated custom data types are defined for the decision service. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/assembly-test-scenarios |
Architecture | Architecture Red Hat Advanced Cluster Security for Kubernetes 4.5 System architecture Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html-single/architecture/index |
2.10. Converting a Conventional Spec File | 2.10. Converting a Conventional Spec File This section discusses converting a conventional spec file into a Software Collection spec file so that the converted spec file can be used in both the conventional package and the Software Collection. 2.10.1. Example of the Converted Spec File To see what the diff file comparing a conventional spec file with a converted spec file looks like, refer to the following example: --- a/less.spec +++ b/less.spec @@ -1,10 +1,14 @@ +%{?scl:%global _scl_prefix /opt/ provider } +%{?scl:%scl_package less} +%{!?scl:%global pkg_name %{name}} + Summary: A text file browser similar to more, but better -Name: less +Name: %{?scl_prefix}less Version: 444 Release: 7%{?dist} License: GPLv3+ Group: Applications/Text -Source: http://www.greenwoodsoftware.com/less/%{name}-%{version}.tar.gz +Source: http://www.greenwoodsoftware.com/less/%{pkg_name}-%{version}.tar.gz Source1: lesspipe.sh Source2: less.sh Source3: less.csh @@ -19,6 +22,7 @@ URL: http://www.greenwoodsoftware.com/less/ Requires: groff BuildRequires: ncurses-devel BuildRequires: autoconf automake libtool -Obsoletes: lesspipe < 1.0 +Obsoletes: %{?scl_prefix}lesspipe < 1.0 +%{?scl:Requires: %scl_runtime} %description The less utility is a text file browser that resembles more, but has @@ -31,7 +35,7 @@ You should install less because it is a basic utility for viewing text files, and you'll use it frequently. %prep -%setup -q +%setup -q -n %{pkg_name}-%{version} %patch1 -p1 -b .Foption %patch2 -p1 -b .search %patch4 -p1 -b .time @@ -51,16 +55,16 @@ make CC="gcc USDRPM_OPT_FLAGS -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOU %install rm -rf USDRPM_BUILD_ROOT make DESTDIR=USDRPM_BUILD_ROOT install -mkdir -p USDRPM_BUILD_ROOT/etc/profile.d +mkdir -p USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d install -p -c -m 755 %{SOURCE1} USDRPM_BUILD_ROOT/%{_bindir} -install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT/etc/profile.d -install -p -c -m 644 %{SOURCE3} USDRPM_BUILD_ROOT/etc/profile.d -ls -la USDRPM_BUILD_ROOT/etc/profile.d +install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d +install -p -c -m 644 %{SOURCE3} USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d +ls -la USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d %files %defattr(-,root,root,-) %doc LICENSE -/etc/profile.d/* +%{_sysconfdir}/profile.d/* %{_bindir}/* %{_mandir}/man1/* 2.10.2. Converting Tags and Macro Definitions The following steps show how to convert tags and macro definitions in a conventional spec file into a Software Collection spec file. Procedure 2.1. Converting tags and macro definitions You can change the location of the root directory by defining the %_scl_prefix macro above the %scl_package macro : %{?scl:%global _scl_prefix /opt/ provider } Add the %scl_package macro to the spec file. Place the macro in front of the spec file preamble as follows: %{?scl:%scl_package package_name } You are advised to define the %pkg_name macro in the spec file preamble in case the package is not built for the Software Collection: %{!?scl:%global pkg_name %{name}} Consequently, you can use the %pkg_name macro to define the original name of the package wherever it is needed in the spec file that you can then use for building both the conventional package and the Software Collection. 
Change the Name tag in the spec file preamble as follows: Name: %{?scl_prefix} package_name If you are building or linking with other Software Collection packages, then prefix the names of those Software Collection packages in the Requires and BuildRequires tags with %{?scl_prefix} as follows: Requires: %{?scl_prefix}ifconfig When depending on the system versions of packages, you should avoid using versioned Requires or BuildRequires . If you need to depend on a package that could be updated by the system, consider including that package in your Software Collection, or remember to rebuild your Software Collection when the system package updates. To check that all essential Software Collection's packages are dependencies of the main metapackage, add the following macro after the BuildRequires or Requires tags in the spec file: %{?scl:Requires: %scl_runtime} Prefix the Obsoletes , Conflicts and BuildConflicts tags with %{?scl_prefix} . This is to ensure that the Software Collection can be used to deploy new packages to older systems without having the packages specified, for example, by Obsolete removed from the base system installation. For example: Obsoletes: %{?scl_prefix}lesspipe < 1.0 Prefix the Provides tag with %{?scl_prefix} , as in the following example: Provides: %{?scl_prefix}more 2.10.3. Converting Subpackages For any subpackages that define their name with the -n option, prefix their name with %{?scl_prefix} , as in the following example: %package -n %{?scl_prefix}more Prefixing applies not only to the %package macro, but also for %description and %files . For example: %description -n %{?scl_prefix}rubygems RubyGems is the Ruby standard for publishing and managing third party libraries. In case the subpackage requires the main package, make sure to also adjust the Requires tag in that subpackage so that the tag uses %{?scl_prefix}%{pkg_name} . For example: Requires: %{?scl_prefix}%{pkg_name} = %{version}-%{release} 2.10.4. Converting RPM Scripts This section describes general rules for converting RPM scripts that can often be found in the %prep , %build , %install , %check , %pre , and %post sections of a conventional spec file. Replace all occurrences of %name with %pkg_name . Most importantly, this includes adjusting the %setup macro. Adjust the %setup macro in the %prep section of the spec file so that the macro can deal with a different package name in the Software Collection environment: %setup -q -n %{pkg_name}-%{version} Note that the %setup macro is required and that you must always use the macro with the -n option to successfully build your Software Collection. If you are using any of the %_root_ macros to point to the system file system hierarchy, you must use conditionals for these macros so that you can then use the spec file for building both the conventional package and the Software Collection. Edit the macros as in the following example: mkdir -p %{?scl:%_root_sysconfdir}%{?!scl:%_sysconfdir} When building Software Collection packages that depend on other Software Collection packages, it is often important to ensure that the scl enable functionality links properly or run proper binaries, and so on. One of the examples where this is needed is compiling against a Software Collection library or running an interpreted script with the interpreter in the Software Collection. 
Wrap the script using the %{?scl: prefix, as in the following example: %{?scl:scl enable %scl - << \EOF} set -e ruby example.rb RUBYOPT="-Ilib" ruby bar.rb # The rest of the script contents goes here. %{?scl:EOF} It is important to specify set -e in the script so that the script behavior is consistent regardless of whether the script is executed in the rpm shell or the scl environment. Pay attention to any scripts that are executed during the Software Collection package installation, such as: %pretrans , %pre , %post , %postun , %posttrans , %triggerin , %triggerun , and %triggerpostun . If you use the scl enable functionality in those scripts, you are advised to start with an empty environment to avoid any unintentional collisions with the base system installation. To do so, use env -i - before enabling the Software Collection, as in the following example: %posttrans %{?scl:env -i - scl enable %{scl} - << \EOF} %vagrant_plugin_register %{vagrant_plugin_name} %{?scl:EOF} All hardcoded paths found in RPM scripts must be replaced with proper macros. For example, replace all occurrences of /usr/share with %{_datadir} . This is needed because the USDRPM_BUILD_ROOT variable and the %{build_root} macro are not relocated by the scl macro. 2.10.5. Software Collection Automatic Provides and Requires and Filtering Support Important The functionality described in this section is not available in Red Hat Enterprise Linux 6. RPM in Red Hat Enterprise Linux 7 features support for automatic Provides and Requires and filtering. For example, for all Python libraries, RPM automatically adds the following Requires : Requires: python(abi) = (version) As explained in Section 2.10, "Converting a Conventional Spec File" , you should prefix this Requires with %{?scl_prefix} when converting your conventional RPM package: Requires: %{?scl_prefix}python(abi) = (version)) Keep in mind that the scripts searching for these dependencies must sometimes be rewritten for your Software Collection, as the original RPM scripts are not extensible enough, and, in some cases, filtering is not usable. For example, to rewrite automatic Python Provides and Requires , add the following lines in the macros.%{scl}-config macro file: %__python_provides /usr/lib/rpm/pythondeps-scl.sh --provides %{_scl_root} %{scl_prefix} %__python_requires /usr/lib/rpm/pythondeps-scl.sh --requires %{_scl_root} %{scl_prefix} The /usr/lib/rpm/pythondeps-scl.sh file is based on a pythondeps.sh file from the conventional package and adjusts search paths. 
If there are Provides or Requires that you need to adjust, for example, a pkg_config Provides , there are two ways to do it: Add the following lines in the macros.%{scl}-config macro file so that it applies to all packages in the Software Collection: %_use_internal_dependency_generator 0 %__deploop() while read FILE; do /usr/lib/rpm/rpmdeps -%{1} USD{FILE}; done | /bin/sort -u %__find_provides /bin/sh -c "%{?__filter_prov_cmd} %{__deploop P} %{?__filter_from_prov}" %__find_requires /bin/sh -c "%{?__filter_req_cmd} %{__deploop R} %{?__filter_from_req}" # Handle pkgconfig's virtual Provides and Requires %__filter_from_req | %{__sed} -e 's|pkgconfig|%{?scl_prefix}pkgconfig|g' %__filter_from_prov | %{__sed} -e 's|pkgconfig|%{?scl_prefix}pkgconfig|g' Or, alternatively, add the following lines after tag definitions in every spec file for which you want to filter Provides or Requires : %{?scl:%filter_from_provides s|pkgconfig|%{?scl_prefix}pkgconfig|g} %{?scl:%filter_from_requires s|pkgconfig|%{?scl_prefix}pkgconfig|g} %{?scl:%filter_setup} Important When using filters, you need to pay attention to the automatic dependencies you change. For example, if the conventional package contains Requires: pkgconfig(package_1) and Requires: pkgconfig(package_2) , and only package_2 is included in the Software Collection, ensure that you do not filter the Requires tag for package_1 . 2.10.6. Software Collection Macro Files Support In some cases, you may need to ship macro files with your Software Collection packages. They are located in the %{?scl:%{_root_sysconfdir}}%{!?scl:%{_sysconfdir}}/rpm/ directory, which corresponds to the /etc/rpm/ directory for conventional packages. When shipping macro files, ensure that: You rename the macro files by appending .%{scl} to their names so that they do not conflict with the files from the base system installation. The macros in the macro files are either not expanded, or they are using conditionals, as in the following example: %__python2 %{_bindir}/python %python2_sitelib %(%{?scl:scl enable %scl '}%{__python2} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"%{?scl:'}) As another example, there may be a situation where you need to create a Software Collection mypython that depends on a Software Collection python26 . The python26 Software Collection defines the %{__python2} macro as in the above sample. This macro will evaluate to /opt/provider/mypython/root/usr/bin/python2 , but the python2 binary is only available in the python26 Software Collection ( /opt/provider/python26/root/usr/bin/python2 ). To be able to build software in the mypython Software Collection environment, ensure that: The macros.python.python26 macro file, which is a part of the python26-python-devel package, contains the following line: %__python26_python2 /opt/provider/python26/root/usr/bin/python2 And the macro file in the python26-build subpackage, and also the build subpackage in any depending Software Collection, contains the following line: %scl_package_override() {%global __python2 %__python26_python2} This will redefine the %{__python2} macro only if the build subpackage from a corresponding Software Collection is present, which usually means that you want to build software for that Software Collection. 2.10.7. Software Collection Shebang Support A shebang is a sequence of characters at the beginning of a script that is used as an interpreter directive. 
The shebang is processed by the automatic dependency generator and it points to a certain location, possibly in the system root file system. When the automatic dependency generator processes the shebang, it adds dependencies according to the interpreters they point to. From the Software Collection point of view, there are two types of shebangs: #!/usr/bin/env example This shebang instructs the /usr/bin/env program to run the interpreter. The automatic dependency generator will create a dependency on the /usr/bin/env program, as expected. If the USDPATH environment variable is redefined properly in the enable scriptlet, the example interpreter is found in the Software Collection file system hierarchy, as expected. You are advised to rewrite the shebang in your Software Collection package so that the shebang specifies the full path to the interpreter located in the Software Collection file system hierarchy. #!/usr/bin/ example This shebang specifies the direct path to the interpreter. The automatic dependency generator will create a dependency on the /usr/bin/example interpreter located outside the Software Collection file system hierarchy. However, when building a package for your Software Collection, you often want to create a dependency on the %{?_scl_root}/usr/bin/example interpreter located in the Software Collection file system hierarchy. Keep in mind that even when you properly redefine the USDPATH environment variable, this has no effect on what interpreter is used. The system version of the interpreter located outside the Software Collection file system hierarchy is always used. In most cases, this is not desired. If you are using this type of shebang and you want the shebang to point to the Software Collection file system hierarchy when building your Software Collection package, use a command like the following: find %{buildroot} -type f | \ xargs sed -i -e '1 s"^#! /usr/bin/example "#!%{?_scl_root} /usr/bin/example "' where /usr/bin/example is the interpreter you want to use. 2.10.8. Making a Software Collection Depend on Another Software Collection To make one Software Collection depend on a package from another Software Collection, you need to adjust the BuildRequires and Requires tags in the dependent Software Collection's spec file so that these tags properly define the dependency. For example, to define dependencies on two Software Collections named software_collection_1 and software_collection_2 , add the following three lines to your application's spec file: BuildRequires: scl-utils-build Requires: %scl_require software_collection_1 Requires: %scl_require software_collection_2 Ensure that the spec file also contains the %scl_package macro in front of the spec file preamble, for example: %{?scl:%scl_package less } Note that the %scl_package macro must be included in every spec file of your Software Collection. You can also use the %scl_require_package macro to define dependencies on a particular package from a specific Software Collection, as in the following example: BuildRequires: scl-utils-build Requires: %scl_require_package software_collection_1 package_name | [
"--- a/less.spec +++ b/less.spec @@ -1,10 +1,14 @@ +%{?scl:%global _scl_prefix /opt/ provider } +%{?scl:%scl_package less} +%{!?scl:%global pkg_name %{name}} + Summary: A text file browser similar to more, but better -Name: less +Name: %{?scl_prefix}less Version: 444 Release: 7%{?dist} License: GPLv3+ Group: Applications/Text -Source: http://www.greenwoodsoftware.com/less/%{name}-%{version}.tar.gz +Source: http://www.greenwoodsoftware.com/less/%{pkg_name}-%{version}.tar.gz Source1: lesspipe.sh Source2: less.sh Source3: less.csh @@ -19,6 +22,7 @@ URL: http://www.greenwoodsoftware.com/less/ Requires: groff BuildRequires: ncurses-devel BuildRequires: autoconf automake libtool -Obsoletes: lesspipe < 1.0 +Obsoletes: %{?scl_prefix}lesspipe < 1.0 +%{?scl:Requires: %scl_runtime} %description The less utility is a text file browser that resembles more, but has @@ -31,7 +35,7 @@ You should install less because it is a basic utility for viewing text files, and you'll use it frequently. %prep -%setup -q +%setup -q -n %{pkg_name}-%{version} %patch1 -p1 -b .Foption %patch2 -p1 -b .search %patch4 -p1 -b .time @@ -51,16 +55,16 @@ make CC=\"gcc USDRPM_OPT_FLAGS -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOU %install rm -rf USDRPM_BUILD_ROOT make DESTDIR=USDRPM_BUILD_ROOT install -mkdir -p USDRPM_BUILD_ROOT/etc/profile.d +mkdir -p USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d install -p -c -m 755 %{SOURCE1} USDRPM_BUILD_ROOT/%{_bindir} -install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT/etc/profile.d -install -p -c -m 644 %{SOURCE3} USDRPM_BUILD_ROOT/etc/profile.d -ls -la USDRPM_BUILD_ROOT/etc/profile.d +install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d +install -p -c -m 644 %{SOURCE3} USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d +ls -la USDRPM_BUILD_ROOT%{_sysconfdir}/profile.d %files %defattr(-,root,root,-) %doc LICENSE -/etc/profile.d/* +%{_sysconfdir}/profile.d/* %{_bindir}/* %{_mandir}/man1/*",
"%{?scl:%global _scl_prefix /opt/ provider }",
"%{?scl:%scl_package package_name }",
"%{!?scl:%global pkg_name %{name}}",
"Name: %{?scl_prefix} package_name",
"Requires: %{?scl_prefix}ifconfig",
"%{?scl:Requires: %scl_runtime}",
"Obsoletes: %{?scl_prefix}lesspipe < 1.0",
"Provides: %{?scl_prefix}more",
"%package -n %{?scl_prefix}more",
"%description -n %{?scl_prefix}rubygems RubyGems is the Ruby standard for publishing and managing third party libraries.",
"Requires: %{?scl_prefix}%{pkg_name} = %{version}-%{release}",
"%setup -q -n %{pkg_name}-%{version}",
"mkdir -p %{?scl:%_root_sysconfdir}%{?!scl:%_sysconfdir}",
"%{?scl:scl enable %scl - << \\EOF} set -e ruby example.rb RUBYOPT=\"-Ilib\" ruby bar.rb # The rest of the script contents goes here. %{?scl:EOF}",
"%posttrans %{?scl:env -i - scl enable %{scl} - << \\EOF} %vagrant_plugin_register %{vagrant_plugin_name} %{?scl:EOF}",
"Requires: python(abi) = (version)",
"Requires: %{?scl_prefix}python(abi) = (version))",
"%__python_provides /usr/lib/rpm/pythondeps-scl.sh --provides %{_scl_root} %{scl_prefix} %__python_requires /usr/lib/rpm/pythondeps-scl.sh --requires %{_scl_root} %{scl_prefix}",
"%_use_internal_dependency_generator 0 %__deploop() while read FILE; do /usr/lib/rpm/rpmdeps -%{1} USD{FILE}; done | /bin/sort -u %__find_provides /bin/sh -c \"%{?__filter_prov_cmd} %{__deploop P} %{?__filter_from_prov}\" %__find_requires /bin/sh -c \"%{?__filter_req_cmd} %{__deploop R} %{?__filter_from_req}\" Handle pkgconfig's virtual Provides and Requires %__filter_from_req | %{__sed} -e 's|pkgconfig|%{?scl_prefix}pkgconfig|g' %__filter_from_prov | %{__sed} -e 's|pkgconfig|%{?scl_prefix}pkgconfig|g'",
"%{?scl:%filter_from_provides s|pkgconfig|%{?scl_prefix}pkgconfig|g} %{?scl:%filter_from_requires s|pkgconfig|%{?scl_prefix}pkgconfig|g} %{?scl:%filter_setup}",
"%__python2 %{_bindir}/python %python2_sitelib %(%{?scl:scl enable %scl '}%{__python2} -c \"from distutils.sysconfig import get_python_lib; print(get_python_lib())\"%{?scl:'})",
"%__python26_python2 /opt/provider/python26/root/usr/bin/python2",
"%scl_package_override() {%global __python2 %__python26_python2}",
"find %{buildroot} -type f | xargs sed -i -e '1 s\"^#! /usr/bin/example \"#!%{?_scl_root} /usr/bin/example \"'",
"BuildRequires: scl-utils-build Requires: %scl_require software_collection_1 Requires: %scl_require software_collection_2",
"%{?scl:%scl_package less }",
"BuildRequires: scl-utils-build Requires: %scl_require_package software_collection_1 package_name"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-Converting_a_Conventional_Spec_File |
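A minimal sketch of how the converted, conditionalized spec file can be exercised both ways, assuming the scl-utils-build package is installed so that the %scl_package family of macros is available; the collection name myorganization1 is an assumed example, and in a full build the scl macro is normally supplied by the Software Collection's build subpackage rather than on the command line:

rpmbuild -ba less.spec
rpmbuild -ba --define 'scl myorganization1' less.spec

In the first build, the %{?scl:...} blocks expand to nothing and %{pkg_name} falls back to %{name}, producing the conventional less package; in the second, the Name tag expands to the prefixed myorganization1-less and the Software Collection root and macro definitions take effect.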
Appendix A. Component Versions | Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7.7 release. Table A.1. Component Versions: kernel 3.10.0-1062; kernel-alt 4.14.0-115; QLogic qla2xxx driver 10.00.00.12.07.7-k; QLogic qla4xxx driver 5.04.00.00.07.02-k0; Emulex lpfc driver 0:12.0.0.10; iSCSI initiator utils ( iscsi-initiator-utils ) 6.2.0.874-11; DM-Multipath ( device-mapper-multipath ) 0.4.9-127; LVM ( lvm2 ) 2.02.185-2; qemu-kvm [a] 1.5.3-167; qemu-kvm-ma [b] 2.12.0-18. [a] The qemu-kvm packages provide KVM virtualization on AMD64 and Intel 64 systems. [b] The qemu-kvm-ma packages provide KVM virtualization on IBM POWER8, IBM POWER9, and IBM Z. Note that KVM virtualization on IBM POWER9 and IBM Z also requires using the kernel-alt packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.7_release_notes/component_versions
Registry | Registry OpenShift Container Platform 4.15 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>",
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule",
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local",
"oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: swift: container: <container-id>",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc policy add-role-to-user registry-viewer <user_name>",
"oc policy add-role-to-user registry-editor <user_name>",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull <name.io>/<image>",
"sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"oc get pods -n openshift-image-registry",
"NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m",
"oc logs deployments/image-registry -n openshift-image-registry",
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF",
"oc adm policy add-cluster-role-to-user prometheus-scraper <username>",
"openshift: oc whoami -t",
"curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20",
"HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"sudo mv tls.crt /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust enable",
"sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1",
"oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/registry/index |
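Read in sequence, the route-exposure and login commands above form a short workflow. The following is a minimal sketch of that workflow, assuming a cluster-admin session; the image name example.com/myimage:latest is a hypothetical placeholder used only for illustration:

# Expose the integrated registry through the default route
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
# Trust the router CA so podman can verify the route certificate
oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm
sudo mv tls.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust enable
# Log in with the current session token, then tag and push a test image
sudo podman login -u kubeadmin -p $(oc whoami -t) $HOST
sudo podman tag example.com/myimage:latest $HOST/openshift/myimage:latest
sudo podman push $HOST/openshift/myimage:latest

Pushing through the external route is interchangeable with pushing to the internal service name image-registry.openshift-image-registry.svc:5000 used earlier in this procedure.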
Chapter 382. Zendesk Component | Chapter 382. Zendesk Component Available as of Camel version 2.19 The Zendesk component provides access to all of the zendesk.com APIs accessible using zendesk-java-client . It allows producing messages to manage Zendesk ticket, user, organization, etc. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-zendesk</artifactId> <version>USD{camel-version}</version> </dependency> 382.1. Zendesk Options The Zendesk component supports 3 options, which are listed below. Name Description Default Type configuration (common) To use the shared configuration ZendeskConfiguration zendesk (advanced) To use a shared Zendesk instance. Zendesk resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Zendesk endpoint is configured using URI syntax: with the following path and query parameters: 382.1.1. Path Parameters (1 parameters): Name Description Default Type methodName Required What operation to use String 382.1.2. Query Parameters (10 parameters): Name Description Default Type inBody (common) Sets the name of a parameter to be passed in the exchange In Body String serverUrl (common) The server URL to connect. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean oauthToken (security) The OAuth token. String password (security) The password. String token (security) The security token. String username (security) The user name. String 382.2. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. Name Description Default Type camel.component.zendesk.configuration.method-name What operation to use String camel.component.zendesk.configuration.oauth-token The OAuth token. String camel.component.zendesk.configuration.password The password. String camel.component.zendesk.configuration.server-url The server URL to connect. String camel.component.zendesk.configuration.token The security token. String camel.component.zendesk.configuration.username The user name. String camel.component.zendesk.enabled Enable zendesk component true Boolean camel.component.zendesk.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.zendesk.zendesk To use a shared Zendesk instance. 
The option is a org.zendesk.client.v2.Zendesk type. String 382.3. URI format zendesk://endpoint?[options] 382.4. Producer Endpoints: Producer endpoints can use endpoint names and associated options described . 382.5. Consumer Endpoints: Any of the producer endpoints can be used as a consumer endpoint. Consumer endpoints can use Scheduled Poll Consumer Options with a consumer. prefix to schedule endpoint invocation. Consumer endpoints that return an array or collection will generate one exchange per element, and their routes will be executed once for each exchange. 382.6. Message header Any of the options can be provided in a message header for producer endpoints with CamelZendesk. prefix. In principal, parameter names are same as the arugument name of each API methods on the original org.zendesk.client.v2.Zendesk class. However some of them are renamed to the other name to avoid confliction in the camel API component framework. To see actual parameter name, please check org.apache.camel.component.zendesk.internal.ZendeskApiMethod . 382.7. Message body All result message bodies utilize objects provided by the Zendesk Java Client. Producer endpoints can specify the option name for incoming message body in the inBody endpoint parameter. | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-zendesk</artifactId> <version>USD{camel-version}</version> </dependency>",
"zendesk:methodName",
"zendesk://endpoint?[options]"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/zendesk-component |
function::get_sa_flags | function::get_sa_flags Name function::get_sa_flags - Returns the numeric value of sa_flags Synopsis Arguments act address of the sigaction to query. | [
"get_sa_flags:long(act:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-get-sa-flags |
Chapter 6. Installing the Migration Toolkit for Containers | Chapter 6. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4. After you install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.12 by using the Operator Lifecycle Manager, you manually install the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 6.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 6.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 6.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. 
Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi8 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 6.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.12 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.12 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 6.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. 
For OpenShift Container Platform 4.2 to 4.12, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 6.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 6.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 6.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 6.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 6.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. 
If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 6.4.2.1. NetworkPolicy configuration 6.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 6.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 6.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 6.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 6.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 6.2. 
Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 6.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 6.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The following storage providers are supported: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 6.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 6.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint in order to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Data Foundation. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 
6.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 6.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 6.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` Create a service principal with the Contributor role, assigning a specific --role and --scopes : USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" \ --query 'password' -o tsv \ --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP` The CLI generates a password for you. Ensure you capture the password. After creating the service principal, obtain the client id. USD AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>` Note For this to be successful, you must know your Azure application ID. Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 6.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 6.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero') | [
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi8 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`",
"AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`",
"AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migrating_from_version_3_to_4/installing-3-4 |
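The AWS commands above can also be run as one short script once the AWS CLI is configured. The sketch below strings them together; the bucket and region values are placeholders, and it assumes the velero-policy.json file has already been written as shown earlier:

# Placeholders to replace with your own values
BUCKET=<your_bucket>
REGION=<your_region>
# Create the bucket; omit --create-bucket-configuration if the region is us-east-1
aws s3api create-bucket --bucket $BUCKET --region $REGION --create-bucket-configuration LocationConstraint=$REGION
# Create a dedicated IAM user for Velero and attach the minimal policy
aws iam create-user --user-name velero
aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json
# Record the AccessKeyId and SecretAccessKey from the output for use as the replication repository credentials
aws iam create-access-key --user-name velero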
1.4. Cluster Configuration Considerations | 1.4. Cluster Configuration Considerations When configuring a Red Hat High Availability Add-On cluster, you must take the following considerations into account: Red Hat does not support cluster deployments greater than 16 full cluster nodes. It is possible, however, to scale beyond that limit with remote nodes running the pacemaker_remote service. For information on the pacemaker_remote service, see Section 8.4, "The pacemaker_remote Service" . The use of Dynamic Host Configuration Protocol (DHCP) for obtaining an IP address on a network interface that is utilized by the corosync daemons is not supported. The DHCP client can periodically remove and re-add an IP address to its assigned interface during address renewal. This will result in corosync detecting a connection failure, which will result in fencing activity from any other nodes in the cluster using corosync for heartbeat connectivity. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-configconsider-haar |
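Because DHCP is not supported on interfaces used by the corosync daemons, those interfaces are normally given static addresses. A minimal sketch of such a configuration on RHEL 6 follows; the interface name and addresses are examples only:

# /etc/sysconfig/network-scripts/ifcfg-eth1 (example interface reserved for cluster traffic)
DEVICE=eth1
BOOTPROTO=none        # static addressing instead of DHCP
ONBOOT=yes
IPADDR=192.168.10.11  # example cluster interconnect address
NETMASK=255.255.255.0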
Chapter 3. Bug fixes | Chapter 3. Bug fixes In this release of Red Hat Trusted Profile Analyzer (RHTPA), we fixed the following bugs. In addition to these fixes, we list the descriptions of previously known issues found in earlier versions that we fixed. The bombastic-collector does not handle special characters in the id field Before this update, uploading a software bill of materials (SBOM) file that contains special characters in the id field fails to ingest properly when running RHTPA on Amazon Web Services (AWS) infrastructure. This was causing missing data on the vulnerabilities page. With this release, you can now use special characters in the id field before uploading the SBOM. The collector-osv fails to ingest vulnerabilities with a CVSS_V4 severity Before this update, vulnerability data available from the OpenSource Vulnerability (OSV) service fails to associate vulnerabilities with a CVSS_V4 score to the packages that they impact. Because of this, fewer vulnerabilities might be associated to packages and software bill of materials (SBOM) that have been ingested into RHTPA. With this release, this issue has been fixed. Fixed a potential exploit for CVE-2024-21536 With this release, we updated the http-proxy-middleware component in RHTPA to a version that mitigates the vulnerability for CVE-2024-21536 . The v11y-walker job fails when ingesting CVEs The v11y-walker job would generate an error when the prefix configuration to ingest Common Vulnerabilities and Exposures (CVE) was not applied properly. The prefix configuration determines the range of CVEs to ingest. Because of the wrong range, this caused RHTPA to ingest unwanted CVEs. With this release, we fixed the CVE ingestion process to only match CVEs that use the supplied prefix configuration. Fixed a potential exploit for CVE-2024-21538 With this release, we updated the cross-spawn component in RHTPA to a version that mitigates the vulnerability for CVE-2024-21538 . A timeout error occurs when doing an SBOM bulk upload When doing a software bill of materials (SBOM) bulk upload, this causes the SBOM dashboard to fail when loading, giving a connection timeout error. With this release, we fixed the livenessProbe to use curl to connect to the appropriate endpoint. The initialDelaySeconds property for livenessProbe and readinessProbe are configurable Before this update, we had a hard-coded value of 2 seconds set on the initialDelaySeconds property for livenessProbe and readinessProbe . With this release, you can configure the initialDelaySeconds property in the RHTPA Helm values file. A partially ingested SBOM gives an error on the Vulnerabilities tab Uploading a software bill of materials (SBOM) file has many steps to complete during the ingesting process. Until this ingestion process finishes, viewing SBOM vulnerability information is inconsistent, and the page could display an error message, when no real error occurred. With this release, we removed this error message, and return an empty page on the Vulnerabilities tab. The guac-collectsub-pod-service pod is caught in an infinite restart loop Deploying RHTPA on Red Hat Enterprise Linux by using the Ansible Playbook would cause the health check to fail on the guac-collectsub-pod-service pod. This caused the pod to enter an infinite restart loop. With this release, we fixed the livenessProbe by enabling the correct API endpoint. 
Fixed a timeout issue when ingesting SBOMs for the dashboard charts When ingesting a software bill of materials (SBOM) file that has a large number of packages, and if those packages have many associated vulnerabilities, then the API call to retrieve the data for the dashboard charts would timeout. With this release, we made improvements to the API calls that give data to the dashboard charts, therefore populating the dashboard charts properly and in a timely manner. Fixed vulnerability information for Ubuntu-related CVEs When gathering Ubuntu-related Common Vulnerabilities and Exposures (CVE) information from the OpenSource Vulnerabilities (OSV) database would give the following error message: Failed to get vulnerability from OSV for CVE CVE2024-XXXXX . With this release, we fixed RHTPA with the full set of currently released Ubuntu versions. This allows for a successful ingestion of CVEs related to Ubuntu packages. Missing CVSS scores for some CVEs Some Common Vulnerabilities and Exposures (CVE) have elements in the metrics array, but have no corresponding Common Vulnerability Scoring System (CVSS) score. Not having the CVSS score limits the ability to query for data on CVEs. With this release, we do a check for a valid CVSS score within the elements in the metrics array, and properly display the CVE's CVSS score. Nested packages within a CycloneDX SBOM are not ingested We fixed a bug where only the main package gets ingested, but the nested packages do not. With this release, RHTPA correctly traverses a CycloneDX software bill of materials (SBOM) manifest file, and includes those nested packages in the database. Large SBOM manifest files generate an error when uploading When uploading a large software bill of materials (SBOM) manifest file to RHTPA, the index updates properly, but the database does not. We consider a large SBOM manifest file to be 90 MB in size, containing 70,000 packages. With this release, we fixed the issue with the database update. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1.2/html/release_notes/bug-fixes |
Part IV. Integrating Red Hat Process Automation Manager with Red Hat AMQ Streams | Part IV. Integrating Red Hat Process Automation Manager with Red Hat AMQ Streams As a developer, you can integrate Red Hat Process Automation Manager with Red Hat AMQ Streams or Apache Kafka. A business process can send and receive Kafka messages. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/assembly-integrating-amq-streams |
5.6. Resizing an Online Multipathed Device (RHEL 4.8 and later) | 5.6. Resizing an Online Multipathed Device (RHEL 4.8 and later) In systems running RHEL 4.8 and later, it is possible to resize a multipath device while it is online. This allows you to resize the device while it is open, such as when a file system is currently mounted. Use the following procedure to resize an online multipath device. Resize your physical device. Resize your paths. For SCSI devices, writing a 1 to the rescan file for the device causes the SCSI driver to rescan. You can use the following command: Resize your multipath device by running the multipath command: Your hardware setup may require that you temporarily take the actual storage offline in order to resize your physical device. If you take your storage offline and your multipath device is not set to queue when all paths are down, any I/O activity while your storage is offline will fail. You can work around this by executing the following command before taking your storage offline: After you resize your storage and take it back online, you must run the following command before resizing your paths:
"echo 1 > /sys/block/ device_name /device/rescan",
"multipath",
"dmsetup suspend --noflush device_name",
"dmsetup resume device_name"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/multipath_resize |
Chapter 7. Verifying the post-upgrade state of the RHEL 8 system | Chapter 7. Verifying the post-upgrade state of the RHEL 8 system This procedure lists the verification steps recommended after an in-place upgrade to RHEL 8. Prerequisites The system has been upgraded following the steps described in Performing the upgrade from RHEL 7 to RHEL 8 and you have been able to log in to RHEL 8. Procedure After the upgrade completes, determine whether the system is in the required state by performing at least the following checks: Verify that the current OS version is Red Hat Enterprise Linux 8: Replace target_os_version with the target OS version, for example 8.10. Check the OS kernel version: The target_os should be either 8 or the target OS version, for example 8_10 . Note that .el8 is important and the version should not be earlier than 4.18.0-305. If you are using the Red Hat Subscription Manager: Verify that the correct product is installed: Replace target_os_version with the target OS version, for example 8.10. Verify that the release version is set to the target OS version immediately after the upgrade: Replace target_os_version with the target OS version, for example 8.10. Verify that network services are operational, for example, try to connect to a server using SSH. Check the post-upgrade status of your applications. In some cases, you may need to perform migration and configuration changes manually. For example, to migrate your databases, follow instructions in RHEL 8 Database servers documentation .
"cat /etc/redhat-release Red Hat Enterprise Linux release <target_os_version> (Ootpa)",
"uname -r 4.18.0-305.el <target_os> .x86_64",
"subscription-manager list --installed +-----------------------------------------+ Installed Product Status +-----------------------------------------+ Product Name: Red Hat Enterprise Linux for x86_64 Product ID: 479 Version: <target_os_version> Arch: x86_64 Status: Subscribed",
"subscription-manager release Release: <target_os_version>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_7_to_rhel_8/verifying-the-post-upgrade-state-of-the-rhel-8-system_upgrading-from-rhel-7-to-rhel-8 |
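The individual checks above can also be run together. The following is an unofficial convenience sketch, assuming a target version of 8.10, an x86_64 system, and registration with Red Hat Subscription Manager; adjust the expected values for your environment.
#!/bin/bash
# Hypothetical post-upgrade check script; expected values below are assumptions, not requirements.
expected="8.10"
# OS release check
grep "release ${expected}" /etc/redhat-release || echo "WARNING: unexpected OS release"
# Kernel check: the running kernel should be an .el8 build
uname -r | grep -q '\.el8' || echo "WARNING: running kernel is not an el8 build"
# Subscription Manager checks (only meaningful if the system is registered)
subscription-manager list --installed | grep -q "Red Hat Enterprise Linux for x86_64" || echo "WARNING: RHEL product not reported as installed"
subscription-manager release | grep -q "${expected}" || echo "WARNING: release is not set to ${expected}"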
2.4.4. Changing port numbers | 2.4.4. Changing port numbers Depending on policy configuration, services may only be allowed to run on certain port numbers. Attempting to change the port a service runs on without changing policy may result in the service failing to start. Run the semanage port -l | grep -w "http_port_t" command as the root user to list the ports SELinux allows httpd to listen on: By default, SELinux allows http to listen on TCP ports 80, 443, 488, 8008, 8009, or 8443. If /etc/httpd/conf/httpd.conf is configured so that httpd listens on any port not listed for http_port_t , httpd fails to start. To configure httpd to run on a port other than TCP ports 80, 443, 488, 8008, 8009, or 8443: Edit /etc/httpd/conf/httpd.conf as the root user so the Listen option lists a port that is not configured in SELinux policy for httpd . The following example configures httpd to listen on the 10.0.0.1 IP address, and on TCP port 12345: Run the semanage port -a -t http_port_t -p tcp 12345 command as the root user to add the port to SELinux policy configuration. Run the semanage port -l | grep -w http_port_t command as the root user to confirm the port is added: If you no longer run httpd on port 12345, run the semanage port -d -t http_port_t -p tcp 12345 command as the root user to remove the port from policy configuration. | [
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 80, 443, 488, 8008, 8009, 8443",
"Change this to Listen on specific IP addresses as shown below to prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # #Listen 12.34.56.78:80 Listen 10.0.0.1:12345",
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 12345, 80, 443, 488, 8008, 8009, 8443"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-configuration_examples-changing_port_numbers |
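To confirm the change end to end, you can restart httpd and check that it is listening on the new port. The following commands are a sketch only; the 10.0.0.1 address and port 12345 come from the example above, and on Red Hat Enterprise Linux 6 services are managed with the service command. Run them as root.
# Restart httpd so it picks up the new Listen directive
service httpd restart
# Confirm httpd is listening on the new port
netstat -tlnp | grep :12345
# Optionally fetch a page to verify end-to-end connectivity
curl http://10.0.0.1:12345/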
Release notes | Release notes Red Hat Enterprise Linux AI 1.1 Red Hat Enterprise Linux AI release notes Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/release_notes/index |
Chapter 13. Troubleshooting monitoring issues | Chapter 13. Troubleshooting monitoring issues 13.1. Investigating why user-defined metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined workloads. You have created the user-workload-monitoring-config ConfigMap object. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: $ oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels app label in the ServiceMonitor resource configuration matches the label output in the preceding step: $ oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: $ oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : $ oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is an issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your project in the Prometheus UI directly. Establish port-forwarding to the Prometheus instance in the openshift-user-workload-monitoring project: $ oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090 Open http://localhost:9090/targets in a web browser and review the status of the target for your project directly in the Prometheus UI. Check for error messages relating to the target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project.
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: $ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug Save the file to apply the changes. Note The prometheus-operator in the openshift-user-workload-monitoring project restarts automatically when you apply the log-level change. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: $ oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: $ oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Creating a user-defined workload monitoring config map See Specifying how a service is monitored for details on how to create a ServiceMonitor or PodMonitor resource 13.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk space: Check the number of scrape samples that are being collected. Check the time series database (TSDB) status in the Prometheus UI for more information on which labels are creating the most time series. This requires cluster administrator privileges. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Run the following Prometheus Query Language (PromQL) query in the Expression field.
This returns the ten metrics that have the highest number of scrape samples: topk(10,count by (job)({__name__=~".+"})) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts. If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . Check the TSDB status in the Prometheus UI. In the Administrator perspective, navigate to Networking Routes . Select the openshift-monitoring project in the Project list. Select the URL in the prometheus-k8s row to open the login page for the Prometheus UI. Choose Log in with OpenShift to log in using your OpenShift Container Platform credentials. In the Prometheus UI, navigate to Status TSDB Status . Additional resources See Setting a scrape sample limit for user-defined projects for details on how to set a scrape sample limit and create related alerting rules Submitting a support case | [
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10,count by (job)({__name__=~\".+\"}))"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/monitoring/troubleshooting-monitoring-issues |
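When comparing the service and ServiceMonitor labels, you can avoid reading full YAML output by querying only the relevant fields. The following jsonpath queries are a convenience sketch that reuses the prometheus-example-app example from this procedure; the two outputs should contain the same app label.
$ oc -n ns1 get service prometheus-example-app -o jsonpath='{.metadata.labels}{"\n"}'
$ oc -n ns1 get servicemonitor prometheus-example-monitor -o jsonpath='{.spec.selector.matchLabels}{"\n"}'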
Chapter 8. Installation configuration parameters for IBM Cloud | Chapter 8. Installation configuration parameters for IBM Cloud Before you deploy an OpenShift Container Platform cluster on IBM Cloud(R), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 8.1. Available installation configuration parameters for IBM Cloud The following tables specify the required, optional, and IBM Cloud-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 8.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. 
The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . If you are deploying the cluster to an existing Virtual Private Cloud (VPC), the CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 8.1.4. Additional IBM Cloud configuration parameters Additional IBM Cloud(R) configuration parameters are described in the following table: Table 8.4. Additional IBM Cloud(R) parameters Parameter Description Values The name of an existing resource group. By default, an installer-provisioned VPC and cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. If you are deploying the cluster into an existing VPC, the installer-provisioned cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. The VPC resources that you have provisioned must exist in a resource group that you specify using the networkResourceGroupName parameter. In either case, this resource group must only be used for a single cluster installation, as the cluster components assume ownership of all of the resources in the resource group. [ 1 ] String, for example existing_resource_group . The name of an existing resource group. This resource contains the existing VPC and subnets to which the cluster will be deployed. This parameter is required when deploying the cluster to a VPC that you have provisioned. String, for example existing_network_resource_group . The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud(R) dedicated host profile, such as cx2-host-152x304 . [ 2 ] An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . The instance type for all IBM Cloud(R) machines. Valid IBM Cloud(R) instance type, such as bx2-8x32 . [ 2 ] The name of the existing VPC that you want to deploy your cluster to. String. The name(s) of the existing subnet(s) in your VPC that you want to deploy your control plane machines to. 
Specify a subnet for each availability zone. String array The name(s) of the existing subnet(s) in your VPC that you want to deploy your compute machines to. Specify a subnet for each availability zone. Subnet IDs are not supported. String array Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer-provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM(R) documentation. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"platform: ibmcloud: resourceGroupName:",
"platform: ibmcloud: networkResourceGroupName:",
"platform: ibmcloud: dedicatedHosts: profile:",
"platform: ibmcloud: dedicatedHosts: name:",
"platform: ibmcloud: type:",
"platform: ibmcloud: vpcName:",
"platform: ibmcloud: controlPlaneSubnets:",
"platform: ibmcloud: computeSubnets:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_cloud/installation-config-parameters-ibm-cloud-vpc |
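To show how these parameters fit together, the following is a minimal, hypothetical install-config.yaml sketch that uses only fields described in this chapter. All values are placeholders; a real configuration also needs values not covered in this excerpt (for example, the IBM Cloud region) plus your own pull secret and SSH key.
# Write a placeholder install-config.yaml (sketch only; not a complete, installable configuration)
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
controlPlane:
  name: master
  platform:
    ibmcloud: {}
  replicas: 3
compute:
- name: worker
  platform:
    ibmcloud: {}
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  ibmcloud:
    resourceGroupName: existing_resource_group
publish: External
pullSecret: '<your_pull_secret>'
sshKey: 'ssh-ed25519 AAAA...'
EOF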
probe::scsi.set_state | probe::scsi.set_state Name probe::scsi.set_state - Order SCSI device state change Synopsis scsi.set_state Values state The new state of the device old_state The current state of the device dev_id The scsi device id state_str The new state of the device, as a string old_state_str The current state of the device, as a string lun The lun number channel The channel number host_no The host number | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scsi-set-state |
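As a usage sketch (not part of the original reference), the probe can be attached from the command line with stap, assuming the systemtap package and matching kernel debuginfo are installed; it prints each state transition using the values listed above.
# Print every SCSI device state change (run as root)
stap -e 'probe scsi.set_state { printf("%d:%d:%d:%d %s -> %s\n", host_no, channel, dev_id, lun, old_state_str, state_str) }'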
Chapter 7. Checking for Local Storage Operator deployments | Chapter 7. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operators, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power | [
"oc get pvc -n openshift-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/checking-for-local-storage-operator-deployments_rhodf |
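If you prefer not to scan the full table, you can print only the PVC names and their storage classes; a cluster deployed with the Local Storage Operator shows localblock for the ocs-deviceset PVCs. This one-liner is a convenience sketch, not part of the official procedure.
$ oc -n openshift-storage get pvc -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName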
Chapter 5. Other ways to create Eclipse Vert.x projects | Chapter 5. Other ways to create Eclipse Vert.x projects This section shows the different ways in which you can create Eclipse Vert.x projects. 5.1. Creating an Eclipse Vert.x project on the command line You can use the Eclipse Vert.x Maven plug-in on the command line to create an Eclipse Vert.x project. You can specify the attributes and values on the command line. Prerequisites OpenJDK 8 or OpenJDK 11 is installed. Maven 3 or higher is installed. A text editor or IDE is available. Curl or HTTPie or a browser to perform HTTP requests is available. Procedure In a command terminal, enter the following command to verify that Maven is using OpenJDK 8 or OpenJDK 11 and the Maven version is 3.6.0 or higher: mvn --version If the preceding command does not return OpenJDK 8 or OpenJDK 11, add the path to OpenJDK 8 or OpenJDK 11 to the PATH environment variable and enter the command again. Create a directory and go to the directory location. Use the following command to create a new project using the Eclipse Vert.x Maven plug-in. mvn io.reactiverse:vertx-maven-plugin:${vertx-maven-plugin-version}:setup -DvertxBom=vertx-dependencies \ -DvertxVersion=${vertx_version} \ -DprojectGroupId=${project_group_id} \ -DprojectArtifactId=${project_artifact_id} \ -DprojectVersion=${project-version} \ -Dverticle=${verticle_class} \ -Ddependencies=${dependency_names} The following example shows you how you can create an Eclipse Vert.x application using the command explained above. mvn io.reactiverse:vertx-maven-plugin:1.0.24:setup -DvertxBom=vertx-dependencies \ -DvertxVersion=4.3.7.redhat-00002 \ -DprojectGroupId=io.vertx.myapp \ -DprojectArtifactId=my-new-project \ -DprojectVersion=1.0-SNAPSHOT \ -Dverticle=io.vertx.myapp.MainVerticle \ -Ddependencies=web The following table lists the attributes that you can define with the setup command: Attribute Default Value Description vertx_version The version of Eclipse Vert.x. The version of Eclipse Vert.x you want to use in your project. project_group_id io.vertx.example A unique identifier of your project. project_artifact_id my-vertx-project The name of your project and your project directory. If you do not specify the project_artifact_id , the Maven plug-in starts the interactive mode. If the directory already exists, the generation fails. project-version 1.0-SNAPSHOT The version of your project. verticle_class io.vertx.example.MainVerticle The new verticle class file created by the verticle parameter. dependency_names Optional parameter The list of dependencies you want to add to your project, separated by commas. You can also use the following syntax to configure the dependencies: groupId:artifactId:version:classifier For example: - To inherit the version from BOM use the following syntax: io.vertx:vertxcodetrans - To specify a dependency use the following syntax: commons-io:commons-io:2.5 - To specify a dependency with a classifier use the following syntax: io.vertx:vertx-template-engines:3.4.1:shaded The command creates an empty Eclipse Vert.x project with the following artifacts in the getting-started directory: The Maven build descriptor pom.xml configured to build and run your application Example verticle in the src/main/java folder In the pom.xml file, specify the repositories that contain the Eclipse Vert.x artifacts to build your application.
Alternatively, you can configure the Maven repository to specify the build artifacts in the settings.xml file. See the section Configuring the Apache Maven repository for your Eclipse Vert.x projects , for more information. Use the Eclipse Vert.x project as a template to create your own application. Build the application using Maven from the root directory of the application. mvn package Run the application using Maven from the root directory of the application. mvn vertx:run 5.2. Creating an Eclipse Vert.x project using the community Vert.x starter You can use the community Vert.x starter to create an Eclipse Vert.x project. The starter creates a community project. You will have to convert the community project to a Red Hat build of Eclipse Vert.x project. Prerequisites OpenJDK 8 or OpenJDK 11 is installed. Maven 3 or higher is installed. A text editor or IDE is available. Curl or HTTPie or a browser to perform HTTP requests is available. Procedure In a command terminal, enter the following command to verify that Maven is using OpenJDK 8 or OpenJDK 11 and the Maven version is 3.6.0 or higher: mvn --version If the preceding command does not return OpenJDK 8 or OpenJDK 11, add the path to OpenJDK 8 or OpenJDK 11 to the PATH environment variable and enter the command again. Go to Vert.x Starter . Select the Version of Eclipse Vert.x. Select Java as the language. Select Maven as the build tool. Enter a Group Id , which is a unique identifier of your project. For this procedure, keep the default, com.example . Enter an Artifact Id , which is the name of your project and your project directory. For this procedure, keep the default, starter . Specify the dependencies you want to add to your project. For this procedure, add the Vert.x Web dependency either by typing it in the Dependencies text box or by selecting it from the list of Dependencies . Click Advanced options to select the OpenJDK version. For this procedure, keep the default, JDK 11 . Click Generate Project . The starter.zip file containing the artifacts for the Eclipse Vert.x project is downloaded. Create a directory getting-started . Extract the contents of the ZIP file to the getting-started folder. The Vert.x Starter creates an Eclipse Vert.x project with the following artifacts: Maven build descriptor pom.xml file. The file has configurations to build and run your application. Example verticle in the src/main/java folder. Sample test using JUnit 5 in the src/test/java folder. Editor configuration to enforce code style. Git configuration to ignore files. To convert the community project to a Red Hat build of Eclipse Vert.x project, replace the following values in the pom.xml file: vertx.version - Specify the Eclipse Vert.x version you want to use. For example, if you want to use the Eclipse Vert.x 4.3.7 version, specify the version as 4.3.7.redhat-00002. vertx-stack-depchain - Replace this dependency with vertx-dependencies . Specify the repositories that contain the Eclipse Vert.x artifacts to build your application in the pom.xml file. <repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> Alternatively, you can configure the Maven repository to specify the build artifacts in the settings.xml file.
See the section Configuring the Apache Maven repository for your Eclipse Vert.x projects , for more information. Use the Eclipse Vert.x project as a template to create your own application. Build the application using Maven from the root directory of the application. mvn package Run the application using Maven from the root directory of the application. mvn exec:java Verify that the application is running. Use curl or your browser to verify that your application is running at http://localhost:8888 and returns "Hello from Vert.x!" as the response. $ curl http://localhost:8888 Hello from Vert.x! | [
"mvn --version",
"mkdir getting-started && cd getting-started",
"mvn io.reactiverse:vertx-maven-plugin:USD{vertx-maven-plugin-version}:setup -DvertxBom=vertx-dependencies -DvertxVersion=USD{vertx_version} -DprojectGroupId= USD{project_group_id} -DprojectArtifactId= USD{project_artifact_id} -DprojectVersion=USD{project-version} -Dverticle=USD{verticle_class} -Ddependencies=USD{dependency_names}",
"mvn io.reactiverse:vertx-maven-plugin:1.0.24:setup -DvertxBom=vertx-dependencies -DvertxVersion=4.3.7.redhat-00002 -DprojectGroupId=io.vertx.myapp -DprojectArtifactId=my-new-project -DprojectVersion=1.0-SNAPSHOT -DvertxVersion=4.3.7.redhat-00002 -Dverticle=io.vertx.myapp.MainVerticle -Ddependencies=web",
"<repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories>",
"mvn package",
"mvn vertx:run",
"mvn --version",
"<repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories>",
"mvn package",
"mvn exec:java",
"curl http://localhost:8888 Hello from Vert.x!"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/getting_started_with_eclipse_vert.x/other-ways-create-eclipse-vertx-project_vertx |
Managing, monitoring, and updating the kernel | Managing, monitoring, and updating the kernel Red Hat Enterprise Linux 9 A guide to managing the Linux kernel on Red Hat Enterprise Linux 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/index |
6.2. Preparing for a Driver Update During Installation | 6.2. Preparing for a Driver Update During Installation If a driver update is necessary and available for your hardware, Red Hat or a trusted third party such as the hardware vendor will typically provide it in the form of an image file in ISO format. Some methods of performing a driver update require you to make the image file available to the installation program, while others require you to use the image file to make a driver update disk: Methods that use the image file itself local hard drive USB flash drive Methods that use a driver update disk produced from an image file CD DVD Choose a method to provide the driver update, and refer to Section 6.2.1, "Preparing to Use a Driver Update Image File" , Section 6.2.2, "Preparing a Driver Disc" or Section 6.2.3, "Preparing an Initial RAM Disk Update" . Note that you can use a USB storage device either to provide an image file, or as a driver update disk. 6.2.1. Preparing to Use a Driver Update Image File 6.2.1.1. Preparing to use an image file on local storage To make the ISO image file available on local storage, such as a hard drive or USB flash drive, you must first determine whether you want to install the updates automatically or select them manually. For manual installations, copy the file onto the storage device. You can rename the file if you find it helpful to do so, but you must not change the filename extension, which must remain .iso . In the following example, the file is named dd.iso : Figure 6.1. Content of a USB flash drive holding a driver update image file Note that if you use this method, the storage device will contain only a single file. This differs from driver discs on formats such as CD and DVD, which contain many files. The ISO image file contains all of the files that would normally be on a driver disc. Refer to Section 6.3.2, "Let the Installer Prompt You for a Driver Update" and Section 6.3.3, "Use a Boot Option to Specify a Driver Update Disk" to learn how to select the driver update manually during installation. For automatic installations, you will need to extract the ISO to the root directory of the storage device rather than copy it. Copying the ISO is only effective for manual installations. You must also change the file system label of the device to OEMDRV . The installation program will then automatically examine the extracted ISO for driver updates and load any that it detects. This behavior is controlled by the dlabel=on boot option, which is enabled by default. Refer to Section 6.3.1, "Let the Installer Find a Driver Update Disk Automatically" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-preparing_for_a_driver_update_during_installation-x86 |
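As a sketch of the manual and automatic variants described above, the following commands assume the USB flash drive is /dev/sdb1 with a vfat file system and that the driver update image is named dd.iso; adjust the device name and labeling tool (for example, e2label for ext file systems) for your setup.
# Manual method: copy the image file onto the mounted USB flash drive
mkdir -p /mnt/usb /mnt/iso
mount /dev/sdb1 /mnt/usb
cp dd.iso /mnt/usb/
umount /mnt/usb

# Automatic method: extract the ISO contents to the root of the drive and
# set the file system label to OEMDRV so the installer detects it with dlabel=on
mount -o loop dd.iso /mnt/iso
mount /dev/sdb1 /mnt/usb
cp -r /mnt/iso/* /mnt/usb/
umount /mnt/usb /mnt/iso
dosfslabel /dev/sdb1 OEMDRV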
Chapter 17. Sharing a mount on multiple mount points | Chapter 17. Sharing a mount on multiple mount points As a system administrator, you can duplicate mount points to make the file systems accessible from multiple directories. 17.1. Types of shared mounts There are multiple types of shared mounts that you can use. The difference between them is what happens when you mount another file system under one of the shared mount points. The shared mounts are implemented using the shared subtrees functionality. The following mount types are available: private This type does not receive or forward any propagation events. When you mount another file system under either the duplicate or the original mount point, it is not reflected in the other. shared This type creates an exact replica of a given mount point. When a mount point is marked as a shared mount, any mount within the original mount point is reflected in it, and vice versa. This is the default mount type of the root file system. slave This type creates a limited duplicate of a given mount point. When a mount point is marked as a slave mount, any mount within the original mount point is reflected in it, but no mount within a slave mount is reflected in its original. unbindable This type prevents the given mount point from being duplicated whatsoever. Additional resources The Shared subtrees article on Linux Weekly News 17.2. Creating a private mount point duplicate Duplicate a mount point as a private mount. File systems that you later mount under the duplicate or the original mount point are not reflected in the other. Procedure Create a virtual file system (VFS) node from the original mount point: Mark the original mount point as private: Alternatively, to change the mount type for the selected mount point and all mount points under it, use the --make-rprivate option instead of --make-private . Create the duplicate: Example 17.1. Duplicating /media into /mnt as a private mount point Create a VFS node from the /media directory: Mark the /media directory as private: Create its duplicate in /mnt : It is now possible to verify that /media and /mnt share content but none of the mounts within /media appear in /mnt . For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, use: It is also possible to verify that file systems mounted in the /mnt directory are not reflected in /media . For example, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, use: Additional resources mount(8) man page on your system 17.3. Creating a shared mount point duplicate Duplicate a mount point as a shared mount. File systems that you later mount under the original directory or the duplicate are always reflected in the other. Procedure Create a virtual file system (VFS) node from the original mount point: Mark the original mount point as shared: Alternatively, to change the mount type for the selected mount point and all mount points under it, use the --make-rshared option instead of --make-shared . Create the duplicate: Example 17.2. Duplicating /media into /mnt as a shared mount point To make the /media and /mnt directories share the same content: Create a VFS node from the /media directory: Mark the /media directory as shared: Create its duplicate in /mnt : It is now possible to verify that a mount within /media also appears in /mnt . 
For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, use: Similarly, it is possible to verify that any file system mounted in the /mnt directory is reflected in /media . For example, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, use: Additional resources mount(8) man page on your system 17.4. Creating a slave mount point duplicate Duplicate a mount point as a slave mount type. File systems that you later mount under the original mount point are reflected in the duplicate but not the other way around. Procedure Create a virtual file system (VFS) node from the original mount point: Mark the original mount point as shared: Alternatively, to change the mount type for the selected mount point and all mount points under it, use the --make-rshared option instead of --make-shared . Create the duplicate and mark it as the slave type: Example 17.3. Duplicating /media into /mnt as a slave mount point This example shows how to get the content of the /media directory to appear in /mnt as well, but without any mounts in the /mnt directory to be reflected in /media . Create a VFS node from the /media directory: Mark the /media directory as shared: Create its duplicate in /mnt and mark it as slave : Verify that a mount within /media also appears in /mnt . For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, use: Also verify that file systems mounted in the /mnt directory are not reflected in /media . For example, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, use: Additional resources mount(8) man page on your system 17.5. Preventing a mount point from being duplicated Mark a mount point as unbindable so that it is not possible to duplicate it in another mount point. Procedure To change the type of a mount point to an unbindable mount, use: Alternatively, to change the mount type for the selected mount point and all mount points under it, use the --make-runbindable option instead of --make-unbindable . Any subsequent attempt to make a duplicate of this mount fails with the following error: Example 17.4. Preventing /media from being duplicated To prevent the /media directory from being shared, use: Additional resources mount(8) man page on your system | [
"mount --bind original-dir original-dir",
"mount --make-private original-dir",
"mount --bind original-dir duplicate-dir",
"mount --bind /media /media",
"mount --make-private /media",
"mount --bind /media /mnt",
"mount /dev/cdrom /media/cdrom ls /media/cdrom EFI GPL isolinux LiveOS ls /mnt/cdrom #",
"mount /dev/sdc1 /mnt/flashdisk ls /media/flashdisk ls /mnt/flashdisk en-US publican.cfg",
"mount --bind original-dir original-dir",
"mount --make-shared original-dir",
"mount --bind original-dir duplicate-dir",
"mount --bind /media /media",
"mount --make-shared /media",
"mount --bind /media /mnt",
"mount /dev/cdrom /media/cdrom ls /media/cdrom EFI GPL isolinux LiveOS ls /mnt/cdrom EFI GPL isolinux LiveOS",
"mount /dev/sdc1 /mnt/flashdisk ls /media/flashdisk en-US publican.cfg ls /mnt/flashdisk en-US publican.cfg",
"mount --bind original-dir original-dir",
"mount --make-shared original-dir",
"mount --bind original-dir duplicate-dir mount --make-slave duplicate-dir",
"mount --bind /media /media",
"mount --make-shared /media",
"mount --bind /media /mnt mount --make-slave /mnt",
"mount /dev/cdrom /media/cdrom ls /media/cdrom EFI GPL isolinux LiveOS ls /mnt/cdrom EFI GPL isolinux LiveOS",
"mount /dev/sdc1 /mnt/flashdisk ls /media/flashdisk ls /mnt/flashdisk en-US publican.cfg",
"mount --bind mount-point mount-point mount --make-unbindable mount-point",
"mount --bind mount-point duplicate-dir mount: wrong fs type, bad option, bad superblock on mount-point , missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so",
"mount --bind /media /media mount --make-unbindable /media"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/sharing-a-mount-on-multiple-mount-points_managing-file-systems |
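To inspect which propagation type a mount point currently has, findmnt can print the propagation flags directly; this is a convenience check and is not part of the original procedures.
# Show the propagation flags (shared, private, slave, unbindable) for the duplicated mount points
findmnt -o TARGET,PROPAGATION /media
findmnt -o TARGET,PROPAGATION /mnt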
4.2. Networking | 4.2. Networking Mellanox SR-IOV Support Single Root I/O Virtualization (SR-IOV) is now supported as a Technology Preview in the Mellanox libmlx4 library and the following drivers: mlx4_core mlx4_ib (InfiniBand protocol) mlx4_en (Ethernet protocol) Package: kernel-2.6.32-335 Open multicast ping (Omping), BZ# 657370 Open Multicast Ping (Omping) is a tool to test IP multicast functionality, primarily in the local network. This utility allows users to test IP multicast functionality and assists in diagnosing whether an issue is in the network configuration or elsewhere (that is, a bug). In Red Hat Enterprise Linux 6, Omping is provided as a Technology Preview. Package: omping-0.0.4-1 QFQ queuing discipline In Red Hat Enterprise Linux 6, the tc utility has been updated to work with the Quick Fair Scheduler (QFQ) kernel features. Users can now take advantage of the new QFQ traffic queuing discipline from userspace. This feature is considered a Technology Preview. Package: kernel-2.6.32-431 vios-proxy, BZ# 721119 vios-proxy is a stream-socket proxy for providing connectivity between a client on a virtual guest and a server on a Hypervisor host. Communication occurs over virtio-serial links. Package: vios-proxy-0.2-1 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/networking_tp
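A typical way to use Omping (a sketch with placeholder addresses, not from the original release note) is to run the same command concurrently on every node you want to test, listing the addresses of all participating nodes; each instance then reports the unicast and multicast responses it receives from its peers.
# Run on each of the three nodes at roughly the same time (addresses are examples)
omping 192.168.1.10 192.168.1.11 192.168.1.12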
Chapter 5. Backing up the Original Manager | Chapter 5. Backing up the Original Manager Back up the original Manager using the engine-backup command, and copy the backup file to a separate location so that it can be accessed at any point during the process. For more information about engine-backup --mode=backup options, see Backing Up and Restoring the Red Hat Virtualization Manager in the Administration Guide . Procedure Log in to the original Manager and stop the ovirt-engine service: Note Though stopping the original Manager from running is not obligatory, it is recommended as it ensures no changes are made to the environment after the backup is created. Additionally, it prevents the original Manager and the new Manager from simultaneously managing existing resources. Run the engine-backup command, specifying the name of the backup file to create, and the name of the log file to create to store the backup log: Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that will store the backup until it is needed, and /backup/ is any designated folder or path. If you do not require the Manager machine for other purposes, unregister it from Red Hat Subscription Manager: After backing up the Manager, deploy a new self-hosted engine and restore the backup on the new virtual machine. | [
"systemctl stop ovirt-engine systemctl disable ovirt-engine",
"engine-backup --mode=backup --file= file_name --log= log_file_name",
"scp -p file_name log_file_name storage.example.com:/backup/",
"subscription-manager unregister"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/backing_up_the_original_manager_migrating_to_she |
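An optional integrity check, not part of the documented procedure, using the same placeholder names as the commands above:
# Compare checksums before and after copying the backup to the storage server.
sha256sum file_name log_file_name
ssh storage.example.com "sha256sum /backup/file_name /backup/log_file_name"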
Chapter 4. Clustering | Chapter 4. Clustering systemd and pacemaker now coordinate correctly during system shutdown Previously, systemd and pacemaker did not coordinate correctly during system shutdown, which caused pacemaker resources not to be terminated properly. With this update, pacemaker is ordered to stop before dbus and other systemd services that pacemaker started. This allows both pacemaker and the resources that pacemaker manages to shut down properly. The pcs resource move and pcs resource ban commands now display a warning message to clarify the commands' behavior The pcs resource move and pcs resource ban commands create location constraints that effectively ban the resource from running on the current node until the constraint is removed or until the constraint lifetime expires. This behavior had previously not been clear to users. These commands now display a warning message explaining this behavior, and the help screens and documentation for these commands have been clarified. New command to move a Pacemaker resource to its preferred node After a Pacemaker resource has moved, either due to a failover or to an administrator manually moving the node, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. You can also use the pcs resource relocate show command to display migrated resources. For information on these commands, see the High Availability Add-On Reference. Simplified method for configuring fencing for redundant power supplies in a cluster When configuring fencing for redundant power supplies, you must ensure that, when the power supplies are rebooted, both power supplies are turned off before either power supply is turned back on. If the node never completely loses power, the node may not release its resources. This opens up the possibility of nodes accessing these resources simultaneously and corrupting them. Prior to Red Hat Enterprise Linux 7.2, you needed to explicitly configure different versions of the devices which used either the 'on' or 'off' actions. Since Red Hat Enterprise Linux 7.2, it is now only required to define each device once and to specify that both are required to fence the node. For information on configuring fencing for redundant power supplies, see the Fencing: Configuring STONITH chapter of the High Availability Add-On Reference manual. New --port-as-ip option for fencing agents Fence agents used only with single devices required complex configuration in pacemaker. It is now possible to use the --port-as-ip option to enter the IP address in the port option. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/clustering
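A minimal sketch of the workflow described above; the resource and node names (my-resource, node2) are placeholders:
# Move a resource, inspect the location constraint that the move created,
# then clear it so the resource is free to move again.
pcs resource move my-resource node2
pcs constraint location show
pcs resource clear my-resource
# Relocate resources back to their preferred nodes and list migrated resources.
pcs resource relocate show
pcs resource relocate run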
9.8. LOB Considerations | 9.8. LOB Considerations Although, you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 10.1, "JBoss Data Virtualization Settings" ), this section provides some additional information about those settings related to large objects (LOBs). lob-chunk-size-in-kb LOBs and XML documents are streamed from the JBoss Data Virtualization server to the JDBC API. Normally, these values are not materialized in the server memory, avoiding potential out-of-memory issues. When using style sheets, or XQuery, whole XML documents must be materialized on the server. Even when using the XMLQuery or XMLTable functions and document projection is applied, memory issues may occur for large documents. LOBs are broken into pieces when being created and streamed. The maximum size of each piece when fetched by the client can be configured with the lob-chunk-size-in-kb property. The default value is 100. When dealing with extremely large LOBs, you may consider increasing lob-chunk-size-in-kb to decrease the amount of round-trips to stream the result. Setting the value too high may cause the server or client to have memory issues. Source LOB values are typically accessed by reference, rather than having the value copied to a temporary location. Thus care must be taken to ensure that source LOBs are returned in a memory-safe manner. This caution is more for the source driver vendors not to consume VM memory for LOBs. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/lob_considerations1 |
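A hedged sketch of adjusting the property from the Management CLI; the resource path (/subsystem=teiid) and the attribute name used here are assumptions that should be verified against Section 10.1 before use:
# Read, then raise, lob-chunk-size-in-kb via the Management CLI (paths are assumptions).
bin/jboss-cli.sh --connect --command="/subsystem=teiid:read-attribute(name=lob-chunk-size-in-kb)"
bin/jboss-cli.sh --connect --command="/subsystem=teiid:write-attribute(name=lob-chunk-size-in-kb,value=512)"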
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 6.6.0-2 Fri 7 Jul 2017 John Brier Added deprecation notice of NotifyingFuture. Revision 6.6.0-1 Tue 26 Jan 2016 Christian Huffman Added deprecation notice of Spring Framework. Revision 6.6.0-0 Thu 7 Jan 2016 Christian Huffman Initial Draft for 6.6.0. Included all Known and Resolved Issues. Added deprecation notes. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/appe-revision_history |
Chapter 5. Security realms | Chapter 5. Security realms Security realms integrate Data Grid Server deployments with the network protocols and infrastructure in your environment that control access and verify user identities. 5.1. Creating security realms Add security realms to Data Grid Server configuration to control access to deployments. You can add one or more security realms to your configuration. Note When you add security realms to your configuration, Data Grid Server automatically enables the matching authentication mechanisms for the Hot Rod and REST endpoints. Prerequisites Add socket bindings to your Data Grid Server configuration as required. Create keystores, or have a PEM file, to configure the security realm with TLS/SSL encryption. Data Grid Server can also generate keystores at startup. Provision the resources or services that the security realm configuration relies on. For example, if you add a token realm, you need to provision OAuth services. This procedure demonstrates how to configure multiple property realms. Before you begin, you need to create properties files that add users and assign permissions with the Command Line Interface (CLI). Use the user create commands as follows: Tip Run user create --help for examples and more information. Note Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster. Procedure Open your Data Grid Server configuration for editing. Use the security-realms element in the security configuration to create multiple security realms. Add a security realm with the security-realm element and give it a unique name with the name attribute. To follow the example, create one security realm named application-realm and another named management-realm . Provide the TLS/SSL identity for Data Grid Server with the server-identities element and configure a keystore as required. Specify the type of security realm by adding one of the following elements or fields: properties-realm ldap-realm token-realm truststore-realm Specify properties for the type of security realm you are configuring as appropriate. To follow the example, specify the *.properties files you created with the CLI using the path attribute on the user-properties and group-properties elements or fields. If you add multiple different types of security realm to your configuration, include the distributed-realm element or field so that Data Grid Server uses the realms in combination with each other. Configure Data Grid Server endpoints to use the security realm with the security-realm attribute. Save the changes to your configuration.
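Because the CLI only writes credentials on the server you are connected to, a sketch of synchronizing the generated files to another node by hand; the remote host name and install path are placeholders, and the files are assumed to have been written to this node's server/conf directory:
# Copy the generated realm files from this node's server/conf directory to another node.
scp server/conf/application-users.properties server/conf/application-groups.properties otherhost:/opt/infinispan-server/server/conf/
scp server/conf/management-users.properties server/conf/management-groups.properties otherhost:/opt/infinispan-server/server/conf/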
Multiple property realms XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="application-realm"> <properties-realm groups-attribute="Roles"> <user-properties path="application-users.properties"/> <group-properties path="application-groups.properties"/> </properties-realm> </security-realm> <security-realm name="management-realm"> <properties-realm groups-attribute="Roles"> <user-properties path="management-users.properties"/> <group-properties path="management-groups.properties"/> </properties-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "management-realm", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "management-realm", "path": "management-users.properties" }, "group-properties": { "path": "management-groups.properties" } } }, { "name": "application-realm", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "application-realm", "path": "application-users.properties" }, "group-properties": { "path": "application-groups.properties" } } }] } } } YAML server: security: securityRealms: - name: "management-realm" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "management-realm" path: "management-users.properties" groupProperties: path: "management-groups.properties" - name: "application-realm" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "application-realm" path: "application-users.properties" groupProperties: path: "application-groups.properties" 5.2. Setting up Kerberos identities Add Kerberos identities to a security realm in your Data Grid Server configuration to use keytab files that contain service principal names and encrypted keys, derived from Kerberos passwords. Prerequisites Have Kerberos service account principals. Note keytab files can contain both user and service account principals. However, Data Grid Server uses service account principals only which means it can provide identity to clients and allow clients to authenticate with Kerberos servers. In most cases, you create unique principals for the Hot Rod and REST endpoints. For example, if you have a "datagrid" server in the "INFINISPAN.ORG" domain you should create the following service principals: hotrod/[email protected] identifies the Hot Rod service. HTTP/[email protected] identifies the REST service. Procedure Create keytab files for the Hot Rod and REST services. Linux Microsoft Windows Copy the keytab files to the server/conf directory of your Data Grid Server installation. Open your Data Grid Server configuration for editing. Add a server-identities definition to the Data Grid server security realm. Specify the location of keytab files that provide service principals to Hot Rod and REST connectors. Name the Kerberos service principals. Save the changes to your configuration. Kerberos identity configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="kerberos-realm"> <server-identities> <!-- Specifies a keytab file that provides a Kerberos identity. --> <!-- Names the Kerberos service principal for the Hot Rod endpoint. --> <!-- The required="true" attribute specifies that the keytab file must be present when the server starts. 
--> <kerberos keytab-path="hotrod.keytab" principal="hotrod/[email protected]" required="true"/> <!-- Specifies a keytab file and names the Kerberos service principal for the REST endpoint. --> <kerberos keytab-path="http.keytab" principal="HTTP/[email protected]" required="true"/> </server-identities> </security-realm> </security-realms> </security> <endpoints> <endpoint socket-binding="default" security-realm="kerberos-realm"> <hotrod-connector> <authentication> <sasl server-name="datagrid" server-principal="hotrod/[email protected]"/> </authentication> </hotrod-connector> <rest-connector> <authentication server-principal="HTTP/[email protected]"/> </rest-connector> </endpoint> </endpoints> </server> JSON { "server": { "security": { "security-realms": [{ "name": "kerberos-realm", "server-identities": [{ "kerberos": { "principal": "hotrod/[email protected]", "keytab-path": "hotrod.keytab", "required": true }, "kerberos": { "principal": "HTTP/[email protected]", "keytab-path": "http.keytab", "required": true } }] }] }, "endpoints": { "endpoint": { "socket-binding": "default", "security-realm": "kerberos-realm", "hotrod-connector": { "authentication": { "security-realm": "kerberos-realm", "sasl": { "server-name": "datagrid", "server-principal": "hotrod/[email protected]" } } }, "rest-connector": { "authentication": { "server-principal": "HTTP/[email protected]" } } } } } } YAML server: security: securityRealms: - name: "kerberos-realm" serverIdentities: - kerberos: principal: "hotrod/[email protected]" keytabPath: "hotrod.keytab" required: "true" - kerberos: principal: "HTTP/[email protected]" keytabPath: "http.keytab" required: "true" endpoints: endpoint: socketBinding: "default" securityRealm: "kerberos-realm" hotrodConnector: authentication: sasl: serverName: "datagrid" serverPrincipal: "hotrod/[email protected]" restConnector: authentication: securityRealm: "kerberos-realm" serverPrincipal" : "HTTP/[email protected]" 5.3. Property realms Property realms use property files to define users and groups. users.properties contains Data Grid user credentials. Passwords can be pre-digested with the DIGEST-MD5 and DIGEST authentication mechanisms. groups.properties associates users with roles and permissions. Note You can avoid authentication issues that relate to a property file by using the Data Grid CLI to enter the correct security realm name into the file. You can find the correct security realm name of your Data Grid Server by opening the infinispan.xml file and navigating to the <security-realm name> property. When you copy a property file from one Data Grid Server to another, make sure that the security realm name corresponds to the correct authentication mechanism for the target endpoint. users.properties groups.properties Property realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="default"> <!-- groups-attribute configures the "groups.properties" file to contain security authorization roles.
--> <properties-realm groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "default", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "default", "path": "users.properties", "relative-to": "infinispan.server.config.path", "plain-text": true }, "group-properties": { "path": "groups.properties", "relative-to": "infinispan.server.config.path" } } }] } } } YAML server: security: securityRealms: - name: "default" propertiesRealm: # groupsAttribute configures the "groups.properties" file # to contain security authorization roles. groupsAttribute: "Roles" userProperties: digestRealmName: "default" path: "users.properties" relative-to: 'infinispan.server.config.path' plainText: "true" groupProperties: path: "groups.properties" relative-to: 'infinispan.server.config.path' 5.4. LDAP realms LDAP realms connect to LDAP servers, such as OpenLDAP, Red Hat Directory Server, Apache Directory Server, or Microsoft Active Directory, to authenticate users and obtain membership information. Note LDAP servers can have different entry layouts, depending on the type of server and deployment. It is beyond the scope of this document to provide examples for all possible configurations. 5.4.1. LDAP connection properties Specify the LDAP connection properties in the LDAP realm configuration. The following properties are required: url Specifies the URL of the LDAP server. The URL should be in the format ldap://hostname:port or ldaps://hostname:port for secure connections using TLS. principal Specifies a distinguished name (DN) of a valid user in the LDAP server. The DN uniquely identifies the user within the LDAP directory structure. credential Corresponds to the password associated with the principal mentioned above. Important The principal for LDAP connections must have the necessary privileges to perform LDAP queries and access specific attributes. Tip Enabling connection-pooling significantly improves the performance of authentication to LDAP servers. The connection pooling mechanism is provided by the JDK. For more information see Connection Pooling Configuration and Java Tutorials: Pooling . 5.4.2. LDAP realm user authentication methods Configure the user authentication method in the LDAP realm. The LDAP realm can authenticate users in two ways: Hashed password comparison by comparing the hashed password stored in a user's password attribute (usually userPassword ) Direct verification by authenticating against the LDAP server using the supplied credentials Direct verification is the only approach that works with Active Directory, because access to the password attribute is forbidden. Important You cannot use endpoint authentication mechanisms that perform hashing with the direct-verification attribute, since this method requires having the password in clear text. As a result, you must use the BASIC authentication mechanism with the REST endpoint and PLAIN with the Hot Rod endpoint to integrate with Active Directory Server. A more secure alternative is to use Kerberos, which allows the SPNEGO , GSSAPI , and GS2-KRB5 authentication mechanisms. The LDAP realm searches the directory to find the entry which corresponds to the authenticated user.
The rdn-identifier attribute specifies an LDAP attribute that finds the user entry based on a provided identifier, which is typically a username; for example, the uid or sAMAccountName attribute. Add search-recursive="true" to the configuration to search the directory recursively. By default, the search for the user entry uses the (rdn_identifier={0}) filter. You can specify a different filter using the filter-name attribute. 5.4.3. Mapping user entries to their associated groups In the LDAP realm configuration, specify the attribute-mapping element to retrieve and associate all groups that a user is a member of. The membership information is stored typically in two ways: Under group entries that usually have class groupOfNames or groupOfUniqueNames in the member attribute. This is the default behavior in most LDAP installations, except for Active Directory. In this case, you can use an attribute filter. This filter searches for entries that match the supplied filter, which locates groups with a member attribute equal to the user's DN. The filter then extracts the group entry's CN as specified by from , and adds it to the user's Roles . In the user entry in the memberOf attribute. This is typically the case for Active Directory. In this case you should use an attribute reference such as the following: <attribute-reference reference="memberOf" from="cn" to="Roles" /> This reference gets all memberOf attributes from the user's entry, extracts the CN as specified by from , and adds them to the user's groups ( Roles is the internal name used to map the groups). 5.4.4. LDAP realm configuration reference XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <!-- Specifies connection properties. --> <ldap-realm url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword" connection-timeout="3000" read-timeout="30000" connection-pooling="true" referral-mode="ignore" page-size="30" direct-verification="true"> <!-- Defines how principals are mapped to LDAP entries. --> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org" search-recursive="false"> <!-- Retrieves all the groups of which the user is a member. 
--> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org"/> </attribute-mapping> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "url": "ldap://my-ldap-server:10389", "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "credential": "strongPassword", "connection-timeout": "3000", "read-timeout": "30000", "connection-pooling": "true", "referral-mode": "ignore", "page-size": "30", "direct-verification": "true", "identity-mapping": { "rdn-identifier": "uid", "search-dn": "ou=People,dc=infinispan,dc=org", "search-recursive": "false", "attribute-mapping": [{ "from": "cn", "to": "Roles", "filter": "(&(objectClass=groupOfNames)(member={1}))", "filter-dn": "ou=Roles,dc=infinispan,dc=org" }] } } }] } } } YAML server: security: securityRealms: - name: ldap-realm ldapRealm: url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credential: strongPassword connectionTimeout: '3000' readTimeout: '30000' connectionPooling: true referralMode: ignore pageSize: '30' directVerification: true identityMapping: rdnIdentifier: uid searchDn: 'ou=People,dc=infinispan,dc=org' searchRecursive: false attributeMapping: - filter: '(&(objectClass=groupOfNames)(member={1}))' filterDn: 'ou=Roles,dc=infinispan,dc=org' from: cn to: Roles 5.4.4.1. LDAP realm principal rewriting Principals obtained by SASL authentication mechanisms such as GSSAPI , GS2-KRB5 and Negotiate usually include the domain name, for example [email protected] . Before using these principals in LDAP queries, it is necessary to transform them to ensure their compatibility. This process is called rewriting. Data Grid includes the following transformers: case-principal-transformer rewrites the principal to either all uppercase or all lowercase. For example MyUser would be rewritten as MYUSER in uppercase mode and myuser in lowercase mode. common-name-principal-transformer rewrites principals in the LDAP Distinguished Name format (as defined by RFC 4514 ). It extracts the first attribute of type CN (commonName). For example, DN=CN=myuser,OU=myorg,DC=mydomain would be rewritten as myuser . regex-principal-transformer rewrites principals using a regular expression with capturing groups, allowing, for example, for extractions of any substring. 5.4.4.2. 
LDAP principal rewriting configuration reference Case principal transformer XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that transforms usernames to lowercase --> <case-principal-transformer uppercase="false"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "case-principal-transformer": { "uppercase": false } } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: casePrincipalTransformer: uppercase: false # further configuration omitted Common name principal transformer XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that obtains the first CN from a DN --> <common-name-principal-transformer /> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "common-name-principal-transformer": {} } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: commonNamePrincipalTransformer: ~ # further configuration omitted Regex principal transformer XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that extracts the username from the principal using a regular expression. 
--> <regex-principal-transformer pattern="(.*)@INFINISPAN\.ORG" replacement="USD1"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "regex-principal-transformer": { "pattern": "(.*)@INFINISPAN\\.ORG", "replacement": "USD1" } } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: regexPrincipalTransformer: pattern: (.*)@INFINISPAN\.ORG replacement: "USD1" # further configuration omitted 5.4.4.3. LDAP user and group mapping process with Data Grid This example illustrates the process of loading and internally mapping LDAP users and groups to Data Grid subjects. The following is an LDIF (LDAP Data Interchange Format) file, which describes multiple LDAP entries: LDIF # Users dn: uid=root,ou=People,dc=infinispan,dc=org objectclass: top objectclass: uidObject objectclass: person uid: root cn: root sn: root userPassword: strongPassword # Groups dn: cn=admin,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: admin description: the Infinispan admin group member: uid=root,ou=People,dc=infinispan,dc=org dn: cn=monitor,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: monitor description: the Infinispan monitor group member: uid=root,ou=People,dc=infinispan,dc=org The root user is a member of the admin and monitor groups. When a request to authenticate the user root with the password strongPassword is made on one of the endpoints, the following operations are performed: The username is optionally rewritten using the chosen principal transformer. The realm searches within the ou=People,dc=infinispan,dc=org tree for an entry whose uid attribute is equal to root and finds the entry with DN uid=root,ou=People,dc=infinispan,dc=org , which becomes the user principal. The realm searches within the ou=Roles,dc=infinispan,dc=org tree for entries of objectClass=groupOfNames that include uid=root,ou=People,dc=infinispan,dc=org in the member attribute. In this case it finds two entries: cn=admin,ou=Roles,dc=infinispan,dc=org and cn=monitor,ou=Roles,dc=infinispan,dc=org . From these entries, it extracts the cn attributes which become the group principals. The resulting subject will therefore look like: NamePrincipal: uid=root,ou=People,dc=infinispan,dc=org RolePrincipal: admin RolePrincipal: monitor At this point, the global authorization mappers are applied on the above subject to convert the principals into roles. The roles are then expanded into a set of permissions, which are validated against the requested cache and operation. 5.5. Token realms Token realms use external services to validate tokens and require providers that are compatible with RFC-7662 (OAuth2 Token Introspection), such as Red Hat SSO. Token realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="token-realm"> <!-- Specifies the URL of the authentication server. --> <token-realm name="token" auth-server-url="https://oauth-server/auth/"> <!-- Specifies the URL of the token introspection endpoint.
--> <oauth2-introspection introspection-url="https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect" client-id="infinispan-server" client-secret="1fdca4ec-c416-47e0-867a-3d471af7050f"/> </token-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "token-realm", "token-realm": { "auth-server-url": "https://oauth-server/auth/", "oauth2-introspection": { "client-id": "infinispan-server", "client-secret": "1fdca4ec-c416-47e0-867a-3d471af7050f", "introspection-url": "https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect" } } }] } } } YAML server: security: securityRealms: - name: token-realm tokenRealm: authServerUrl: 'https://oauth-server/auth/' oauth2Introspection: clientId: infinispan-server clientSecret: '1fdca4ec-c416-47e0-867a-3d471af7050f' introspectionUrl: 'https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect' 5.6. Trust store realms Trust store realms use certificates, or certificates chains, that verify Data Grid Server and client identities when they negotiate connections. Keystores Contain server certificates that provide a Data Grid Server identity to clients. If you configure a keystore with server certificates, Data Grid Server encrypts traffic using industry standard SSL/TLS protocols. Trust stores Contain client certificates, or certificate chains, that clients present to Data Grid Server. Client trust stores are optional and allow Data Grid Server to perform client certificate authentication. Client certificate authentication You must add the require-ssl-client-auth="true" attribute to the endpoint configuration if you want Data Grid Server to validate or authenticate client certificates. Trust store realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="trust-store-realm"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path="trust.p12" relative-to="infinispan.server.config.path" password="secret"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "trust-store-realm", "server-identities": { "ssl": { "keystore": { "path": "server.p12", "relative-to": "infinispan.server.config.path", "keystore-password": "secret", "alias": "server" }, "truststore": { "path": "trust.p12", "relative-to": "infinispan.server.config.path", "password": "secret" } } }, "truststore-realm": {} }] } } } YAML server: security: securityRealms: - name: "trust-store-realm" serverIdentities: ssl: keystore: path: "server.p12" relative-to: "infinispan.server.config.path" keystore-password: "secret" alias: "server" truststore: path: "trust.p12" relative-to: "infinispan.server.config.path" password: "secret" truststoreRealm: ~ 5.7. Distributed security realms Distributed realms combine multiple different types of security realms. 
When users attempt to access the Hot Rod or REST endpoints, Data Grid Server uses each security realm in turn until it finds one that can perform the authentication. Distributed realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="distributed-realm"> <ldap-realm url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org" search-recursive="false"> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org"/> </attribute-mapping> </identity-mapping> </ldap-realm> <properties-realm groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> <distributed-realm/> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "distributed-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://my-ldap-server:10389", "credential": "strongPassword", "identity-mapping": { "rdn-identifier": "uid", "search-dn": "ou=People,dc=infinispan,dc=org", "search-recursive": false, "attribute-mapping": { "attribute": { "filter": "(&(objectClass=groupOfNames)(member={1}))", "filter-dn": "ou=Roles,dc=infinispan,dc=org", "from": "cn", "to": "Roles" } } } }, "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "distributed-realm", "path": "users.properties" }, "group-properties": { "path": "groups.properties" } }, "distributed-realm": {} }] } } } YAML server: security: securityRealms: - name: "distributed-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://my-ldap-server:10389" credential: "strongPassword" identityMapping: rdnIdentifier: "uid" searchDn: "ou=People,dc=infinispan,dc=org" searchRecursive: "false" attributeMapping: attribute: filter: "(&(objectClass=groupOfNames)(member={1}))" filterDn: "ou=Roles,dc=infinispan,dc=org" from: "cn" to: "Roles" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "distributed-realm" path: "users.properties" groupProperties: path: "groups.properties" distributedRealm: ~ | [
"user create <username> -p <changeme> -g <role> --users-file=application-users.properties --groups-file=application-groups.properties user create <username> -p <changeme> -g <role> --users-file=management-users.properties --groups-file=management-groups.properties",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"application-realm\"> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"application-users.properties\"/> <group-properties path=\"application-groups.properties\"/> </properties-realm> </security-realm> <security-realm name=\"management-realm\"> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"management-users.properties\"/> <group-properties path=\"management-groups.properties\"/> </properties-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"management-realm\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"management-realm\", \"path\": \"management-users.properties\" }, \"group-properties\": { \"path\": \"management-groups.properties\" } } }, { \"name\": \"application-realm\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"application-realm\", \"path\": \"application-users.properties\" }, \"group-properties\": { \"path\": \"application-groups.properties\" } } }] } } }",
"server: security: securityRealms: - name: \"management-realm\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"management-realm\" path: \"management-users.properties\" groupProperties: path: \"management-groups.properties\" - name: \"application-realm\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"application-realm\" path: \"application-users.properties\" groupProperties: path: \"application-groups.properties\"",
"ktutil ktutil: addent -password -p [email protected] -k 1 -e aes256-cts Password for [email protected]: [enter your password] ktutil: wkt http.keytab ktutil: quit",
"ktpass -princ HTTP/[email protected] -pass * -mapuser INFINISPAN\\USER_NAME ktab -k http.keytab -a HTTP/[email protected]",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"kerberos-realm\"> <server-identities> <!-- Specifies a keytab file that provides a Kerberos identity. --> <!-- Names the Kerberos service principal for the Hot Rod endpoint. --> <!-- The required=\"true\" attribute specifies that the keytab file must be present when the server starts. --> <kerberos keytab-path=\"hotrod.keytab\" principal=\"hotrod/[email protected]\" required=\"true\"/> <!-- Specifies a keytab file and names the Kerberos service principal for the REST endpoint. --> <kerberos keytab-path=\"http.keytab\" principal=\"HTTP/[email protected]\" required=\"true\"/> </server-identities> </security-realm> </security-realms> </security> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"kerberos-realm\"> <hotrod-connector> <authentication> <sasl server-name=\"datagrid\" server-principal=\"hotrod/[email protected]\"/> </authentication> </hotrod-connector> <rest-connector> <authentication server-principal=\"HTTP/[email protected]\"/> </rest-connector> </endpoint> </endpoints> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"kerberos-realm\", \"server-identities\": [{ \"kerberos\": { \"principal\": \"hotrod/[email protected]\", \"keytab-path\": \"hotrod.keytab\", \"required\": true }, \"kerberos\": { \"principal\": \"HTTP/[email protected]\", \"keytab-path\": \"http.keytab\", \"required\": true } }] }] }, \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"kerberos-realm\", \"hotrod-connector\": { \"authentication\": { \"security-realm\": \"kerberos-realm\", \"sasl\": { \"server-name\": \"datagrid\", \"server-principal\": \"hotrod/[email protected]\" } } }, \"rest-connector\": { \"authentication\": { \"server-principal\": \"HTTP/[email protected]\" } } } } } }",
"server: security: securityRealms: - name: \"kerberos-realm\" serverIdentities: - kerberos: principal: \"hotrod/[email protected]\" keytabPath: \"hotrod.keytab\" required: \"true\" - kerberos: principal: \"HTTP/[email protected]\" keytabPath: \"http.keytab\" required: \"true\" endpoints: endpoint: socketBinding: \"default\" securityRealm: \"kerberos-realm\" hotrodConnector: authentication: sasl: serverName: \"datagrid\" serverPrincipal: \"hotrod/[email protected]\" restConnector: authentication: securityRealm: \"kerberos-realm\" serverPrincipal\" : \"HTTP/[email protected]\"",
"myuser=a_password user2=another_password",
"myuser=supervisor,reader,writer user2=supervisor",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"default\"> <!-- groups-attribute configures the \"groups.properties\" file to contain security authorization roles. --> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"users.properties\" relative-to=\"infinispan.server.config.path\" plain-text=\"true\"/> <group-properties path=\"groups.properties\" relative-to=\"infinispan.server.config.path\"/> </properties-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"default\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"default\", \"path\": \"users.properties\", \"relative-to\": \"infinispan.server.config.path\", \"plain-text\": true }, \"group-properties\": { \"path\": \"groups.properties\", \"relative-to\": \"infinispan.server.config.path\" } } }] } } }",
"server: security: securityRealms: - name: \"default\" propertiesRealm: # groupsAttribute configures the \"groups.properties\" file # to contain security authorization roles. groupsAttribute: \"Roles\" userProperties: digestRealmName: \"default\" path: \"users.properties\" relative-to: 'infinispan.server.config.path' plainText: \"true\" groupProperties: path: \"groups.properties\" relative-to: 'infinispan.server.config.path'",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <!-- Specifies connection properties. --> <ldap-realm url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\" connection-timeout=\"3000\" read-timeout=\"30000\" connection-pooling=\"true\" referral-mode=\"ignore\" page-size=\"30\" direct-verification=\"true\"> <!-- Defines how principals are mapped to LDAP entries. --> <identity-mapping rdn-identifier=\"uid\" search-dn=\"ou=People,dc=infinispan,dc=org\" search-recursive=\"false\"> <!-- Retrieves all the groups of which the user is a member. --> <attribute-mapping> <attribute from=\"cn\" to=\"Roles\" filter=\"(&(objectClass=groupOfNames)(member={1}))\" filter-dn=\"ou=Roles,dc=infinispan,dc=org\"/> </attribute-mapping> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"url\": \"ldap://my-ldap-server:10389\", \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"credential\": \"strongPassword\", \"connection-timeout\": \"3000\", \"read-timeout\": \"30000\", \"connection-pooling\": \"true\", \"referral-mode\": \"ignore\", \"page-size\": \"30\", \"direct-verification\": \"true\", \"identity-mapping\": { \"rdn-identifier\": \"uid\", \"search-dn\": \"ou=People,dc=infinispan,dc=org\", \"search-recursive\": \"false\", \"attribute-mapping\": [{ \"from\": \"cn\", \"to\": \"Roles\", \"filter\": \"(&(objectClass=groupOfNames)(member={1}))\", \"filter-dn\": \"ou=Roles,dc=infinispan,dc=org\" }] } } }] } } }",
"server: security: securityRealms: - name: ldap-realm ldapRealm: url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credential: strongPassword connectionTimeout: '3000' readTimeout: '30000' connectionPooling: true referralMode: ignore pageSize: '30' directVerification: true identityMapping: rdnIdentifier: uid searchDn: 'ou=People,dc=infinispan,dc=org' searchRecursive: false attributeMapping: - filter: '(&(objectClass=groupOfNames)(member={1}))' filterDn: 'ou=Roles,dc=infinispan,dc=org' from: cn to: Roles",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that transforms usernames to lowercase --> <case-principal-transformer uppercase=\"false\"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"case-principal-transformer\": { \"uppercase\": false } } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: casePrincipalTransformer: uppercase: false # further configuration omitted",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that obtains the first CN from a DN --> <common-name-principal-transformer /> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"common-name-principal-transformer\": {} } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: commonNamePrincipalTransformer: ~ # further configuration omitted",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that extracts the username from the principal using a regular expression. --> <regex-principal-transformer pattern=\"(.*)@INFINISPAN\\.ORG\" replacement=\"USD1\"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"regex-principal-transformer\": { \"pattern\": \"(.*)@INFINISPAN\\\\.ORG\", \"replacement\": \"USD1\" } } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: regexPrincipalTransformer: pattern: (.*)@INFINISPAN\\.ORG replacement: \"USD1\" # further configuration omitted",
"Users dn: uid=root,ou=People,dc=infinispan,dc=org objectclass: top objectclass: uidObject objectclass: person uid: root cn: root sn: root userPassword: strongPassword Groups dn: cn=admin,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: admin description: the Infinispan admin group member: uid=root,ou=People,dc=infinispan,dc=org dn: cn=monitor,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: monitor description: the Infinispan monitor group member: uid=root,ou=People,dc=infinispan,dc=org",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"token-realm\"> <!-- Specifies the URL of the authentication server. --> <token-realm name=\"token\" auth-server-url=\"https://oauth-server/auth/\"> <!-- Specifies the URL of the token introspection endpoint. --> <oauth2-introspection introspection-url=\"https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect\" client-id=\"infinispan-server\" client-secret=\"1fdca4ec-c416-47e0-867a-3d471af7050f\"/> </token-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"token-realm\", \"token-realm\": { \"auth-server-url\": \"https://oauth-server/auth/\", \"oauth2-introspection\": { \"client-id\": \"infinispan-server\", \"client-secret\": \"1fdca4ec-c416-47e0-867a-3d471af7050f\", \"introspection-url\": \"https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect\" } } }] } } }",
"server: security: securityRealms: - name: token-realm tokenRealm: authServerUrl: 'https://oauth-server/auth/' oauth2Introspection: clientId: infinispan-server clientSecret: '1fdca4ec-c416-47e0-867a-3d471af7050f' introspectionUrl: 'https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect'",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"trust-store-realm\"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path=\"server.p12\" relative-to=\"infinispan.server.config.path\" keystore-password=\"secret\" alias=\"server\"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path=\"trust.p12\" relative-to=\"infinispan.server.config.path\" password=\"secret\"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"trust-store-realm\", \"server-identities\": { \"ssl\": { \"keystore\": { \"path\": \"server.p12\", \"relative-to\": \"infinispan.server.config.path\", \"keystore-password\": \"secret\", \"alias\": \"server\" }, \"truststore\": { \"path\": \"trust.p12\", \"relative-to\": \"infinispan.server.config.path\", \"password\": \"secret\" } } }, \"truststore-realm\": {} }] } } }",
"server: security: securityRealms: - name: \"trust-store-realm\" serverIdentities: ssl: keystore: path: \"server.p12\" relative-to: \"infinispan.server.config.path\" keystore-password: \"secret\" alias: \"server\" truststore: path: \"trust.p12\" relative-to: \"infinispan.server.config.path\" password: \"secret\" truststoreRealm: ~",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"distributed-realm\"> <ldap-realm url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <identity-mapping rdn-identifier=\"uid\" search-dn=\"ou=People,dc=infinispan,dc=org\" search-recursive=\"false\"> <attribute-mapping> <attribute from=\"cn\" to=\"Roles\" filter=\"(&(objectClass=groupOfNames)(member={1}))\" filter-dn=\"ou=Roles,dc=infinispan,dc=org\"/> </attribute-mapping> </identity-mapping> </ldap-realm> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"users.properties\" relative-to=\"infinispan.server.config.path\"/> <group-properties path=\"groups.properties\" relative-to=\"infinispan.server.config.path\"/> </properties-realm> <distributed-realm/> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"distributed-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://my-ldap-server:10389\", \"credential\": \"strongPassword\", \"identity-mapping\": { \"rdn-identifier\": \"uid\", \"search-dn\": \"ou=People,dc=infinispan,dc=org\", \"search-recursive\": false, \"attribute-mapping\": { \"attribute\": { \"filter\": \"(&(objectClass=groupOfNames)(member={1}))\", \"filter-dn\": \"ou=Roles,dc=infinispan,dc=org\", \"from\": \"cn\", \"to\": \"Roles\" } } } }, \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"distributed-realm\", \"path\": \"users.properties\" }, \"group-properties\": { \"path\": \"groups.properties\" } }, \"distributed-realm\": {} }] } } }",
"server: security: securityRealms: - name: \"distributed-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://my-ldap-server:10389\" credential: \"strongPassword\" identityMapping: rdnIdentifier: \"uid\" searchDn: \"ou=People,dc=infinispan,dc=org\" searchRecursive: \"false\" attributeMapping: attribute: filter: \"(&(objectClass=groupOfNames)(member={1}))\" filterDn: \"ou=Roles,dc=infinispan,dc=org\" from: \"cn\" to: \"Roles\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"distributed-realm\" path: \"users.properties\" groupProperties: path: \"groups.properties\" distributedRealm: ~"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/security-realms |
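An optional, out-of-band sanity check for the LDAP settings used in the examples above (not from the product documentation); the values mirror the sample configuration:
# Verify the bind credentials and user lookup that the ldap-realm performs.
ldapsearch -x -H ldap://my-ldap-server:10389 \
  -D "uid=admin,ou=People,dc=infinispan,dc=org" -w strongPassword \
  -b "ou=People,dc=infinispan,dc=org" "(uid=root)"
# Verify the group lookup that the attribute filter performs.
ldapsearch -x -H ldap://my-ldap-server:10389 \
  -D "uid=admin,ou=People,dc=infinispan,dc=org" -w strongPassword \
  -b "ou=Roles,dc=infinispan,dc=org" \
  "(&(objectClass=groupOfNames)(member=uid=root,ou=People,dc=infinispan,dc=org))" cn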
6.4. Removing a Virtual Machine | 6.4. Removing a Virtual Machine Important The Remove button is disabled while virtual machines are running; you must shut down a virtual machine before you can remove it. Removing Virtual Machines Click Compute Virtual Machines and select the virtual machine to remove. Click Remove . Optionally, select the Remove Disk(s) check box to remove the virtual disks attached to the virtual machine together with the virtual machine. If the Remove Disk(s) check box is cleared, then the virtual disks remain in the environment as floating disks. Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/removing_a_virtual_machine |
Chapter 1. Console APIs | Chapter 1. Console APIs 1.1. ConsoleCLIDownload [console.openshift.io/v1] Description ConsoleCLIDownload is an extension for configuring openshift web console command line interface (CLI) downloads. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.2. ConsoleExternalLogLink [console.openshift.io/v1] Description ConsoleExternalLogLink is an extension for customizing OpenShift web console log links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.3. ConsoleLink [console.openshift.io/v1] Description ConsoleLink is an extension for customizing OpenShift web console links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. ConsoleNotification [console.openshift.io/v1] Description ConsoleNotification is the extension for configuring openshift web console notifications. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. ConsolePlugin [console.openshift.io/v1] Description ConsolePlugin is an extension for customizing OpenShift web console by dynamically loading code from another service running on the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding user through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.7. ConsoleYAMLSample [console.openshift.io/v1] Description ConsoleYAMLSample is an extension for customizing OpenShift web console YAML samples. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/console_apis/console-apis |
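A sketch of creating one of these resources with oc; the ConsoleLink spec fields shown (href, location, text) are assumed from the common shape of this CRD and should be checked against the full API reference before use:
# Create a ConsoleLink that adds an entry to the web console help menu.
oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-help-link
spec:
  href: https://docs.example.com
  location: HelpMenu
  text: Example documentation
EOF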
Chapter 7. Sources | Chapter 7. Sources The updated Red Hat Ceph Storage source code packages are available at the following location: For Red Hat Enterprise Linux 9: https://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/ | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/7.1_release_notes/sources |
Chapter 26. GroupService | Chapter 26. GroupService 26.1. GetGroup GET /v1/group 26.1.1. Description 26.1.2. Parameters 26.1.2.1. Query Parameters Name Description Required Default Pattern id Unique identifier for group properties and respectively the group. - null traits.mutabilityMode - ALLOW_MUTATE traits.visibility - VISIBLE traits.origin - IMPERATIVE authProviderId - null key - null value - null 26.1.3. Return Type StorageGroup 26.1.4. Content Type application/json 26.1.5. Responses Table 26.1. HTTP Response Codes Code Message Datatype 200 A successful response. StorageGroup 0 An unexpected error response. GooglerpcStatus 26.1.6. Samples 26.1.7. Common object reference 26.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 26.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 26.1.7.3. StorageGroup Group is a GroupProperties : Role mapping. 
Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.1.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.1.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.1.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.1.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.1.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 26.2. BatchUpdate POST /v1/groupsbatch 26.2.1. 
Description 26.2.2. Parameters 26.2.2.1. Body Parameter Name Description Required Default Pattern body V1GroupBatchUpdateRequest X 26.2.3. Return Type Object 26.2.4. Content Type application/json 26.2.5. Responses Table 26.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 26.2.6. Samples 26.2.7. Common object reference 26.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 26.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 26.2.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.2.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. 
For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.2.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.2.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.2.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.2.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 26.2.7.9. V1GroupBatchUpdateRequest Field Name Required Nullable Type Description Format previousGroups List of StorageGroup groups are the groups expected to be present in the store. Performs a diff on the GroupProperties present in previous_groups and required_groups: 1) if in previous_groups but not required_groups, it gets deleted. 
2) if in previous_groups and required_groups, it gets updated. 3) if not in previous_groups but in required_groups, it gets added. requiredGroups List of StorageGroup Required groups are the groups we want to mutate the groups into. force Boolean 26.3. DeleteGroup DELETE /v1/groups 26.3.1. Description 26.3.2. Parameters 26.3.2.1. Query Parameters Name Description Required Default Pattern authProviderId We copy over parameters from storage.GroupProperties for seamless HTTP API migration. - null key - null value - null id - null force - null 26.3.3. Return Type Object 26.3.4. Content Type application/json 26.3.5. Responses Table 26.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 26.3.6. Samples 26.3.7. Common object reference 26.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 26.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 
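The Samples subsections above are empty in this reference. As a minimal sketch of calling the endpoints described so far, the requests below use the documented paths and query parameters; the Central endpoint, the API token, the bearer-token Authorization header, and the identifier values are assumptions or placeholders, not part of the reference above.

# Fetch a single group by the ID of its group properties (GET /v1/group).
curl -k -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/group?id=<group-properties-id>"

# Delete a group by its properties (DELETE /v1/groups); force is optional.
curl -k -X DELETE -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/groups?id=<group-properties-id>&force=false"

Use -k only if Central presents a self-signed certificate; otherwise omit it.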
26.4. GetGroups GET /v1/groups 26.4.1. Description 26.4.2. Parameters 26.4.2.1. Query Parameters Name Description Required Default Pattern authProviderId - null key - null value - null id - null 26.4.3. Return Type V1GetGroupsResponse 26.4.4. Content Type application/json 26.4.5. Responses Table 26.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetGroupsResponse 0 An unexpected error response. GooglerpcStatus 26.4.6. Samples 26.4.7. Common object reference 26.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 26.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 26.4.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.4.7.4. StorageGroupProperties GroupProperties defines the properties of a group. 
Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.4.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.4.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.4.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.4.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 26.4.7.9. V1GetGroupsResponse Field Name Required Nullable Type Description Format groups List of StorageGroup 26.5. CreateGroup POST /v1/groups 26.5.1. Description 26.5.2. Parameters 26.5.2.1. Body Parameter Name Description Required Default Pattern body Group is a GroupProperties : Role mapping. 
StorageGroup X 26.5.3. Return Type Object 26.5.4. Content Type application/json 26.5.5. Responses Table 26.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 26.5.6. Samples 26.5.7. Common object reference 26.5.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 26.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 26.5.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.5.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. 
- If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.5.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.5.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.5.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.5.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 26.6. UpdateGroup PUT /v1/groups 26.6.1. Description 26.6.2. Parameters 26.6.2.1. Body Parameter Name Description Required Default Pattern group StorageGroup X 26.6.2.2. Query Parameters Name Description Required Default Pattern force - null 26.6.3. Return Type Object 26.6.4. Content Type application/json 26.6.5. Responses Table 26.6. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 26.6.6. Samples 26.6.7. 
Common object reference 26.6.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 26.6.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.6.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 26.6.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.6.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. 
Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.6.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.6.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.6.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.6.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"GroupBatchUpdateRequest is an in transaction batch update to the groups present. Next Available Tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"API for updating Groups and getting users. Next Available Tag: 2",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/groupservice |
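As a rough usage sketch for the remaining endpoints in this chapter (GetGroups, CreateGroup, and UpdateGroup): the paths, query parameters, and StorageGroup body fields come from the reference above, while the endpoint host, the API token, the bearer-token Authorization header, and the role, key, and value strings are placeholders.

# List groups, optionally filtered by auth provider (GET /v1/groups).
curl -k -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/groups?authProviderId=<auth-provider-id>"

# Create a group that maps a claim key/value to a role (POST /v1/groups).
curl -k -X POST -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"props":{"authProviderId":"<auth-provider-id>","key":"groups","value":"engineering"},"roleName":"Analyst"}' \
  "https://${ROX_ENDPOINT}/v1/groups"

UpdateGroup (PUT /v1/groups) takes the same StorageGroup body, with the optional force query parameter described above.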
Chapter 7. Viewing containers and applications | Chapter 7. Viewing containers and applications When you log in to HawtIO for OpenShift, the HawtIO home page shows the available containers. Procedure: To manage (create, edit, or delete) containers, use the OpenShift console. To view HawtIO-enabled applications and AMQ Brokers (if applicable) on the OpenShift cluster, click the Online tab. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/hawtio_diagnostic_console_guide/viewing_containers_and_applications
Chapter 9. Synchronizing Content Between Satellite Servers | Chapter 9. Synchronizing Content Between Satellite Servers In a Satellite setup with multiple Satellite Servers, you can use Inter-Satellite Synchronization (ISS) to synchronize content from one upstream server to one or more downstream servers. There are two possible ISS configurations of Satellite, depending on how you deployed your infrastructure. Configure your Satellite for ISS as appropriate for your use case scenario. For more information, see How to Configure Inter-Satellite Synchronization in Installing Satellite Server in a Disconnected Network Environment . To change the pulp export path, see the Knowledgebase article Hammer content export fails with "Path '/the/path' is not an allowed export path" on the Red Hat Customer Portal. 9.1. How to Synchronize Content Using Export and Import There are multiple approaches for synchronizing content using the export and import workflow: You employ the upstream Satellite Server as a content store, which means that you sync the whole Library rather than Content View versions. This approach offers the simplest export/import workflow. In such case, you can manage the versions downstream. For more information, see Section 9.1.1, "Using an Upstream Satellite Server as a Content Store" . You use the upstream Satellite Server to sync Content View versions. This approach offers more control over what content is synced between Satellite Servers. For more information, see Section 9.1.2, "Using an Upstream Satellite Server to Sync Content View Versions" . You sync a single repository. This can be useful if you use the Content-View syncing approach, but you want to sync an additional repository without adding it to an existing Content View. For more information, see Section 9.1.3, "Synchronizing a Single Repository" . 9.1.1. Using an Upstream Satellite Server as a Content Store In this scenario, you use the upstream Satellite Server as a content store for updates rather than to manage content. You use the downstream Satellite Server to manage content for all infrastructure behind the isolated network. You export the Library content from the upstream Satellite Server and import it into the downstream Satellite Server. On the upstream Satellite Server Ensure that repositories are using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 6.7, "Download Policies Overview" . Enable the content that you want to synchronize. For more information, see Section 6.5, "Enabling Red Hat Repositories" . If you want to sync custom content, first create a custom Product and synchronize Product repositories . Synchronize the enabled content: On the first export, perform a complete Library export so that all the synchronized content is exported. This generates content archives that you can later import into one or more downstream Satellite Servers. For more information on performing a complete Library export, see Section 9.3, "Exporting the Library Environment" . Export all future updates on the upstream Satellite Server incrementally. This generates leaner content archives that contain only a recent set of updates. 
For example, if you enable and synchronize a new repository, the exported content archive contains content only from the newly enabled repository. For more information on performing an incremental Library export, see Section 9.4, "Exporting the Library Environment Incrementally" . On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. Place it inside a directory under /var/lib/pulp/imports . Import the content to an organization using the procedure outlined in Section 9.10, "Importing into the Library Environment" . You can then manage content using Content Views or Lifecycle Environments as you require. 9.1.2. Using an Upstream Satellite Server to Sync Content View Versions In this scenario, you use the upstream Satellite Server not only as a content store, but also to synchronize content for all infrastructure behind the isolated network. You curate updates coming from the CDN into Content Views and Lifecycle Environments. Once you promote content to a designated Lifecycle Environment, you can export the content from the upstream Satellite Server and import it into the downstream Satellite Server. On the upstream Satellite Server Ensure that repositories are using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 6.7, "Download Policies Overview" . Enable the content that you want to synchronize. For more information, see Section 6.5, "Enabling Red Hat Repositories" . If you want to sync custom content, first create a custom Product and synchronize Product repositories . Synchronize the enabled content: For the first export, perform a complete Version export on the Content View Version that you want to export. For more information see, Section 9.5, "Exporting a Content View Version" . This generates content archives that you can import into one or more downstream Satellite Servers. Export all future updates in the connected Satellite Servers incrementally. This generates leaner content archives that contain changes only from the recent set of updates. For example, if your Content View has a new repository, this exported content archive contains only the latest changes. For more information, see Section 9.6, "Exporting a Content View Version Incrementally" . When you have new content, republish the Content Views that include this content before exporting the increment. For more information, see Chapter 8, Managing Content Views . This creates a new Content View Version with the appropriate content to export. On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. Place it inside a directory under /var/lib/pulp/imports . Import the content to the organization that you want. For more information, see Section 9.11, "Importing a Content View Version" . This will create a Content View Version from the exported content archives and then import content appropriately. 9.1.3. Synchronizing a Single Repository In this scenario, you export and import a single repository. 
On the upstream Satellite Server Ensure that the repository is using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 6.7, "Download Policies Overview" . Enable the content that you want to synchronize. For more information, see Section 6.5, "Enabling Red Hat Repositories" . If you want to sync custom content, first create a custom Product and synchronize Product repositories . Synchronize the enabled content: On the first export, perform a complete repository export so that all the synchronized content is exported. This generates content archives that you can later import into one or more downstream Satellite Servers. For more information on performing a complete repository export, see Section 9.7, "Exporting a Repository" . Export all future updates on the upstream Satellite Server incrementally. This generates leaner content archives that contain only a recent set of updates. For more information on performing an incremental repository export, see Section 9.8, "Exporting a Repository Incrementally" . On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. Place it inside a directory under /var/lib/pulp/imports . Import the content to an organization. See Section 9.12, "Importing a Repository" . You can then manage content using Content Views or Lifecycle Environments as you require. 9.2. Synchronizing a Custom Repository When using Inter-Satellite Synchronization Network Sync, Red Hat repositories are configured automatically, but custom repositories are not. Use this procedure to synchronize content from a custom repository on a connected Satellite Server to a disconnected Satellite Server through Inter-Satellite Synchronization (ISS) Network Sync. Follow the procedure for the connected Satellite Server before completing the procedure for the disconnected Satellite Server. Connected Satellite Server In the Satellite web UI, navigate to Content > Products . Click on the custom product. Click on the custom repository. Copy the Published At: URL. Continue with the procedure on disconnected Satellite Server. Disconnected Satellite Server Download the katello-server-ca.crt file from the connected Satellite Server: Create an SSL Content Credential with the contents of katello-server-ca.crt . For more information on creating an SSL Content Credential, see Section 6.2, "Importing Custom SSL Certificates" . In the Satellite web UI, navigate to Content > Products . Create your custom product with the following: Upstream URL : Paste the link that you copied earlier. SSL CA Cert : Select the SSL certificate that was transferred from your connected Satellite Server. For more information on creating a custom product, see Section 6.3, "Creating a Custom Product" . After completing these steps, the custom repository is properly configured on the disconnected Satellite Server. 9.3. 
Exporting the Library Environment You can export the contents of all Yum repositories in the Library environment of an organization to an archive file from Satellite Server and use this archive file to create the same repositories in another Satellite Server or in another Satellite Server organization. The exported archive file contains the following data: A JSON file containing Content View version metadata An archive file containing all the repositories from the Library environment of the organization. Satellite Server exports only RPM and kickstart files included in a Content View version. Satellite does not export the following content: Docker content Prerequisites To export the contents of the Library lifecycle environment of the organization, ensure that Satellite Server where you want to export meets the following conditions: Ensure that the export directory has free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process. Ensure that you set download policy to Immediate for all repositories within the Library lifecycle environment you export. For more information, see Section 6.7, "Download Policies Overview". Ensure that you synchronize Products that you export to the required date. Export the Library Content of an Organization Use the organization name or ID to export. Verify that the archive containing the exported version of a Content View is located in the export directory: You need all three files (the tar.gz, the toc.json, and the metadata.json) to be able to import. A new Content View called Export-Library is created in the organization. This Content View contains all the repositories belonging to this organization. A new version of this Content View is published and exported automatically. Export with chunking In many cases, the exported archive content may be several gigabytes in size. If you want to split it into smaller chunks, you can use the --chunk-size-gb flag directly in the export command to handle this. In the following example, you can see how to specify --chunk-size-gb=2 to split the archives into 2 GB chunks. 9.4. Exporting the Library Environment Incrementally Exporting Library content can be a very expensive operation in terms of system resources. Organizations that have multiple Red Hat Enterprise Linux trees can occupy several gigabytes of space on Satellite Server. In such cases, you can use Incremental Export to export only pieces of content that changed since the previous export. Incremental exports typically result in smaller archive files than the full exports. The example below shows incremental export of all repositories in the organization's Library. Procedure Create an incremental export: Optional: View the exported data: 9.5. Exporting a Content View Version You can export a version of a Content View to an archive file from Satellite Server and use this archive file to create the same Content View version on another Satellite Server or on another Satellite Server organization. Satellite exports composite Content Views as normal Content Views. The composite nature is not retained. On importing the exported archive, a regular Content View is created or updated on your downstream Satellite Server.
The exported archive file contains the following data: A JSON file containing Content View version metadata An archive file containing all the repositories included in the Content View version Satellite Server exports only RPM and kickstart files added to a version of a Content View. Satellite does not export the following content: Docker content Content View definitions and metadata, such as package filters. Prerequisites To export a Content View, ensure that Satellite Server where you want to export meets the following conditions: Ensure that the export directory has free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process. Ensure that you set download policy to Immediate for all repositories within the Content View you export. For more information, see Section 6.7, "Download Policies Overview". Ensure that you synchronize Products that you export to the required date. Ensure that the user exporting the content has the Content Exporter role. To Export a Content View Version List versions of the Content View that are available for export: Export a Content View version Get the version number of the desired version. The following example targets version 1.0 for export. Verify that the archive containing the exported version of a Content View is located in the export directory: You require all three files (the tar.gz archive file, the toc.json, and the metadata.json) to import the content successfully. Export with chunking In many cases, the exported archive content can be several gigabytes in size. You might want to split it into smaller chunks. You can use the --chunk-size-gb option with the hammer content-export command to handle this. The following example uses --chunk-size-gb=2 to split the archives into 2 GB chunks. 9.6. Exporting a Content View Version Incrementally Exporting complete versions can be a very expensive operation in terms of system resources. Content View versions that have multiple Red Hat Enterprise Linux trees can occupy several gigabytes of space on Satellite Server. In such cases, you can use Incremental Export to export only pieces of content that changed since the previous export. Incremental exports typically result in smaller archive files than the full exports. The example below targets version 2.0 for export, because version 1.0 was exported previously. Procedure Create an incremental export: Optional: View the exported Content View: 9.7. Exporting a Repository You can export the content of a repository in the Library environment of an organization from Satellite Server. You can use this archive file to create the same repository in another Satellite Server or in another Satellite Server organization. You can export the following content from Satellite Server: RPM repositories Kickstart repositories Ansible repositories file repositories You cannot export Docker content from Satellite Server. The export contains the following data: Two JSON files containing repository metadata. One or more archive files containing the contents of the repository from the Library environment of the organization. You need all the files (tar.gz, toc.json, and metadata.json) to be able to import.
Prerequisites To export the contents of a repository, ensure that Satellite Server from which you want to export, meets the following conditions: Ensure that the export directory has enough free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has enough free storage space equivalent to the size of the repositories being exported for temporary files created during the export process. Ensure that you set download policy to Immediate for the repository within the Library lifecycle environment you export. For more information, see Section 6.7, "Download Policies Overview" . Ensure that you synchronize Products that you export to the required date. Export a Repository Use the repository name or ID to export. Optional: Verify that the exported archive is located in the export directory: Export a Repository with Chunking In many cases the exported content archive may be several gigabytes in size. If you want to split it into chunks of smaller size, you can use the --chunk-size-gb argument in the export command and limit the size by an integer value in gigabytes. Export content into archive chunks of a limited size, such as 2 GB: Optional: View the exported data: 9.8. Exporting a Repository Incrementally Exporting a repository can be a very expensive operation in terms of system resources. A typical Red Hat Enterprise Linux tree may occupy several gigabytes of space on Satellite Server. In such cases, you can use Incremental Export to export only pieces of content that changed since the export. Incremental exports typically result in smaller archive files than the full exports. The example below shows incremental export of a repository in the Library lifecycle environment. Procedure Create an incremental export: Optional: View the exported data: 9.9. Keeping Track of Your Exports Satellite keeps records of all exports. Each time you export content on the upstream Satellite Server, the export is recorded and maintained for future querying. You can use the records to organize and manage your exports, which is useful especially when exporting incrementally. When exporting content from the upstream Satellite Server for several downstream Satellite Servers, you can also keep track of content exported for specific servers. This helps you track which content was exported and to where. Use the --destination-server argument during export to indicate the target server. This option is available for all content-export operations. Tracking Destinations of Library Exports Specify the destination server when exporting the Library: Tracking Destinations of Content View Exports Specify the destination server when exporting a Content View version: Querying Export Records List content exports using the following command: 9.10. Importing into the Library Environment You can import exported Library content into the Library lifecycle environment of an organization on another Satellite Server. For more information about exporting contents from the Library environment, see Section 9.3, "Exporting the Library Environment" . Prerequisites The exported files must be in a directory under /var/lib/pulp/imports . If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. 
Set the ownership of the import directory and its contents to pulp:pulp . Identify the Organization that you wish to import into. To import the Library content to Satellite Server, enter the following command: Note you must enter the full path /var/lib/pulp/imports/ My_Exported_Library_Dir . Relative paths do not work. To verify that you imported the Library content, check the contents of the Product and Repositories. A new Content View called Import-Library is created in the target organization. This Content View is used to facilitate the Library content import. By default, this Content View is not shown in the Satellite web UI. Import-Library is not meant to be assigned directly to hosts. Instead, assign your hosts to Default Organization View or another Content View as you would normally. 9.11. Importing a Content View Version You can import an exported Content View version to create a version with the same content in an organization on another Satellite Server. For more information about exporting a Content View version, see Section 9.5, "Exporting a Content View Version" . When you import a Content View version, it has the same major and minor version numbers and contains the same repositories with the same packages and errata. Custom Repositories, Products and Content Views are automatically created if they do not exist in the importing organization. Prerequisites The exported files must be in a directory under /var/lib/pulp/imports . If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the Content View version must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Verify that the ownership is set correctly: To import the Content View version to Satellite Server, enter the following command: Note that you must enter the full path /var/lib/pulp/imports/ My_Exported_Version_Dir . Relative paths do not work. To verify that you imported the Content View version successfully, list Content View versions for your organization: 9.12. Importing a Repository You can import an exported repository into an organization on another Satellite Server. For more information about exporting content of a repository, see Section 9.7, "Exporting a Repository" . Prerequisites The export files must be in a directory under /var/lib/pulp/imports . If the export contains any Red Hat repositories, the manifest of the importing organization must contain subscriptions for the products contained within the export. The user importing the content must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Identify the Organization that you wish to import into. To import the Library content to Satellite Server, enter the following command: Note that you must enter the full path /var/lib/pulp/imports/ My_Exported_Repo_Dir . Relative paths do not work. To verify that you imported the repository, check the contents of the Product and Repository. 9.13. Exporting and Importing Content using Hammer CLI Cheat Sheet Table 9.1. 
Export Intent Command Fully export an Organization's Library hammer content-export complete library --organization=" My_Organization " Incrementally export an Organization's Library (assuming you have exported something previously) hammer content-export incremental library --organization=" My_Organization " Fully export a Content View version hammer content-export complete version --content-view=" My_Content_View " --version=1.0 --organization=" My_Organization " Export a Content View version promoted to the Dev Environment hammer content-export complete version --content-view=" My_Content_View " --organization=" My_Organization " --lifecycle-environment="Dev" Export a Content View in smaller chunks (2-GB slabs) hammer content-export complete version --content-view=" My_Content_View " --version=1.0 --organization=" My_Organization " --chunk-size-gb=2 Incrementally export a Content View version (assuming you have exported something previously) hammer content-export incremental version --content-view=" My_Content_View " --version=2.0 --organization=" My_Organization " Fully export a Repository hammer content-export complete repository --product=" My_Product " --name=" My_Repository " --organization=" My_Organization " Incrementally export a Repository (assuming you have exported something previously) hammer content-export incremental repository --product=" My_Product " --name=" My_Repository " --organization=" My_Organization " List exports hammer content-export list --content-view=" My_Content_View " --organization=" My_Organization " Table 9.2. Import Intent Command Import into an Organization's Library hammer content-import library --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Library_Dir " Import to a Content View version hammer content-import version --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Version_Dir " Import a Repository hammer content-import repository --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Repo_Dir " | [
"curl http://satellite.example.com/pub/katello-server-ca.crt",
"hammer content-export complete library --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/1.0/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 03:35 metadata.json",
"hammer content-export complete library --chunk-size-gb=2 --organization=\" My_Organization \" Generated /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/metadata.json ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/",
"hammer content-export incremental library --organization=\" My_Organization \" Generated /var/lib/pulp/exports/ My_Organization /Export-Library/3.0/2021-03-02T04-22-14-00-00/metadata.json",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/3.0/2021-03-02T04-22-14-00-00/ total 172K -rw-r--r--. 1 pulp pulp 161K Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422-toc.json -rw-r--r--. 1 pulp pulp 492 Mar 2 04:22 metadata.json",
"hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \" ---|----------|---------|-------------|----------------------- ID | NAME | VERSION | DESCRIPTION | LIFECYCLE ENVIRONMENTS ---|----------|---------|-------------|----------------------- 5 | view 3.0 | 3.0 | | Library 4 | view 2.0 | 2.0 | | 3 | view 1.0 | 1.0 | | ---|----------|---------|-------------|----------------------",
"hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization / Content_View_Name /1.0/2021-02-25T18-59-26-00-00/",
"hammer content-export complete version --chunk-size-gb=2 --content-view=\" Content_View_Name \" --organization=\" My_Organization \" --version=1.0 ls -lh /var/lib/pulp/exports/ My_Organization /view/1.0/2021-02-25T21-15-22-00-00/",
"hammer content-export incremental version --content-view=\" Content_View_Name \" --organization=\" My_Organization \" --version=2.0",
"ls -lh /var/lib/pulp/exports/ My_Organization /view/2.0/2021-02-25T21-45-34-00-00/",
"hammer content-export complete repository --organization=\" My_Organization \" --product=\" My_Product \" --name=\" My_Repository \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 root root 443 Mar 2 03:35 metadata.json",
"hammer content-export complete repository --chunk-size-gb= 2 --organization=\" My_Organization \" --product=\" My_Product \" --name=\" My_Repository \" Generated /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00/metadata.json",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00/",
"hammer content-export incremental repository --organization=\" My_Organization \" --product=\" My_Product \" --name=\" My_Repository \" Generated /var/lib/pulp/exports/ My_Organization /Export- My_Repository /3.0/2021-03-02T03-35-24-00-00/metadata.json",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /3.0/2021-03-02T03-35-24-00-00/ total 172K -rw-r--r--. 1 pulp pulp 20M Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422-toc.json -rw-r--r--. 1 root root 492 Mar 2 04:22 metadata.json",
"hammer content-export complete library --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export complete version --content-view=\" Content_View_Name \" --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export list --organization=\" My_Organization \"",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import library --organization=\" My_Organization \" --path=/var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"ls -lh /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-import version --organization-id= My_Organization_ID --path=/var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-view version list --organization-id= My_Organization_ID",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import repository --organization=\" My_Organization \" --path=/var/lib/pulp/imports/ 2021-03-02T03-35-24-00-00"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/Synchronizing_Content_Between_Servers_content-management |
Chapter 6. Configuring the systems and running tests using Cockpit | Chapter 6. Configuring the systems and running tests using Cockpit To complete the certification process, you must configure cockpit, prepare the host under test (HUT) and test server, run the tests, and retrieve the test results. 6.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. Note You must set up Cockpit on a new system, which is separate from the host under test. Ensure that the Cockpit has access to the host under test. For more information on installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . Prerequisites The Cockpit server has RHEL version 8 or 9 installed. You have installed the Cockpit plugin on your system. You have enabled the Cockpit service. Procedure Log in to the system where you installed Cockpit. Install the Cockpit RPM provided by the Red Hat Certification team. You must run Cockpit on port 9090. 6.2. Adding the host under test and the test server to Cockpit Adding the host under test (HUT) and test server to Cockpit lets the two systems communicate by using passwordless SSH. Repeat this procedure for adding both the systems one by one. Prerequisites You have the IP address or hostname of the HUT and the test server. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow on the logged-in cockpit user name-> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter the name you want to assign to this system. Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Click Accept key and connect to let Cockpit communicate with the system through passwordless SSH. Enter the Password . Select the Authorize SSH Key checkbox. Click Log in . Verification On the left panel, click Tools -> Red Hat Certification and verify that the system you just added displays under the Hosts section on the right. 6.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize , to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. 6.4. Downloading test plans in Cockpit from Red Hat certification portal For Non-authorized or limited access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Test Plans tab. A list of Recent Certification Support Cases will appear. Click Download Test Plan . 
A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under the File Name of the Test Plan Files section. 6.5. Using the test plan to prepare the host under test for testing Provisioning the host under test performs a number of operations, such as setting up passwordless SSH communication with the cockpit, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware packages are installed if the test plan is designed for certifying a hardware product. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select Host under test and click Submit . By default, the file is uploaded to path, /var/rhcert/plans/<testplanfile.xml> . 6.6. Using the test plan to prepare the test server for testing Running the Provision Host command enables and starts the rhcertd service, which configures services specified in the test suite on the test server, such as iperf for network testing, and an nfs mount point used in kdump testing. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select Test server and click Submit . By default, the file is uploaded to the /var/rhcert/plans/<testplanfile.xml> path. 6.7. Running the certification tests using Cockpit Prerequisites You have prepared the host under test . You have prepared the test server . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab and click on the host on which you want to run the tests. 
Click the Terminal tab and select Run. A list of recommended tests, based on the uploaded test plan, is displayed. The final test plan to run is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . 6.8. Reviewing and downloading the test results file Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Result Files tab to view the test results generated. Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/hostname-date-time.xml . 6.9. Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan is then uploaded to the Red Hat Certification portal. 6.10. Uploading the test results file to Red Hat Certification Tool Use the Red Hat Certification Tool to submit the test results file of the executed test plan to the Red Hat Certification team. Prerequisites You have downloaded the test results file from Cockpit or the HUT. Procedure Log in to Red Hat Certification Tool . On the homepage, enter the product case number in the search bar. Select the case number from the list that is displayed. On the Summary tab, under the Files section, click Upload . Next steps Red Hat reviews the results file you submitted and suggests the next steps. For more information, visit Red Hat Certification Tool . | [
"yum install redhat-certification-cockpit"
] | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_workflow/assembly_configuring-the-hosts-and-running-tests-by-using-Cockpit_cloud-instance-wf-setting-test-environment |
Release Notes for AMQ Streams 2.5 on OpenShift | Release Notes for AMQ Streams 2.5 on OpenShift Red Hat Streams for Apache Kafka 2.5 Highlights of what's new and what's changed with this release of AMQ Streams on OpenShift Container Platform | [
"env: - name: STRIMZI_FEATURE_GATES value: +KafkaNodePools",
"env: - name: STRIMZI_FEATURE_GATES value: +UnidirectionalTopicOperator",
"./report.sh --namespace=my-amq-streams-namespace --cluster=my-kafka-cluster --bridge=my-bridge-component --secrets=all --out-dir=~/reports",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: keycloak tokenEndpointUri: <https://<auth_server_-_address>/auth/realms/external/protocol/openid-connect/token> clientId: kafka # grantsMaxIdleSeconds: 300 grantsGcPeriodSeconds: 300 grantsAlwaysLatest: false #",
"- name: external port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: <https://<auth-server-address>/auth/realms/external> introspectionEndpointUri: <https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token/introspect> clientId: kafka-broker clientSecret: secretName: my-cluster-oauth key: clientSecret userNameClaim: \"['user.info'].['user.id']\" 1 maxSecondsWithoutReauthentication: 3600 fallbackUserNameClaim: \"['client.info'].['client.id']\" 2 fallbackUserNamePrefix: client-account- #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafkaExporter: image: my-registry.io/my-org/my-exporter-cluster:latest groupRegex: \".*\" topicRegex: \".*\" groupExcludeRegex: \"^excluded-.*\" topicExcludeRegex: \"^excluded-.*\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback client.quota.callback.static.produce: 1000000 client.quota.callback.static.fetch: 1000000 client.quota.callback.static.storage.soft: 400000000000 client.quota.callback.static.storage.hard: 500000000000 client.quota.callback.static.storage.check-interval: 5",
"env: - name: STRIMZI_FEATURE_GATES value: +StableConnectIdentities",
"env: - name: STRIMZI_FEATURE_GATES value: +UseKRaft,+KafkaNodePools",
"authorization: type: simple acls: - resource: type: topic name: my-topic operations: - Read - Describe - Create - Write",
"template: serviceAccount: metadata: annotations: openshift.io/internal-registry-pull-secret-ref: my-cluster-entity-operator-dockercfg-qxwxd",
"cluster-info Kubernetes master is running at <master_address>",
"get subs -n <operator_namespace>",
"edit sub amq-streams -n <operator_namespace>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-streams namespace: <operator_namespace> spec: channel: amq-streams-1.8.x installPlanApproval: Automatic name: amq-streams source: mirror-amq-streams sourceNamespace: openshift-marketplace config: env: - name: KUBERNETES_MASTER value: MASTER-ADDRESS",
"get sub amq-streams -n <operator_namespace>",
"get deployment <cluster_operator_deployment_name>",
"get subs -n <operator_namespace>",
"edit sub amq-streams -n <operator_namespace>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-streams namespace: <operator_namespace> spec: channel: amq-streams-1.8.x installPlanApproval: Automatic name: amq-streams source: mirror-amq-streams sourceNamespace: openshift-marketplace config: env: - name: KUBERNETES_DISABLE_HOSTNAME_VERIFICATION value: \"true\"",
"get sub amq-streams -n <operator_namespace>",
"get deployment <cluster_operator_deployment_name>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.PotentialNwOutGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.TopicReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderBytesInDistributionGoal"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html-single/release_notes_for_amq_streams_2.5_on_openshift/%7Bsupported-configurations%7D |
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these versions remain similar to Oracle JDK versions that Oracle designates as long-term support (LTS). A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not include RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.7/rn-openjdk-temurin-support-policy
multicluster engine operator with Red Hat Advanced Cluster Management | multicluster engine operator with Red Hat Advanced Cluster Management Red Hat Advanced Cluster Management for Kubernetes 2.12 multicluster engine operator with Red Hat Advanced Cluster Management integration | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/index |
Chapter 3. Securing Users of the Server and Its Management Interfaces | Chapter 3. Securing Users of the Server and Its Management Interfaces 3.1. User Authentication with Elytron 3.1.1. Default Configuration By default, the JBoss EAP management interfaces are secured by the legacy core management authentication. Example: Default Configuration JBoss EAP does provide management-http-authentication and management-sasl-authentication in the elytron subsystem for securing the management interfaces as well. To update JBoss EAP to use the default Elytron components: Set http-authentication-factory to use management-http-authentication : Set sasl-authentication-factory to use management-sasl-authentication : Undefine security-realm : Reload JBoss EAP for the changes to take affect: The management interfaces are now secured using the default components provided by the elytron subsystem. 3.1.1.1. Default Elytron HTTP Authentication Configuration When you access the management interface over http, for example when using the web-based management console, JBoss EAP will use the management-http-authentication http-authentication-factory. The management-http-authentication http-authentication-factory, is configured to use the ManagementDomain security domain. The ManagementDomain security domain is backed by the ManagementRealm Elytron security realm, which is a properties-based realm. Important A properties-based realm is only read when the server starts. Any users added after server start, either manually or by using an add-user script, will require a server reload. This reload is accomplished by running the reload command from the management CLI. 3.1.1.2. Default Elytron Management CLI Authentication By default, the management CLI ( jboss-cli.sh ) is configured to connect over remote+http . Example: Default jboss-cli.xml <jboss-cli xmlns="urn:jboss:cli:3.1"> <default-protocol use-legacy-override="true">remote+http</default-protocol> <!-- The default controller to connect to when 'connect' command is executed w/o arguments --> <default-controller> <protocol>remote+http</protocol> <host>localhost</host> <port>9990</port> </default-controller> This will establish a connection over HTTP and use HTTP upgrade to change the communication protocol to Remoting . The HTTP upgrade connection is secured in the http-upgrade section of the http-interface using a sasl-authentication-factory . Example: Configuration with Default Components The default sasl-authentication-factory is management-sasl-authentication . The management-sasl-authentication sasl-authentication-factory specifies JBOSS-LOCAL-USER and DIGEST-MD5 mechanisms. The ManagementRealm Elytron security realm, used in DIGEST-MD5 , is the same realm used in the management-http-authentication http-authentication-factory. Example: JBOSS-LOCAL-USER Realm The local Elytron security realm is for handling silent authentication for local users. 3.1.2. Secure the Management Interfaces with a New Identity Store Create a security domain and any supporting security realms, decoders, or mappers for your identity store. This process is covered in the Elytron Subsystem section of JBoss EAP How to Configure Identity Management Guide . For example, if you wanted to secure the management interfaces using a filesystem-based identity store, you would follow the steps in Configure Authentication with a Filesystem-based Identity Store . Create an http-authentication-factory or sasl-authentication-factory . 
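A minimal sketch of the management CLI commands for this step is shown below. It assumes a security domain named exampleSD backed by a realm named exampleManagementRealm, as in the filesystem-based example referenced above; the factory names example-http-auth and example-sasl-auth, the DIGEST and DIGEST-MD5 mechanisms, and the use of the default global and configured server factories are illustrative choices rather than requirements.

/subsystem=elytron/http-authentication-factory=example-http-auth:add(http-server-mechanism-factory=global, security-domain=exampleSD, mechanism-configurations=[{mechanism-name=DIGEST, mechanism-realm-configurations=[{realm-name=exampleManagementRealm}]}])

/subsystem=elytron/sasl-authentication-factory=example-sasl-auth:add(sasl-server-factory=configured, security-domain=exampleSD, mechanism-configurations=[{mechanism-name=DIGEST-MD5, mechanism-realm-configurations=[{realm-name=exampleManagementRealm}]}])

Whichever factory you create, the name you choose here is the value you reference later when updating the http-interface resource in the final step of this procedure.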
Example: http-authentication-factory Example: sasl-authentication-factory Add pattern-filter to the configured configurable-sasl-server-factory . Example: Add GSSAPI to the Configured configurable-sasl-server-factory This is an optional step. When a client attempts to connect to the HTTP management interfaces, JBoss EAP sends back an HTTP response with a status code of 401 Unauthorized , and a set of headers that list the supported authentication mechanisms, for example, Digest, GSSAPI, and so on. For more information, see the Local and Remote Client Authentication with the HTTP Interface section in the JBoss EAP Security Architecture guide. Update the management interfaces to use your http-authentication-factory or sasl-authentication-factory . Example: Update http-authentication-factory Example: Update sasl-authentication-factory Note When using legacy core management authentication, you can only secure the http management interface with a single legacy security realm. This forces the HTTP and SASL configuration to appear in a single legacy security realm. When using the elytron subsystem, you can configure the http-authentication-factory and sasl-authentication-factory separately, allowing you to use distinct security domains for securing the HTTP and SASL mechanisms of the http management interface. Note If two different attributes with similar implementation in legacy security and Elytron, respectively, are configured in the management interface, only the Elytron related configurations are used. For example, if security-realm for legacy security and http-authentication-factory for Elytron are configured, then authentication is handled by http-authentication-factory configuration. Note When the management interface includes both http-authentication-factory , or sasl-authentication-factory for the HTTP interface, as well as the security-realm , and the ssl-context attribute is not used, the authentication is handled by Elytron and the SSL is handled by the legacy security realm. When the management interface includes both the security-realm and the ssl-context , and the http-authentication-factory or sasl-authentication-factory for the HTTP interface is not used, then authentication is handled by the legacy security realm and SSL is handled by Elytron. 3.1.3. Adding Silent Authentication By default, JBoss EAP provides an authentication mechanism for local users, also know as silent authentication, through the local security realm. You can find more details see Silent authentication section. Silent authentication must be added to a sasl-authentication-factory . To add silent authentication to an existing sasl-authentication-factory : To create a new sasl-server-factory with silent authentication: Note The above example uses the existing ManagementDomain security domain, but you can also create and use other security domains. You can find more examples of creating security domains in the Elytron Subsystem section of the JBoss EAP How to Configure Identity Management Guide . Important If the Elytron security is used and an authentication attempt comes in using the JBOSS-LOCAL-USER SASL mechanism with an authentication name that does not correspond to a real identity, authentication fails. Choosing a custom user name for JBOSS-LOCAL-USER is possible with legacy security subsystem. There the authentication proceeds by mapping the user name to a special identity. 3.1.4. 
Mapping Identity for Authenticated Management Users When using the elytron subsystem to secure the management interfaces, you can provide a security domain to the management interfaces for identity mapping of authenticated users. This allows authenticated users to appear with the appropriate identity when logged into the management interfaces. The application server exposes more than one kind of management interface. Each type of interface can be associated with an independent authentication-factory to handle the authentication requirements of that interface. To make the authorization decision, the current security identity is obtained from the security domain. The returned security identity has the role mapping and permission assignment, based on the rules defined within that security domain. Note In most cases, a common security domain is used for all management; for authentication of the management interfaces as well as for obtaining the security identity used for the authorization decisions. In these cases, the security domain is associated with the authentication factory of the management interface and no special access=identity needs to be defined. In some cases, a different security domain is used to obtain the identity for the authorization decisions. Here, the access=identity resource is defined. It contains a reference to a security domain to obtain the identity for authorization. The below example assumes you have secured the management interfaces with the exampleSD Elytron security domain and have it exposed as exampleManagementRealm . To define the identity mapping, add the identity resource to the management interfaces. Example: Add the identity Resource Once you have added the identity resource, the identity of an authenticated user will appear when accessing the management interfaces. When the identity resource is not added, then the identity of the security domain used for authentication is used. For example, if you logged into the management CLI as user1 , your identity will properly appear. Example: Display the Identity of an Authenticated User from the Management CLI Important If the identity resource is added and legacy security realms are used to secure the management interfaces, authenticated users will always have the anonymous identity. Once the identity resource is removed, users authenticated from the legacy security realms will appear with the appropriate identity. Authorization for management operation always uses the security domain, which is the domain specified on access=identity . If not specified, it is the domain used for authentication. Any role mapping is always in the context of the security domain. The identity resource for the current request will return a set of roles as mapped using the Elytron configuration. When an RBAC based role mapping definition is in use, the roles from the identity resource will be taken as groups and fed into the management RoleMapping to obtain the management roles for the current request. Table 3.1. Identity to be Used for Different Scenarios Scenario No access=identity definition access=identity referencing an Elytron security-domain HTTP management interface using legacy security-realm Identity from connection. Unsupported or anonymous identity. HTTP management interface using elytron HTTP authentication factory backed by security-domain Identity from connection. Identity from referenced security-domain if it was successfully inflowed. 
Native management, including over HTTP Upgrade, interface using legacy security-realm Identity from connection. Unsupported or anonymous identity. Native management, including over HTTP Upgrade, interface using elytron SASL authentication factory backed by security-domain Identity from connection. Identity from referenced security-domain if it was successfully inflowed. Note If security domain used in the identity resource does not trust the security domain from authentication, anonymous identity is used. The security domain used in the identity resource does not need to trust the security domain from authentication, when both are using an identical security realm. The trusted security domains is not transitive. Where no access=identity resource is defined, then the identity established during authentication against the management interface will be used. Identities established using connections, through the remoting subsystem or using applications, will not be usable in this case. Where an access=identity resource is defined but the security domain used by the management interfaces is different and not listed in the list of domains to inflow from, no identity will be established. An inflow will be attempted using the identity established during authentication. Identities established using connections through the remoting subsystem or using applications will not be inflowed in this way. Important Where the management interfaces are secured using the legacy security realms, the identity will not be sharable across different security domains. In that case no access=identity resource should be defined. So the identity established during authentication can be used directly. Thus, applications secured using PicketBox are not supported for the identity resource. 3.1.5. Using Elytron Client with the Management CLI You can configure the management CLI to use Elytron Client for providing security information when connecting to JBoss EAP. Secure the management interfaces with Elytron. In order to use Elytron Client with the management CLI, you must secure the management interfaces with Elytron. You can find more details on securing the management interfaces with Elytron in User Authentication with Elytron . Create an Elytron Client configuration file. You need to create an Elytron Client configuration file that houses your authentication configuration as well as rules for using that configuration. You can find more details on creating an authentication configuration in the The Configuration File Approach section of the JBoss EAP How to Configure Identity Management Guide . Example: custom-config.xml <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <authentication-rules> <rule use-configuration="configuration1"> <match-host name="localhost" /> </rule> </authentication-rules> <authentication-configurations> <configuration name="configuration1"> <sasl-mechanism-selector selector="DIGEST-MD5" /> <providers> <use-service-loader /> </providers> <set-user-name name="user1" /> <credentials> <clear-password password="password123" /> </credentials> <set-mechanism-realm name="exampleManagementRealm" /> </configuration> </authentication-configurations> </authentication-client> </configuration> Use the Elytron Client configuration file with management CLI script. 3.2. Identity Propagation and Forwarding with Elytron 3.2.1. 
Propagating Security Identities for Remote Calls JBoss EAP 7.1 introduced the ability to easily configure the server and your applications to propagate a security identity from a client to the server for remoting calls. You can also configure server components to run within the security identity of a given user. The example in this section demonstrates how to forward security identity credentials. It propagates the security identity of a client and an Jakarta Enterprise Beans to a remote Jakarta Enterprise Beans. It returns a string containing the name of the Principal that called the remote Jakarta Enterprise Beans along with the user's authorized role information. The example consists of the following components. A secured Jakarta Enterprise Beans that contains a single method, accessible by all users, that returns authorization information about the caller. An intermediate Jakarta Enterprise Beans that contains a single method. It makes use of a remote connection and invokes the method on the secured Jakarta Enterprise Beans. A remote standalone client application that invokes the intermediate Jakarta Enterprise Beans. A META-INF/wildfly-config.xml file that contains the identity information used for authentication. You must first enable security identity propagation by configuring the server. review the example application code that uses the WildFlyInitialContextFactory to look up and invoke the remote Jakarta Enterprise Beans. Configure the Server for Security Propagation Configure the ejb3 subsystem to use the Elytron ApplicationDomain . This adds the following application-security-domain configuration to the ejb3 subsystem. <subsystem xmlns="urn:jboss:domain:ejb3:5.0"> .... <application-security-domains> <application-security-domain name="quickstart-domain" security-domain="ApplicationDomain"/> </application-security-domains> </subsystem> Add the PLAIN authentication configuration to send plain text user names and passwords, and the authentication context that is to be used for outbound connections. See Mechanisms That Support Security Identity Propagation for the list of mechanisms that support identity propagation. This adds the following authentication-client configuration to the elytron subsystem. <subsystem xmlns="urn:wildfly:elytron:4.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto"> <authentication-client> <authentication-configuration name="ejb-outbound-configuration" security-domain="ApplicationDomain" sasl-mechanism-selector="PLAIN"/> <authentication-context name="ejb-outbound-context"> <match-rule authentication-configuration="ejb-outbound-configuration"/> </authentication-context> </authentication-client> .... </subsystem> Add the remote destination outbound socket binding to the standard-sockets socket binding group. This adds the following ejb-outbound outbound socket binding to the standard-sockets socket binding group. <socket-binding-group name="standard-sockets" default-interface="public" port-offset="USD{jboss.socket.binding.port-offset:0}"> .... <outbound-socket-binding name="ejb-outbound"> <remote-destination host="localhost" port="8080"/> </outbound-socket-binding> </socket-binding-group> Add the remote outbound connection and set the SASL authentication factory in the HTTP connector. This adds the following http-remoting-connector and ejb-outbound-connection configuration to the remoting subsystem. <subsystem xmlns="urn:jboss:domain:remoting:4.0"> .... 
<http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm" sasl-authentication-factory="application-sasl-authentication"/> <outbound-connections> <remote-outbound-connection name="ejb-outbound-connection" outbound-socket-binding-ref="ejb-outbound" authentication-context="ejb-outbound-context"/> </outbound-connections> </subsystem> Configure the Elytron SASL authentication to use the PLAIN mechanism. This adds the following application-sasl-authentication configuration to the elytron subsystem. <subsystem xmlns="urn:wildfly:elytron:4.0" final-providers="combined-providers" disallowed-providers="OracleUcrypto"> .... <sasl> .... <sasl-authentication-factory name="application-sasl-authentication" sasl-server-factory="configured" security-domain="ApplicationDomain"> <mechanism-configuration> <mechanism mechanism-name="PLAIN"/> <mechanism mechanism-name="JBOSS-LOCAL-USER" realm-mapper="local"/> <mechanism mechanism-name="DIGEST-MD5"> <mechanism-realm realm-name="ApplicationRealm"/> </mechanism> </mechanism-configuration> </sasl-authentication-factory> </sasl> .... </subsystem> The server is now configured to enable security propagation for the following example application. Review the Example Application Code That Propagates a Security Identity Once security identity propagation is enabled in the server configuration, the Jakarta Enterprise Beans client application can use the WildFlyInitialContextFactory to look up and invoke the Jakarta Enterprise Beans proxy. The Jakarta Enterprise Beans is invoked as the user that authenticated in the client example shown below. The following abbreviated code examples are taken from the ejb-security-context-propagation quickstart that ships with JBoss EAP 7.4. See that quickstart for a complete working example of security identity propagation. To invoke the Jakarta Enterprise Beans as a different user, you can set the Context.SECURITY_PRINCIPAL and Context.SECURITY_CREDENTIALS in the context properties. Example: Remote Client public class RemoteClient { public static void main(String[] args) throws Exception { // invoke the intermediate bean using the identity configured in wildfly-config.xml invokeIntermediateBean(); // now lets programmatically setup an authentication context to switch users before invoking the intermediate bean AuthenticationConfiguration superUser = AuthenticationConfiguration.empty().setSaslMechanismSelector(SaslMechanismSelector.NONE.addMechanism("PLAIN")). useName("superUser").usePassword("superPwd1!"); final AuthenticationContext authCtx = AuthenticationContext.empty(). with(MatchRule.ALL, superUser); AuthenticationContext.getContextManager().setThreadDefault(authCtx); invokeIntermediateBean(); } private static void invokeIntermediateBean() throws Exception { final Hashtable<String, String> jndiProperties = new Hashtable<>(); jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory"); jndiProperties.put(Context.PROVIDER_URL, "remote+http://localhost:8080"); final Context context = new InitialContext(jndiProperties); IntermediateEJBRemote intermediate = (IntermediateEJBRemote) context.lookup("ejb:/ejb-security-context-propagation/IntermediateEJB!" 
+ IntermediateEJBRemote.class.getName()); // Call the intermediate EJB System.out.println(intermediate.makeRemoteCalls()); } } Example: Intermediate Jakarta Enterprise Beans @Stateless @Remote(IntermediateEJBRemote.class) @SecurityDomain("quickstart-domain") @PermitAll public class IntermediateEJB implements IntermediateEJBRemote { @EJB(lookup="ejb:/ejb-security-context-propagation/SecuredEJB!org.jboss.as.quickstarts.ejb_security_context_propagation.SecuredEJBRemote") private SecuredEJBRemote remote; @Resource private EJBContext context; public String makeRemoteCalls() { try { StringBuilder sb = new StringBuilder("** "). append(context.getCallerPrincipal()). append(" * * \n\n"); sb.append("Remote Security Information: "). append(remote.getSecurityInformation()). append("\n"); return sb.toString(); } catch (Exception e) { if (e instanceof RuntimeException) { throw (RuntimeException) e; } throw new RuntimeException("Teasting failed.", e); } } } Example: Secured Jakarta Enterprise Beans @Stateless @Remote(SecuredEJBRemote.class) @SecurityDomain("quickstart-domain") public class SecuredEJB implements SecuredEJBRemote { @Resource private SessionContext context; @PermitAll public String getSecurityInformation() { StringBuilder sb = new StringBuilder("["); sb.append("Principal=["). append(context.getCallerPrincipal().getName()). append("], "); userInRole("guest", sb).append(", "); userInRole("user", sb).append(", "); userInRole("admin", sb).append("]"); return sb.toString(); } } Example: wildfly-config.xml File <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <authentication-rules> <rule use-configuration="default"/> </authentication-rules> <authentication-configurations> <configuration name="default"> <set-user-name name="quickstartUser"/> <credentials> <clear-password password="quickstartPwd1!"/> </credentials> <sasl-mechanism-selector selector="PLAIN"/> <providers> <use-service-loader /> </providers> </configuration> </authentication-configurations> </authentication-client> </configuration> 3.2.2. Utilizing Authorization Forwarding Mode In addition to credential forwarding, Elytron supports the trusted use of identities between peers. This can be useful in the following cases. Requirements are such that you cannot send passwords over the wire. The authentication type is one that does not support credential forwarding . The environment requires a need to limit which systems are allowed to receive the propagated requests. To utilize authorization forwarding, you first configure an authentication client on the forwarding server and then configure the receiving server to accept and handle the authorization . Configure the Authentication Client on the Forwarding Server To enable authorization forwarding, you must configure an authentication client configuration in the forwarding server configuration. The following management CLI commands create a default authentication client configuration to enable authentication forwarding. You can configure a more advanced rule based selection if you need one. Example: Management CLI Command to Create the Authentication Client Configuration These commands add the following authentication-configuration and authentication-context configuration to the elytron subsystem. 
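The following sketch shows what the referenced management CLI commands typically look like. It is reconstructed from the resulting configuration shown next, so treat the exact attribute grouping as indicative rather than authoritative and adjust the names and password to your environment.

/subsystem=elytron/authentication-configuration=forwardit:add(authentication-name=theserver1, security-domain=ApplicationDomain, realm=ApplicationRealm, forwarding-mode=authorization, credential-reference={clear-text=thereallysecretpassword})

/subsystem=elytron/authentication-context=forwardctx:add(match-rules=[{match-no-user=true, authentication-configuration=forwardit}])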
Example: Authentication Client Configuration <authentication-client> <authentication-configuration name="forwardit" authentication-name="theserver1" security-domain="ApplicationDomain" forwarding-mode="authorization" realm="ApplicationRealm"> <credential-reference clear-text="thereallysecretpassword"/> </authentication-configuration> <authentication-context name="forwardctx"> <match-rule match-no-user="true" authentication-configuration="forwardit"/> </authentication-context> </authentication-client> When the forwarding server contacts the receiving server, instead of using the default authentication-based user name and credentials, it uses the predefined server login name theserver1 to establish the trust relationship. Configure the Authorization Forwarding on the Receiving Server For the forwarding to complete successfully, the receiving server configuration needs to be configured with the identity matching the one passed by the forwarding server. In this case, you must configure a user named theserver1 on the receiving server with the correct credentials. You must also configure a "RunAs" permission mapping in the elytron subsystem to allow the identity switch for the theserver1 identity that is passed from the forwarding server. For more information about permission mapping, see Create an Elytron Permission Mapper in How to Configure Server Security for JBoss EAP. The command below adds a simple-permission-mapper named auth-forwarding-permission-mapper that includes the following configurations. A permission mapping for the user anonymous . This user has no permissions, which prevents an anonymous user from being able to log in. A permission mapping for the user theserver1 . This user is assigned the RunAsPrincipalPermission permission of * , which gives this user global permissions to run as any identity. You can restrict the permission to a specific identity if you prefer. A permission mapping for all other users. Example: Management CLI Command to the Create Simple Permission Mapper This command adds the following simple-permission-mapper configuration to the elytron subsystem. Example: Simple Permission Mapper Configuration <mappers> <simple-permission-mapper name="auth-forwarding-permission-mapper"> <permission-mapping> <principal name="anonymous"/> <!-- No permissions: Deny any permission to anonymous! 
--> </permission-mapping> <permission-mapping> <principal name="theserver1"/> <permission-set name="login-permission"/> <permission-set name="default-permissions"/> <permission-set name="run-as-principal-permission"/> </permission-mapping> <permission-mapping match-all="true"> <permission-set name="login-permission"/> <permission-set name="default-permissions"/> </permission-mapping> </simple-permission-mapper> </mappers> <permission-sets> <permission-set name="login-permission"> <permission class-name="org.wildfly.security.auth.permission.LoginPermission"/> </permission-set> <permission-set name="default-permissions"> <permission class-name="org.wildfly.extension.batch.jberet.deployment.BatchPermission" module="org.wildfly.extension.batch.jberet" target-name="*"/> <permission class-name="org.wildfly.transaction.client.RemoteTransactionPermission" module="org.wildfly.transaction.client"/> <permission class-name="org.jboss.ejb.client.RemoteEJBPermission" module="org.jboss.ejb-client"/> </permission-set> <permission-set name="run-as-principal-permission"> <permission class-name="org.wildfly.security.auth.permission.RunAsPrincipalPermission" target-name="*"/> </permission-set> </permission-sets> Note The login-permission and default-permissions permission sets are already present in the default configuration. In cases where principal transformers are used after forwarding authorization, then those transformers are applied on both the authentication and the authorization principals. 3.2.3. Creating a case-principal-transformer to change the case characters of your principal username The elytron subsystem includes the case-principal-transformer principal transformer. You can use this principal transformer to change a principal's username to either uppercase or lowercase characters. The case-principal-transformer principal transformer includes the upper-case attribute that is set as true by default. To demonstrate a use case for case-principal-transformer , consider that you are using an authentication mechanism to map a principal to a security realm. A realm mapper manipulates the mapped principal to identify a security realm and load one of its identities. The authentication mechanism passes the identity to a post-realm mapping stage and to a final principal transformation stage. Subsequently, the authentication mechanism verifies the identity for authentication purposes. You can use a case-principal-transformer principal transformer to convert the character case format of your mapped principal. The example in the procedure uses the case-principal-transformer in the context of a security domain. You can also use the principal transformer inline with the following authentication policies: http-authentication-factory sasl-authentication-factory ssl-context aggregate-realm Procedure Add the case-principal-transformer to the elytron subsystem, and choose the character case for the username. To change the username of a transformer to uppercase characters, do not change the default upper-case attribute value. Example showing <transformer_name> added to the elytron subsystem with the default upper-case attribute setting defined: Alternatively, you can truncate the command syntax to use the default upper-case attribute value: To change the username of a transformer to lowercase characters, set the upper-case attribute to false . 
Example showing <transformer_name> added to the elytron subsystem with the upper-case attribute set to false : Optional: Use the elytron subsystem to configure your principal transformer. The following example configured a principal transformer to the default ApplicationDomain configuration that was provided by the elytron subsystem. Elytron applies the default ApplicationDomain configuration to a pre-realm-principal-transformer : Note You can configure a post-realm-principal-transformer to use the ApplicationDomain configuration after a security realm has been identified by a realm mapper. Additional resources For information about the upper-case attribute, see Table A.26 case-principal-transformer attributes . 3.2.4. Retrieving Security Identity Credentials There might be situations where you need to retrieve identity credentials for use in outgoing calls, for example, by an HTTP client. The following example demonstrates how to retrieve security credentials programmatically. import org.wildfly.security.auth.server.IdentityCredentials; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.credential.PasswordCredential; import org.wildfly.security.password.interfaces.ClearPassword; SecurityIdentity securityIdentity = null; ClearPassword password = null; // Obtain the SecurityDomain for the current deployment. // The calling code requires the // org.wildfly.security.permission.ElytronPermission("getSecurityDomain") permission // if running with a security manager. SecurityDomain securityDomain = SecurityDomain.getCurrent(); if (securityDomain != null) { // Obtain the current security identity from the security domain. // This always returns an identity, but it could be the representation // of the anonymous identity if no authenticated identity is available. securityIdentity = securityDomain.getCurrentSecurityIdentity(); // The private credentials can be accessed to obtain any credentials delegated to the identity. // The calling code requires the // org.wildfly.security.permission.ElytronPermission("getPrivateCredentials") // permission if running with a security manager. IdentityCredentials credentials = securityIdentity.getPrivateCredentials(); if (credentials.contains(PasswordCredential.class)) { password = credentials.getCredential(PasswordCredential.class).getPassword(ClearPassword.class); } } 3.2.5. Mechanisms That Support Security Identity Propagation The following SASL mechanisms support propagation of security identities: PLAIN OAUTHBEARER GSSAPI GS2-KRB5 The following HTTP mechanisms support propagation of security identities: FORM 1 BASIC BEARER_TOKEN SPNEGO 1 FORM authentication is not automatically handled by the web browser. For this reason, you cannot use identity propagation with web applications that use FORM authentication when running in an HA cluster. Other mechanisms, such as BASIC and SPNEGO , support identity propagation in an HA cluster environment. 3.3. Identity Switching with Elytron 3.3.1. Switching Identities in Server-to-server Jakarta Enterprise Beans Calls By default, when you make a remote call to a Jakarta Enterprise Beans deployed to an application server, the identity used for authentication on the remote server is the same one that was used on the source server. In some cases, you might want to run the remote secured Jakarta Enterprise Beans within the security context of a different identity. 
You can use the Elytron API to switch identities in server-to-server Jakarta Enterprise Beans calls. When you do that, the request received over the connection is executed as a new request, using the identity specified programmatically in the API call. The following code example demonstrates how to switch the identity that is used for authentication on a remote Jakarta Enterprise Beans. The remoteUsername and remotePassword arguments passed in the securityDomain.authenticate() method are the identity credentials that are to be used for authentication on the target server. Example: Switching Identities in Server-to-server Jakarta Enterprise Beans Calls SecurityDomain securityDomain = SecurityDomain.getCurrent(); Callable<T> forwardIdentityCallable = () -> { return AuthenticationContext.empty() .with(MatchRule.ALL, AuthenticationConfiguration.empty() .setSaslMechanismSelector(SaslMechanismSelector.ALL) .useForwardedIdentity(securityDomain)) .runCallable(callable); }; securityDomain.authenticate(remoteUsername, new PasswordGuessEvidence(remotePassword.toCharArray())).runAs(forwardIdentityCallable); 3.4. User Authentication with Legacy Core Management Authentication 3.4.1. Default User Configuration All management interfaces in JBoss EAP are secured by default and users can access them in two different ways: local interfaces and remote interfaces. The basics of both of these authentication mechanisms are covered in the Default Security and JBoss EAP Out of the Box sections of the JBoss EAP Security Architecture guide. By default, access to these interfaces is configured in the Management Realm security realm. Initially, the local interface is enabled and requires access to the host machine running the JBoss EAP instance. Remote access is also enabled and is configured to use a file-based identity store. By default it uses mgmt-users.properties file to store user names and passwords, and mgmt-groups.properties to store user group information. User information is added to these files by using the included adduser script located in the EAP_HOME /bin/ directory. To add a user via the adduser script: Run the add-user.sh or add-user.bat command. Choose whether to add a management user or application user. Choose the realm the user will be added to. By default, the only available realms are ManagementRealm and ApplicationRealm . If a custom realm has been added, its name can be manually entered instead. Type the desired user name, password, and optional roles when prompted. The changes are written to each of the properties files for the security realm. 3.4.2. Adding Authentication via LDAP JBoss EAP also supports using LDAP authentication for securing the management interfaces. The basics of LDAP and how it works with JBoss EAP are covered in the LDAP , Using LDAP with the Management Interfaces , and Using LDAP with the ManagementRealm sections of the Red Hat JBoss Enterprise Application Platform 7 Security Architecture guide. For more specifics on how to secure the management interfaces using LDAP authentication, see the Securing the Management Interfaces with LDAP section of the JBoss EAP How to Configure Identity Management Guide . 3.4.3. Using JAAS for Securing the Management Interfaces JAAS is a declarative security API used by JBoss EAP to manage security. For more details and background regarding JAAS and declarative security, see the Declarative Security and JAAS section of the Red Hat JBoss Enterprise Application Platform Security Architecture guide. 
Note When JBoss EAP instances are configured to run in ADMIN_ONLY mode, using JAAS to secure the management interfaces is not supported. For more information on ADMIN_ONLY mode, see the Running JBoss EAP in ADMIN_ONLY Mode section of the JBoss EAP Configuration Guide . To use JAAS to authenticate to the management interfaces, the following steps must be performed: Create a security domain. In this example, a security domain is created with the UserRoles login module, but other login modules may be used as well: Create a security realm with JAAS authentication. Update the http-interface management interface to use the new security realm. Optional: Assign group membership. The attribute assign-groups determines whether loaded user membership information from the security domain is used for group assignment in the security realm. When set to true , this group assignment is used for Role-Based Access Control (RBAC). 3.5. Role-Based Access Control The basics of Role-Based Access Control are covered in the Role-Based Access Control and Adding RBAC to the Management Interfaces sections of the JBoss EAP Security Architecture guide. 3.5.1. Enabling Role-Based Access Control By default the Role-Based Access Control (RBAC) system is disabled. It is enabled by changing the provider attribute from simple to rbac . provider is an attribute of the access-control element of the management element. This can be done using the management CLI or by editing the server configuration XML file if the server is offline. When RBAC is disabled or enabled on a running server, the server configuration must be reloaded before it takes effect. Warning Before changing the provider to rbac , be sure your configuration has a user who will be mapped to one of the RBAC roles, preferably with at least one in the Administrator or SuperUser role. Otherwise your installation will not be manageable except by shutting it down and editing the XML configuration. If you have started with one of the standard XML configurations shipped with JBoss EAP, the $local user will be mapped to the SuperUser role and the local authentication scheme will be enabled. This will allow a user, running the CLI on the same system as the JBoss EAP process, to have full administrative permissions. Remote CLI users and web-based management console users will have no permissions. It is recommended to map at least one user, besides $local , before switching the provider to rbac . You can do all of the configuration associated with the rbac provider even when the provider is set to simple . Once enabled, it can only be disabled by a user of the Administrator or SuperUser roles. By default the management CLI runs as the SuperUser role if it is run on the same machine as the server. CLI to Enable RBAC To enable RBAC with the management CLI, use the write-attribute operation of the access authorization resource to set the provider attribute to rbac . In a managed domain, the access control configuration is part of the domain wide configuration, so the resource address is the same as above, but the management CLI is connected to the master domain controller. Note As with a standalone server, a reload or restart is required for the change to take effect. In a managed domain, all hosts and servers in the domain will need to be reloaded or restarted, starting with the master domain controller.
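As a concrete illustration of the enable step described above, on a standalone server it amounts to a single attribute write followed by a reload (a sketch; in a managed domain the same write-attribute call is run against the master domain controller):
/core-service=management/access=authorization:write-attribute(name=provider, value=rbac)
reload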
Management CLI Command to Disable RBAC To disable RBAC with the management CLI, use the write-attribute operation of the access authorization resource to set the provider attribute to simple . XML Configuration to Enable or Disable RBAC If the server is offline the XML configuration can be edited to enable or disable RBAC. To do this, edit the provider attribute of the access-control element of the management element. Set the value to rbac to enable, and simple to disable. Example: XML Configuration to Enable or Disable RBAC <management> <access-control provider="rbac"> <role-mapping> <role name="SuperUser"> <include> <user name="$local"/> </include> </role> </role-mapping> </access-control> </management> 3.5.2. Changing the Permission Combination Policy The Permission Combination Policy determines how permissions are combined if a user is assigned more than one role. This can be set to permissive or rejecting . The default is permissive . When set to permissive , if any role is assigned to the user that permits an action, then the action is allowed. When set to rejecting , if multiple roles are assigned to a user, then no action is allowed. This means that when the policy is set to rejecting , each user should only be assigned one role. Users with multiple roles will not be able to use the management console or the management CLI when the policy is set to rejecting . The Permission Combination Policy is configured by setting the permission-combination-policy attribute to either permissive or rejecting . This can be done using the management CLI or by editing the server configuration XML file if the server is offline. The permission-combination-policy attribute is part of the access-control element and the access-control element can be found in the management element. Setting the Permission Combination Policy Use the write-attribute operation of the access authorization resource to set the permission-combination-policy attribute to the required policy name. The valid policy names are rejecting and permissive . Example: Management CLI Command for Rejecting Permission Combination Policy If the server is offline the XML configuration can be edited to change the permission combination policy value. To do this, edit the permission-combination-policy attribute of the access-control element. Example: XML Configuration for Rejecting Permission Combination Policy <access-control provider="rbac" permission-combination-policy="rejecting"> <role-mapping> <role name="SuperUser"> <include> <user name="$local"/> </include> </role> </role-mapping> </access-control> 3.5.3. Managing Roles When Role-Based Access Control (RBAC) is enabled, what a management user is permitted to do is determined by the roles to which the user is assigned. JBoss EAP 7 uses a system of includes and excludes based on both the user and group membership to determine to which role a user belongs. A user is considered to be assigned to a role if the user is: listed as a user to be included in the role, or a member of a group that is listed to be included in the role. A user is not assigned to a role if the user is: listed as a user to exclude from the role, or a member of a group that is listed to be excluded from the role. Exclusions take priority over inclusions. Role include and exclude settings for users and groups can be configured using both the management console and the management CLI. Only users of the SuperUser or Administrator roles can perform this configuration. 3.5.3.1.
Configure User Role Assignment Using the Management CLI The configuration of mapping users and groups to roles is located at: /core-service=management/access=authorization as role-mapping elements. Only users of the SuperUser or Administrator roles can perform this configuration. Viewing Role Assignment Configuration Use the read-children-names operation to get a complete list of the configured roles: Use the read-resource operation of a specified role-mapping to get the full details of a specific role: Add a New Role This procedure shows how to add a role-mapping entry for a role. This must be done before the role can be configured. Use the add operation to add a new role configuration. ROLENAME is the name of the role that the new mapping is for, such as Auditor . Example: Management CLI Command for New Role Configuration Add a User as Included in a Role This procedure shows how to add a user to the included list of a role. If no configuration for a role has been done, then a role-mapping entry for it must be created first. Use the add operation to add a user entry to the includes list of the role. ROLENAME is the name of the role being configured, such as Auditor . ALIAS is a unique name for this mapping. Red Hat recommends the use of a naming convention for aliases, such as user- USERNAME (for example, user-max ). USERNAME is the name of the user being added to the include list, such as max . Example: Management CLI Command for User Included in a Role Add a User as Excluded in a Role This procedure shows how to add a user to the excluded list of a role. If no configuration for a role has been done, then a role-mapping entry for it must be created first. Use the add operation to add a user entry to the excludes list of the role. ROLENAME is the name of the role being configured, for example Auditor . USERNAME is the name of the user being added to the exclude list, for example max . ALIAS is a unique name for this mapping. Red Hat recommends the use of a naming convention for aliases, such as user- USERNAME (for example, user-max ). Example: Management CLI Command for User Excluded in a Role Remove User Role Include Configuration This procedure shows how to remove a user include entry from a role mapping. Use the remove operation to remove the entry. ROLENAME is the name of the role being configured, such as Auditor . ALIAS is a unique name for this mapping. Red Hat recommends the use of a naming convention for aliases, such as user- USERNAME (for example, user-max ). Example: Management CLI Command for Removing User Role Include Configuration Note Removing the user from the list of includes does not remove the user from the system, nor does it guarantee that the role will not be assigned to the user. The role might still be assigned based on group membership. Remove User Role Exclude Configuration This procedure shows how to remove a user exclude entry from a role mapping. Use the remove operation to remove the entry. ROLENAME is the name of the role being configured, such as Auditor . ALIAS is a unique name for this mapping. Red Hat recommends the use of a naming convention for aliases, such as user- USERNAME (for example, user-max ). Note Removing the user from the list of excludes does not remove the user from the system, nor does it guarantee the role will be assigned to the user. Roles might still be excluded based on group membership. 3.5.4.
Configure User Role Assignment with the Elytron Subsystem In addition to adding role mappings for users directly, as covered in the Managing Roles section, you can also configure RBAC roles to be directly taken from the identity provided by the elytron subsystem. To configure the RBAC system to use roles provided by the elytron subsystem: Important RBAC must be enabled to use this functionality, and the principal must have RBAC roles. 3.5.5. Roles and User Groups A user group is an arbitrary label that can be assigned to one or more users. When authenticating using the management interfaces, users are assigned groups from either the elytron subsystem or core management authentication, depending on how the management interfaces are secured. The RBAC system can be configured to automatically assign roles to users depending on what user groups they are members of. It can also exclude users from roles based on group membership. 3.5.6. Configure Group Role Assignment Using the Management CLI Groups to be included or excluded from a role can be configured in the management console and the management CLI. This topic only shows using the management CLI. The configuration of mapping users and groups to roles is located in the management API at: /core-service=management/access=authorization as role-mapping elements. Only users in the SuperUser or Administrator roles can perform this configuration. Viewing Group Role Assignment Configuration Use the read-children-names operation to get a complete list of the configured roles: Use the read-resource operation of a specified role-mapping to get the full details of a specific role: Add a New Role This procedure shows how to add a role-mapping entry for a role. This must be done before the role can be configured. Use the add operation to add a new role configuration. Add a Group as Included in a Role This procedure shows how to add a group to the included list of a role. If no configuration for a role has been done, then a role-mapping entry for it must be created first. Use the add operation to add a group entry to the includes list of the role. ROLENAME is the name of the role being configured, such as Auditor . GROUPNAME is the name of the group being added to the include list, such as investigators . ALIAS is a unique name for this mapping. Red Hat recommends that you use a naming convention for your aliases, such as group- GROUPNAME (for example, group-investigators ). Example: Management CLI Command for Adding a Group as Included in a Role Add a Group as Excluded in a Role This procedure shows how to add a group to the excluded list of a role. If no configuration for a role has been done, then a role-mapping entry for it must be created first. Use the add operation to add a group entry to the excludes list of the role. ROLENAME is the name of the role being configured, such as Auditor . GROUPNAME is the name of the group being added to the exclude list, such as supervisors . ALIAS is a unique name for this mapping. Red Hat recommends that you use a naming convention for your aliases, such as group- GROUPNAME (for example, group-supervisors ). Example: Management CLI Command for Adding a Group as Excluded in a Role Remove Group Role Include Configuration This procedure shows how to remove a group include entry from a role mapping. Use the remove operation to remove the entry. ROLENAME is the name of the role being configured, such as Auditor . ALIAS is a unique name for this mapping.
Red Hat recommends that you use a naming convention for your aliases, such as group- GROUPNAME (for example, group-investigators ). Example: Management CLI Command for Removing Group Role Include Configuration Note Removing the group from the list of includes does not remove the group from the system, nor does it guarantee that the role will not be assigned to users in this group. The role might still be assigned to users in the group individually. Remove a User Group Exclude Entry This procedure shows how to remove a group exclude entry from a role mapping. Use the remove operation to remove the entry. ROLENAME is the name of the role being configured, such as Auditor . ALIAS is a unique name for this mapping. Red Hat recommends that you use a naming convention for your aliases, such as group- GROUPNAME (for example, group-supervisors ). Note Removing the group from the list of excludes does not remove the group from the system. It also does not guarantee the role will be assigned to members of the group. Roles might still be excluded based on group membership. 3.5.7. Using RBAC with LDAP The basics of using RBAC with LDAP as well as how to configure JBoss EAP to use RBAC with LDAP are covered in the LDAP and RBAC section of the JBoss EAP How to Configure Identity Management Guide . 3.5.8. Scoped Roles Scoped roles are user-defined roles that grant the permissions of one of the standard roles but only for one or more specified server groups or hosts in an JBoss EAP managed domain. Scoped roles allow for management users to be granted permissions that are limited to only those server groups or hosts that are required. Important Scoped roles can be created by users assigned the Administrator or SuperUser roles. They are defined by five characteristics: A unique name. The standard roles which it is based on. If it applies to server groups or hosts. The list of server groups or hosts that it is restricted to. If all users are automatically included. This defaults to false . Once created a scoped role can be assigned to users and groups the same way that the standard roles are. Creating a scoped role does not allow for defining new permissions. Scoped roles can only be used to apply the permissions of an existing role in a limited scope. For example, a scoped role could be created based on the Deployer role which is restricted to a single server group. There are only two scopes that roles can be limited to: Host-scoped roles A role that is host-scoped restricts the permissions of that role to one or more hosts. This means access is provided to the relevant /host=*/ resource trees but resources that are specific to other hosts are hidden. Server-group-scoped roles A role that is server-group-scoped restricts the permissions of that role to one or more server groups. Additionally the role permissions will also apply to the profile, socket binding group, server configuration, and server resources that are associated with the specified server-groups . Any sub-resources within any of those that are not logically related to the server-group will not be visible to the user. Important Some resources are non-addressable to server-group and host scoped roles in order to provide a simplified view of the management model to improve usability. This is distinct from resources that are non-addressable to protect sensitive data. For host scoped roles this means that resources in the /host=* portion of the management model will not be visible if they are not related to the server groups specified for the role. 
For server-group scoped roles, this means that resources in the profile , socket-binding-group , deployment , deployment-overlay , server-group , server-config and server portions of the management model will not be visible if they are not related to the server groups specified for the role. 3.5.8.1. Configuring Scoped Roles from the Management CLI Important Only users in the SuperUser or Administrator roles can perform this configuration. Add a New Scoped Role To add a new scoped role, the following operations must be done: Replace NEW-SCOPED-ROLE , BASE-ROLE , and SERVER-GROUP-NAME with the proper information. Viewing and Editing a Scoped Role Mapping A scoped role's details, including members, can be viewed by using the following command: Replace NEW-SCOPED-ROLE with the proper information. To edit a scoped role's details, the write-attribute command may be used. For example: Replace NEW-SCOPED-ROLE with the proper information. Delete a Scoped Role Replace NEW-SCOPED-ROLE with the proper information. Important A scoped role cannot be deleted if users or groups are assigned to it. Remove the role assignments first, and then delete it. Adding and Removing Users Adding and removing users to and from scoped roles follows the same process as adding and removing standard roles . 3.5.8.2. Configuring Scoped Roles from the Management Console Important Only users in the SuperUser or Administrator roles can perform this configuration. Scoped role configuration in the management console can be found by following these steps: Log in to the management console. Click on the Access Control tab. Click on Roles to view all roles, including scoped roles. The following procedures show how to perform configuration tasks for scoped roles. Add a New Scoped Role Log in to the management console. Click on the Access Control tab. Select Roles and click the Add ( + ) button. Choose Host Scoped Role or Server Group Scoped Role . Specify the following details: Name : The unique name for the new scoped role. Base Role : The role which this role will base its permissions on. Hosts or Server Groups : The list of hosts or server groups that the role is restricted to, depending on the type of scoped role being added. Multiple entries can be selected. Include All : Whether this role should automatically include all users. Defaults to OFF . Click Add to create the new role. Edit a Scoped Role Log in to the management console. Click on the Access Control tab. Click on the Roles menu on the left. Click on the desired scoped role to edit and click Edit . Update the desired details to change and click the Save button. View Scoped Role Members Log in to the management console. Click on the Access Control tab. Click on the Roles menu on the left. Click on the desired scoped role to view the included and excluded members. Delete a Scoped Role Log in to the management console. Click on the Access Control tab. Click on the Roles menu on the left. Click on the desired scoped role and click Remove from the drop down. Click Yes to remove the role and all of its assignments. Adding and Removing Users Adding and removing users to and from scoped roles follows the same process as adding and removing standard roles. To update a user's scoped roles: Log in to the management console. Click on the Access Control tab. Click on the Roles menu on the left and click on the desired scoped role. Select the Plus ( + ) button to include a member or the Minus ( - ) button to exclude a member. 3.5.9. Configuring Constraints 3.5.9.1. 
Configure Sensitivity Constraints Each sensitivity constraint defines a set of resources that are considered sensitive . A sensitive resource is generally one that either should be secret, like passwords, or one that will have a serious impact on the server, like networking, JVM configuration, or system properties. The access control system itself is also considered sensitive. Resource sensitivity limits which roles are able to read, write, or address a specific resource. Sensitivity constraint configuration is at /core-service=management/access=authorization/constraint=sensitivity-classification . Within the management model each sensitivity constraint is identified as a classification. The classifications are then grouped into types. Each classification has an applies-to element which is a list of path patterns to which the classification's configuration applies. To configure a sensitivity constraint, use the write-attribute operation to set the configured-requires-read , configured-requires-write , or configured-requires-addressable attribute. To make that type of operation sensitive, set the value of the attribute to true . To make it nonsensitive, set it to false . By default these attributes are not set and the values of default-requires-read , default-requires-write , and default-requires-addressable are used. Once the configured attribute is set, that value is used instead of the default. The default values cannot be changed. Example: Make Reading System Properties a Sensitive Operation Example: Result The roles, and the respective operations that they are able to perform, depend on the configuration of the attributes. This is summarized in the following table: Table 3.2. Sensitivity Constraint Configuration Outcomes Value requires-read requires-write requires-addressable true Read is sensitive. Only Auditor , Administrator , SuperUser can read. Write is sensitive. Only Administrator and SuperUser can write. Addressing is sensitive. Only Auditor , Administrator , SuperUser can address. false Read is not sensitive. Any management user can read. Write is not sensitive. Only Maintainer , Administrator and SuperUser can write. Deployer can also write if the resource is an application resource. Addressing is not sensitive. Any management user can address. 3.5.9.2. List Sensitivity Constraints You can see a list of the available sensitivity constraints directly from the JBoss EAP management model using the following management CLI command: 3.5.9.3. Configure Application Resource Constraints Each application resource constraint defines a set of resources, attributes and operations that are usually associated with the deployment of applications and services. When an application resource constraint is enabled, management users of the Deployer role are granted access to the resources that it applies to. Application constraint configuration is at /core-service=management/access=authorization/constraint=application-classification/ . Each application resource constraint is identified as a classification. The classifications are then grouped into types. Each classification has an applies-to element which is a list of path patterns to which the classification's configuration applies. By default the only application resource classification that is enabled is core. Core includes deployments, deployment overlays, and the deployment operations. To enable an application resource, use the write-attribute operation to set the configured-application attribute of the classification to true .
To disable an application resource, set this attribute to false . By default these attributes are not set and the value of the default-application attribute is used. The default value cannot be changed. Example: Enabling the logger-profile Application Resource Classification Example: Result Important Application resource constraints apply to all resources that match their configuration. For example, it is not possible to grant a Deployer user access to one datasource resource but not another. If this level of separation is required, then it is recommended to configure the resources in different server groups and create different scoped Deployer roles for each group. 3.5.9.4. List Application Resource Constraints You can see a list of the available application resource constraints directly from the JBoss EAP management model using the following management CLI command: 3.5.9.5. Configure the Vault Expression Constraint By default, reading and writing vault expressions are sensitive operations. Configuring the vault expression constraint allows either or both of those operations to be set to nonsensitive. Changing this constraint allows a greater number of roles to read and write vault expressions. The vault expression constraint is found at /core-service=management/access=authorization/constraint=vault-expression . To configure the vault expression constraint, use the write-attribute operation to set the attributes of configured-requires-write and configured-requires-read to true or false . By default these are not set and the values of default-requires-read and default-requires-write are used. The default values cannot be changed. Example: Making Writing to Vault Expressions a Nonsensitive Operation Example: Result The roles, and the respective vault expressions that they will be able to read and write, depend on the configuration of the attributes. This is summarized in the following table: Table 3.3. Vault Expression Constraint Configuration Outcomes Value requires-read requires-write true Read operation is sensitive. Only Auditor , Administrator , and SuperUser can read. Write operation is sensitive. Only Administrator and SuperUser can write. false Read operation is not sensitive. All management users can read. Write operation is not sensitive. Maintainer , Administrator , and SuperUser can write. Deployer can also write if the vault expression is in an application resource.
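As an illustration of the vault expression constraint configuration described above, making writes to vault expressions nonsensitive comes down to a single write-attribute call (a sketch; it follows the same pattern as the other constraint attributes in this section):
/core-service=management/access=authorization/constraint=vault-expression:write-attribute(name=configured-requires-write, value=false)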
"/core-service=management/management-interface=http-interface:read-resource() { \"outcome\" => \"success\", \"result\" => { \"allowed-origins\" => undefined, \"console-enabled\" => true, \"http-authentication-factory\" => undefined, \"http-upgrade\" => {\"enabled\" => true}, \"http-upgrade-enabled\" => true, \"sasl-protocol\" => \"remote\", \"secure-socket-binding\" => undefined, \"security-realm\" => \"ManagementRealm\", \"server-name\" => undefined, \"socket-binding\" => \"management-http\", \"ssl-context\" => undefined }",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value=management-http-authentication)",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade.sasl-authentication-factory, value=management-sasl-authentication)",
"/core-service=management/management-interface=http-interface:undefine-attribute(name=security-realm)",
"reload",
"/subsystem=elytron/http-authentication-factory=management-http-authentication:read-resource() { \"outcome\" => \"success\", \"result\" => { \"http-server-mechanism-factory\" => \"global\", \"mechanism-configurations\" => [{ \"mechanism-name\" => \"DIGEST\", \"mechanism-realm-configurations\" => [{\"realm-name\" => \"ManagementRealm\"}] }], \"security-domain\" => \"ManagementDomain\" } }",
"/subsystem=elytron/security-domain=ManagementDomain:read-resource() { \"outcome\" => \"success\", \"result\" => { \"default-realm\" => \"ManagementRealm\", \"permission-mapper\" => \"default-permission-mapper\", \"post-realm-principal-transformer\" => undefined, \"pre-realm-principal-transformer\" => undefined, \"principal-decoder\" => undefined, \"realm-mapper\" => undefined, \"realms\" => [ { \"realm\" => \"ManagementRealm\", \"role-decoder\" => \"groups-to-roles\" }, { \"realm\" => \"local\", \"role-mapper\" => \"super-user-mapper\" } ], \"role-mapper\" => undefined, \"trusted-security-domains\" => undefined } }",
"reload",
"/subsystem=elytron/properties-realm=ManagementRealm:read-resource() { \"outcome\" => \"success\", \"result\" => { \"groups-attribute\" => \"groups\", \"groups-properties\" => { \"path\" => \"mgmt-groups.properties\", \"relative-to\" => \"jboss.server.config.dir\" }, \"plain-text\" => false, \"users-properties\" => { \"path\" => \"mgmt-users.properties\", \"relative-to\" => \"jboss.server.config.dir\" } } }",
"<jboss-cli xmlns=\"urn:jboss:cli:3.1\"> <default-protocol use-legacy-override=\"true\">remote+http</default-protocol> <!-- The default controller to connect to when 'connect' command is executed w/o arguments --> <default-controller> <protocol>remote+http</protocol> <host>localhost</host> <port>9990</port> </default-controller>",
"/core-service=management/management-interface=http-interface:read-resource() { \"outcome\" => \"success\", \"result\" => { \"allowed-origins\" => undefined, \"console-enabled\" => true, \"http-authentication-factory\" => \"management-http-authentication\", \"http-upgrade\" => { \"enabled\" => true, \"sasl-authentication-factory\" => \"management-sasl-authentication\" }, \"http-upgrade-enabled\" => true, \"sasl-protocol\" => \"remote\", \"secure-socket-binding\" => undefined, \"security-realm\" => undefined, \"server-name\" => undefined, \"socket-binding\" => \"management-http\", \"ssl-context\" => undefined } }",
"/subsystem=elytron/sasl-authentication-factory=management-sasl-authentication:read-resource() { \"outcome\" => \"success\", \"result\" => { \"mechanism-configurations\" => [ { \"mechanism-name\" => \"JBOSS-LOCAL-USER\", \"realm-mapper\" => \"local\" }, { \"mechanism-name\" => \"DIGEST-MD5\", \"mechanism-realm-configurations\" => [{\"realm-name\" => \"ManagementRealm\"}] } ], \"sasl-server-factory\" => \"configured\", \"security-domain\" => \"ManagementDomain\" } }",
"/subsystem=elytron/identity-realm=local:read-resource() { \"outcome\" => \"success\", \"result\" => { \"attribute-name\" => undefined, \"attribute-values\" => undefined, \"identity\" => \"USDlocal\" } }",
"/subsystem=elytron/http-authentication-factory=example-http-auth:add(http-server-mechanism-factory=global, security-domain=exampleSD, mechanism-configurations=[{mechanism-name=DIGEST, mechanism-realm-configurations=[{realm-name=exampleManagementRealm}]}])",
"/subsystem=elytron/sasl-authentication-factory=example-sasl-auth:add(sasl-server-factory=configured, security-domain=exampleSD, mechanism-configurations=[{mechanism-name=DIGEST-MD5, mechanism-realm-configurations=[{realm-name=exampleManagementRealm}]}])",
"/subsystem=elytron/configurable-sasl-server-factory=configured:list-add(name=filters, value={pattern-filter=GSSAPI})",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory, value=example-http-auth) reload",
"/core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade.sasl-authentication-factory, value=example-sasl-auth) reload",
"/subsystem=elytron/sasl-authentication-factory=example-sasl-auth:list-add(name=mechanism-configurations, value={mechanism-name=JBOSS-LOCAL-USER, realm-mapper=local}) reload",
"/subsystem=elytron/sasl-authentication-factory=example-sasl-auth:add(sasl-server-factory=configured,security-domain=ManagementDomain,mechanism-configurations=[{mechanism-name=DIGEST-MD5,mechanism-realm-configurations=[{realm-name=exampleManagementRealm}]},{mechanism-name=JBOSS-LOCAL-USER, realm-mapper=local}]) reload",
"/core-service=management/access=identity:add(security-domain=exampleSD)",
":whoami { \"outcome\" => \"success\", \"result\" => {\"identity\" => {\"username\" => \"user1\"}} }",
"<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <authentication-rules> <rule use-configuration=\"configuration1\"> <match-host name=\"localhost\" /> </rule> </authentication-rules> <authentication-configurations> <configuration name=\"configuration1\"> <sasl-mechanism-selector selector=\"DIGEST-MD5\" /> <providers> <use-service-loader /> </providers> <set-user-name name=\"user1\" /> <credentials> <clear-password password=\"password123\" /> </credentials> <set-mechanism-realm name=\"exampleManagementRealm\" /> </configuration> </authentication-configurations> </authentication-client> </configuration>",
"./jboss-cli.sh -c -Dwildfly.config.url=/path/to/custom-config.xml",
"/subsystem=ejb3/application-security-domain=quickstart-domain:add(security-domain=ApplicationDomain)",
"<subsystem xmlns=\"urn:jboss:domain:ejb3:5.0\"> . <application-security-domains> <application-security-domain name=\"quickstart-domain\" security-domain=\"ApplicationDomain\"/> </application-security-domains> </subsystem>",
"/subsystem=elytron/authentication-configuration=ejb-outbound-configuration:add(security-domain=ApplicationDomain,sasl-mechanism-selector=\"PLAIN\") /subsystem=elytron/authentication-context=ejb-outbound-context:add(match-rules=[{authentication-configuration=ejb-outbound-configuration}])",
"<subsystem xmlns=\"urn:wildfly:elytron:4.0\" final-providers=\"combined-providers\" disallowed-providers=\"OracleUcrypto\"> <authentication-client> <authentication-configuration name=\"ejb-outbound-configuration\" security-domain=\"ApplicationDomain\" sasl-mechanism-selector=\"PLAIN\"/> <authentication-context name=\"ejb-outbound-context\"> <match-rule authentication-configuration=\"ejb-outbound-configuration\"/> </authentication-context> </authentication-client> . </subsystem>",
"/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=ejb-outbound:add(host=localhost,port=8080)",
"<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> . <outbound-socket-binding name=\"ejb-outbound\"> <remote-destination host=\"localhost\" port=\"8080\"/> </outbound-socket-binding> </socket-binding-group>",
"/subsystem=remoting/remote-outbound-connection=ejb-outbound-connection:add(outbound-socket-binding-ref=ejb-outbound, authentication-context=ejb-outbound-context) /subsystem=remoting/http-connector=http-remoting-connector:write-attribute(name=sasl-authentication-factory,value=application-sasl-authentication)",
"<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> . <http-connector name=\"http-remoting-connector\" connector-ref=\"default\" security-realm=\"ApplicationRealm\" sasl-authentication-factory=\"application-sasl-authentication\"/> <outbound-connections> <remote-outbound-connection name=\"ejb-outbound-connection\" outbound-socket-binding-ref=\"ejb-outbound\" authentication-context=\"ejb-outbound-context\"/> </outbound-connections> </subsystem>",
"/subsystem=elytron/sasl-authentication-factory=application-sasl-authentication:write-attribute(name=mechanism-configurations,value=[{mechanism-name=PLAIN},{mechanism-name=JBOSS-LOCAL-USER,realm-mapper=local},{mechanism-name=DIGEST-MD5,mechanism-realm-configurations=[{realm-name=ApplicationRealm}]}])",
"<subsystem xmlns=\"urn:wildfly:elytron:4.0\" final-providers=\"combined-providers\" disallowed-providers=\"OracleUcrypto\"> . <sasl> . <sasl-authentication-factory name=\"application-sasl-authentication\" sasl-server-factory=\"configured\" security-domain=\"ApplicationDomain\"> <mechanism-configuration> <mechanism mechanism-name=\"PLAIN\"/> <mechanism mechanism-name=\"JBOSS-LOCAL-USER\" realm-mapper=\"local\"/> <mechanism mechanism-name=\"DIGEST-MD5\"> <mechanism-realm realm-name=\"ApplicationRealm\"/> </mechanism> </mechanism-configuration> </sasl-authentication-factory> </sasl> . </subsystem>",
"public class RemoteClient { public static void main(String[] args) throws Exception { // invoke the intermediate bean using the identity configured in wildfly-config.xml invokeIntermediateBean(); // now lets programmatically setup an authentication context to switch users before invoking the intermediate bean AuthenticationConfiguration superUser = AuthenticationConfiguration.empty().setSaslMechanismSelector(SaslMechanismSelector.NONE.addMechanism(\"PLAIN\")). useName(\"superUser\").usePassword(\"superPwd1!\"); final AuthenticationContext authCtx = AuthenticationContext.empty(). with(MatchRule.ALL, superUser); AuthenticationContext.getContextManager().setThreadDefault(authCtx); invokeIntermediateBean(); } private static void invokeIntermediateBean() throws Exception { final Hashtable<String, String> jndiProperties = new Hashtable<>(); jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY, \"org.wildfly.naming.client.WildFlyInitialContextFactory\"); jndiProperties.put(Context.PROVIDER_URL, \"remote+http://localhost:8080\"); final Context context = new InitialContext(jndiProperties); IntermediateEJBRemote intermediate = (IntermediateEJBRemote) context.lookup(\"ejb:/ejb-security-context-propagation/IntermediateEJB!\" + IntermediateEJBRemote.class.getName()); // Call the intermediate EJB System.out.println(intermediate.makeRemoteCalls()); } }",
"@Stateless @Remote(IntermediateEJBRemote.class) @SecurityDomain(\"quickstart-domain\") @PermitAll public class IntermediateEJB implements IntermediateEJBRemote { @EJB(lookup=\"ejb:/ejb-security-context-propagation/SecuredEJB!org.jboss.as.quickstarts.ejb_security_context_propagation.SecuredEJBRemote\") private SecuredEJBRemote remote; @Resource private EJBContext context; public String makeRemoteCalls() { try { StringBuilder sb = new StringBuilder(\"** \"). append(context.getCallerPrincipal()). append(\" * * \\n\\n\"); sb.append(\"Remote Security Information: \"). append(remote.getSecurityInformation()). append(\"\\n\"); return sb.toString(); } catch (Exception e) { if (e instanceof RuntimeException) { throw (RuntimeException) e; } throw new RuntimeException(\"Teasting failed.\", e); } } }",
"@Stateless @Remote(SecuredEJBRemote.class) @SecurityDomain(\"quickstart-domain\") public class SecuredEJB implements SecuredEJBRemote { @Resource private SessionContext context; @PermitAll public String getSecurityInformation() { StringBuilder sb = new StringBuilder(\"[\"); sb.append(\"Principal=[\"). append(context.getCallerPrincipal().getName()). append(\"], \"); userInRole(\"guest\", sb).append(\", \"); userInRole(\"user\", sb).append(\", \"); userInRole(\"admin\", sb).append(\"]\"); return sb.toString(); } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <authentication-rules> <rule use-configuration=\"default\"/> </authentication-rules> <authentication-configurations> <configuration name=\"default\"> <set-user-name name=\"quickstartUser\"/> <credentials> <clear-password password=\"quickstartPwd1!\"/> </credentials> <sasl-mechanism-selector selector=\"PLAIN\"/> <providers> <use-service-loader /> </providers> </configuration> </authentication-configurations> </authentication-client> </configuration>",
"/subsystem=elytron/authentication-configuration=forwardit:add(authentication-name=theserver1,security-domain=ApplicationDomain,realm=ApplicationRealm,forwarding-mode=authorization,credential-reference={clear-text=thereallysecretpassword}) /subsystem=elytron/authentication-context=forwardctx:add(match-rules=[{authentication-configuration=forwardit,match-no-user=true}])",
"<authentication-client> <authentication-configuration name=\"forwardit\" authentication-name=\"theserver1\" security-domain=\"ApplicationDomain\" forwarding-mode=\"authorization\" realm=\"ApplicationRealm\"> <credential-reference clear-text=\"thereallysecretpassword\"/> </authentication-configuration> <authentication-context name=\"forwardctx\"> <match-rule match-no-user=\"true\" authentication-configuration=\"forwardit\"/> </authentication-context> </authentication-client>",
"/subsystem=elytron/permission-set=run-as-principal-permission:add(permissions=[{class-name=\"org.wildfly.security.auth.permission.RunAsPrincipalPermission\",target-name=\"*\"}]) /subsystem=elytron/simple-permission-mapper=auth-forwarding-permission-mapper:add(permission-mappings=[{principals=[\"anonymous\"]},{principals=[\"theserver1\"],permission-sets=[{permission-set=login-permission},{permission-set=default-permissions},{permission-set=run-as-principal-permission}]},{match-all=true,permission-sets=[{permission-set=login-permission},{permission-set=default-permissions}]}]",
"<mappers> <simple-permission-mapper name=\"auth-forwarding-permission-mapper\"> <permission-mapping> <principal name=\"anonymous\"/> <!-- No permissions: Deny any permission to anonymous! --> </permission-mapping> <permission-mapping> <principal name=\"theserver1\"/> <permission-set name=\"login-permission\"/> <permission-set name=\"default-permissions\"/> <permission-set name=\"run-as-principal-permission\"/> </permission-mapping> <permission-mapping match-all=\"true\"> <permission-set name=\"login-permission\"/> <permission-set name=\"default-permissions\"/> </permission-mapping> </simple-permission-mapper> </mappers> <permission-sets> <permission-set name=\"login-permission\"> <permission class-name=\"org.wildfly.security.auth.permission.LoginPermission\"/> </permission-set> <permission-set name=\"default-permissions\"> <permission class-name=\"org.wildfly.extension.batch.jberet.deployment.BatchPermission\" module=\"org.wildfly.extension.batch.jberet\" target-name=\"*\"/> <permission class-name=\"org.wildfly.transaction.client.RemoteTransactionPermission\" module=\"org.wildfly.transaction.client\"/> <permission class-name=\"org.jboss.ejb.client.RemoteEJBPermission\" module=\"org.jboss.ejb-client\"/> </permission-set> <permission-set name=\"run-as-principal-permission\"> <permission class-name=\"org.wildfly.security.auth.permission.RunAsPrincipalPermission\" target-name=\"*\"/> </permission-set> </permission-sets>",
"/subsystem=elytron/case-principal-transformer= <transformer_name> :add(upper-case=\"true\")",
"/subsystem=elytron/case-principal-transformer= <transformer_name> :add()",
"/subsystem=elytron/case-principal-transformer= <transformer_name> :add(upper-case=\"false\")",
"/subsystem=elytron/security-domain=ApplicationDomain:write-attribute(name=pre-realm-principal-transformer,value= <transformer_name> )",
"import org.wildfly.security.auth.server.IdentityCredentials; import org.wildfly.security.auth.server.SecurityDomain; import org.wildfly.security.auth.server.SecurityIdentity; import org.wildfly.security.credential.PasswordCredential; import org.wildfly.security.password.interfaces.ClearPassword; SecurityIdentity securityIdentity = null; ClearPassword password = null; // Obtain the SecurityDomain for the current deployment. // The calling code requires the // org.wildfly.security.permission.ElytronPermission(\"getSecurityDomain\") permission // if running with a security manager. SecurityDomain securityDomain = SecurityDomain.getCurrent(); if (securityDomain != null) { // Obtain the current security identity from the security domain. // This always returns an identity, but it could be the representation // of the anonymous identity if no authenticated identity is available. securityIdentity = securityDomain.getCurrentSecurityIdentity(); // The private credentials can be accessed to obtain any credentials delegated to the identity. // The calling code requires the // org.wildfly.security.permission.ElytronPermission(\"getPrivateCredentials\") // permission if running with a security manager. IdentityCredentials credentials = securityIdentity.getPrivateCredentials(); if (credentials.contains(PasswordCredential.class)) { password = credentials.getCredential(PasswordCredential.class).getPassword(ClearPassword.class); } }",
"SecurityDomain securityDomain = SecurityDomain.getCurrent(); Callable<T> forwardIdentityCallable = () -> { return AuthenticationContext.empty() .with(MatchRule.ALL, AuthenticationConfiguration.empty() .setSaslMechanismSelector(SaslMechanismSelector.ALL) .useForwardedIdentity(securityDomain)) .runCallable(callable); }; securityDomain.authenticate(remoteUsername, new PasswordGuessEvidence(remotePassword.toCharArray())).runAs(forwardIdentityCallable);",
"/subsystem=security/security-domain=UsersLMDomain:add(cache-type=default) /subsystem=security/security-domain=UsersLMDomain/authentication=classic:add /subsystem=security/security-domain=UsersLMDomain/authentication=classic/login-module=UsersRoles:add(code=UsersRoles, flag=required,module-options=[(\"usersProperties\"=>\"users.properties\"),(\"rolesProperties\"=>\"roles.properties\")])",
"/core-service=management/security-realm=SecurityDomainAuthnRealm:add /core-service=management/security-realm=SecurityDomainAuthnRealm/authentication=jaas:add(name=UsersLMDomain)",
"/core-service=management/management-interface=http-interface/:write-attribute(name=security-realm,value=SecurityDomainAuthnRealm)",
"/core-service=management/security-realm=SecurityDomainAuthnRealm/authentication=jaas:write-attribute(name=assign-groups,value=true)",
"/core-service=management/access=authorization:write-attribute(name=provider, value=rbac) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } } reload",
"/core-service=management/access=authorization:write-attribute(name=provider,value=rbac) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" }, \"result\" => undefined, \"server-groups\" => {\"main-server-group\" => {\"host\" => {\"master\" => { \"server-one\" => {\"response\" => { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }}, \"server-two\" => {\"response\" => { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }} }}}} } reload --host=master",
"/core-service=management/access=authorization:write-attribute(name=provider, value=simple)",
"<management> <access-control provider=\"rbac\"> <role-mapping> <role name=\"SuperUser\"> <include> <user name=\"USDlocal\"/> </include> </role> </role-mapping> </access-control> </management>",
"/core-service=management/access=authorization:write-attribute(name=permission-combination-policy, value=POLICYNAME)",
"/core-service=management/access=authorization:write-attribute(name=permission-combination-policy, value=rejecting)",
"<access-control provider=\"rbac\" permission-combination-policy=\"rejecting\"> <role-mapping> <role name=\"SuperUser\"> <include> <user name=\"USDlocal\"/> </include> </role> </role-mapping> </access-control>",
"/core-service=management/access=authorization:read-children-names(child-type=role-mapping) { \"outcome\" => \"success\", \"result\" => [ \"Administrator\", \"Deployer\", \"Maintainer\", \"Monitor\", \"Operator\", \"SuperUser\" ] }",
"/core-service=management/access=authorization/role-mapping=ROLENAME:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"include-all\" => false, \"exclude\" => undefined, \"include\" => { \"user-theboss\" => { \"name\" => \"theboss\", \"realm\" => undefined, \"type\" => \"USER\" }, \"user-harold\" => { \"name\" => \"harold\", \"realm\" => undefined, \"type\" => \"USER\" }, \"group-SysOps\" => { \"name\" => \"SysOps\", \"realm\" => undefined, \"type\" => \"GROUP\" } } } }",
"/core-service=management/access=authorization/role-mapping= ROLENAME :add",
"/core-service=management/access=authorization/role-mapping=Auditor:add",
"/core-service=management/access=authorization/role-mapping= ROLENAME /include= ALIAS :add(name= USERNAME , type=USER)",
"/core-service=management/access=authorization/role-mapping=Auditor/include=user-max:add(name=max, type=USER)",
"/core-service=management/access=authorization/role-mapping= ROLENAME /exclude= ALIAS :add(name= USERNAME , type=USER)",
"/core-service=management/access=authorization/role-mapping=Auditor/exclude=user-max:add(name=max, type=USER)",
"/core-service=management/access=authorization/role-mapping= ROLENAME /include= ALIAS :remove",
"/core-service=management/access=authorization/role-mapping=Auditor/include=user-max:remove",
"/core-service=management/access=authorization/role-mapping= ROLENAME /exclude= ALIAS :remove",
"/core-service=management/access=authorization/role-mapping=Auditor/exclude=user-max:remove",
"/core-service=management/access=authorization:write-attribute(name=use-identity-roles,value=true)",
"/core-service=management/access=authorization:read-children-names(child-type=role-mapping) { \"outcome\" => \"success\", \"result\" => [ \"Administrator\", \"Deployer\", \"Maintainer\", \"Monitor\", \"Operator\", \"SuperUser\" ] }",
"/core-service=management/access=authorization/role-mapping=ROLENAME:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"include-all\" => false, \"exclude\" => undefined, \"include\" => { \"user-theboss\" => { \"name\" => \"theboss\", \"realm\" => undefined, \"type\" => \"USER\" }, \"user-harold\" => { \"name\" => \"harold\", \"realm\" => undefined, \"type\" => \"USER\" }, \"group-SysOps\" => { \"name\" => \"SysOps\", \"realm\" => undefined, \"type\" => \"GROUP\" } } } }",
"/core-service=management/access=authorization/role-mapping= ROLENAME :add",
"/core-service=management/access=authorization/role-mapping= ROLENAME /include= ALIAS :add(name= GROUPNAME , type=GROUP)",
"/core-service=management/access=authorization/role-mapping=Auditor/include=group-investigators:add(name=investigators, type=GROUP)",
"/core-service=management/access=authorization/role-mapping= ROLENAME /exclude= ALIAS :add(name= GROUPNAME , type=GROUP)",
"/core-service=management/access=authorization/role-mapping=Auditor/exclude=group-supervisors:add(name=supervisors, type=GROUP)",
"/core-service=management/access=authorization/role-mapping= ROLENAME /include= ALIAS :remove",
"/core-service=management/access=authorization/role-mapping=Auditor/include=group-investigators:remove",
"/core-service=management/access=authorization/role-mapping= ROLENAME /exclude= ALIAS :remove",
"/core-service=management/access=authorization/role-mapping=Auditor/exclude=group-supervisors:remove",
"/core-service=management/access=authorization/role-mapping= NEW-SCOPED-ROLE :add",
"/core-service=management/access=authorization/server-group-scoped-role= NEW-SCOPED-ROLE :add(base-role= BASE-ROLE , server-groups=[ SERVER-GROUP-NAME ])",
"/core-service=management/access=authorization/role-mapping= NEW-SCOPED-ROLE :read-resource(recursive=true)",
"/core-service=management/access=authorization/role-mapping= NEW-SCOPED-ROLE :write-attribute(name=include-all, value=true)",
"/core-service=management/access=authorization/role-mapping= NEW-SCOPED-ROLE :remove",
"/core-service=management/access=authorization/server-group-scoped-role= NEW-SCOPED-ROLE :remove",
"/core-service=management/access=authorization/constraint=sensitivity-classification/type=core/classification=system-property:write-attribute(name=configured-requires-read,value=true)",
"/core-service=management/access=authorization/constraint=sensitivity-classification/type=core/classification=system-property:read-resource",
"{ \"outcome\" => \"success\", \"result\" => { \"configured-requires-addressable\" => undefined, \"configured-requires-read\" => true, \"configured-requires-write\" => undefined, \"default-requires-addressable\" => false, \"default-requires-read\" => false, \"default-requires-write\" => true, \"applies-to\" => { \"/core-service=platform-mbean/type=runtime\" => undefined, \"/system-property=*\" => undefined, \"/\" => undefined } } }",
"/core-service=management/access=authorization/constraint=sensitivity-classification:read-resource(include-runtime=true,recursive=true)",
"/core-service=management/access=authorization/constraint=application-classification/type=logging/classification=logging-profile:write-attribute(name=configured-application,value=true)",
"/core-service=management/access=authorization/constraint=application-classification/type=logging/classification=logging-profile:read-resource",
"{ \"outcome\" => \"success\", \"result\" => { \"configured-application\" => true, \"default-application\" => false, \"applies-to\" => {\"/subsystem=logging/logging-profile=*\" => undefined} } }",
"/core-service=management/access=authorization/constraint=application-classification:read-resource(include-runtime=true,recursive=true)",
"/core-service=management/access=authorization/constraint=vault-expression:write-attribute(name=configured-requires-write,value=false)",
"/core-service=management/access=authorization/constraint=vault-expression:read-resource",
"{ \"outcome\" => \"success\", \"result\" => { \"configured-requires-read\" => undefined, \"configured-requires-write\" => false, \"default-requires-read\" => true, \"default-requires-write\" => true } }"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_server_security/securing_users_of_the_server_and_its_management_interfaces |
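As a hedged follow-up to the role-based access control commands above, the management CLI itself can be used to confirm that the rbac provider is active and to see which roles the connected identity resolved to. This is only a verification sketch and assumes you are connected with jboss-cli.sh as a user that has already been mapped to a role.

# Confirm the active access-control provider (expected value after the change above: "rbac")
/core-service=management/access=authorization:read-attribute(name=provider)

# Show the current identity and, with RBAC enabled, the roles mapped to it
:whoami(verbose=true)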
Chapter 105. KafkaUserTlsExternalClientAuthentication schema reference | Chapter 105. KafkaUserTlsExternalClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserTlsExternalClientAuthentication type from KafkaUserTlsClientAuthentication and KafkaUserScramSha512ClientAuthentication. It must have the value tls-external for the type KafkaUserTlsExternalClientAuthentication. Property Property type Description type string Must be tls-external. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaUserTlsExternalClientAuthentication-reference
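A minimal sketch of how this schema is typically used, assuming a KafkaUser custom resource applied through the oc CLI; the cluster name my-cluster, the user name my-external-tls-user, and the kafka.strimzi.io/v1beta2 apiVersion are illustrative assumptions rather than values taken from this reference.

# Create a KafkaUser whose client certificates are managed outside the User Operator
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-external-tls-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls-external   # discriminator value described in this chapter
EOF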
Chapter 1. High Availability Add-On Overview | Chapter 1. High Availability Add-On Overview The High Availability Add-On is a clustered system that provides reliability, scalability, and availability to critical production services. The following sections provide a high-level description of the components and functions of the High Availability Add-On: Section 1.1, "Cluster Basics" Section 1.2, "High Availability Add-On Introduction" Section 1.4, "Pacemaker Architecture Components" 1.1. Cluster Basics A cluster is two or more computers (called nodes or members ) that work together to perform a task. There are four major types of clusters: Storage High availability Load balancing High performance Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Also, with a cluster-wide file system, a storage cluster eliminates the need for redundant copies of application data and simplifies backup and disaster recovery. The High Availability Add-On provides storage clustering in conjunction with Red Hat GFS2 (part of the Resilient Storage Add-On). High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Typically, services in a high availability cluster read and write data (by means of read-write mounted file systems). Therefore, a high availability cluster must maintain data integrity as one cluster node takes over control of a service from another cluster node. Node failures in a high availability cluster are not visible from clients outside the cluster. (High availability clusters are sometimes referred to as failover clusters.) The High Availability Add-On provides high availability clustering through its High Availability Service Management component, Pacemaker . Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Load balancing provides cost-effective scalability because you can match the number of nodes according to load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software detects the failure and redirects requests to other cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Load balancing is available with the Load Balancer Add-On. High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High performance clusters are also referred to as computational clusters or grid computing.) Note The cluster types summarized in the preceding text reflect basic configurations; your needs might require a combination of the clusters described. Additionally, the Red Hat Enterprise Linux High Availability Add-On contains support for configuring and managing high availability servers only . It does not support high-performance clusters. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/ch-introduction-haao |
Index | Index D device-mapper multipath, Considerations for Using GNBD with Device-Mapper Multipath fencing GNBD server nodes, Fencing GNBD Server Nodes Linux page caching, Linux Page Caching driver and command usage, GNBD Driver and Command Usage exporting from a server, Exporting a GNBD from a Server importing on a client, Importing a GNBD on a Client E exporting from a server daemon, Exporting a GNBD from a Server F feedback, Feedback fencing GNBD server nodes, Fencing GNBD Server Nodes G GFS, using on a GNBD server node, Running GFS on a GNBD Server Node GNBD, using with Red Hat GFS, Using GNBD with Red Hat GFS gnbd.ko module, GNBD Driver and Command Usage , Importing a GNBD on a Client gnbd_export command , GNBD Driver and Command Usage , Usage gnbd_import command , GNBD Driver and Command Usage , Usage gnbd_serv daemon, GNBD Driver and Command Usage , Exporting a GNBD from a Server I importing on a client module, Importing a GNBD on a Client L Linux page caching, Linux Page Caching S software subsystem components, Using GNBD with Red Hat GFS | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/ix01 |
6.6. Booleans for Users Executing Applications | 6.6. Booleans for Users Executing Applications Not allowing Linux users to execute applications (which inherit users' permissions) in their home directories and the /tmp directory, which they have write access to, helps prevent flawed or malicious applications from modifying files that users own. Booleans are available to change this behavior, and are configured with the setsebool utility, which must be run as root. The setsebool -P command makes persistent changes. Do not use the -P option if you do not want changes to persist across reboots: guest_t To prevent Linux users in the guest_t domain from executing applications in their home directories and /tmp : xguest_t To prevent Linux users in the xguest_t domain from executing applications in their home directories and /tmp : user_t To prevent Linux users in the user_t domain from executing applications in their home directories and /tmp : staff_t To prevent Linux users in the staff_t domain from executing applications in their home directories and /tmp : To turn the staff_exec_content boolean on and to allow Linux users in the staff_t domain to execute applications in their home directories and /tmp : | [
"~]# setsebool -P guest_exec_content off",
"~]# setsebool -P xguest_exec_content off",
"~]# setsebool -P user_exec_content off",
"~]# setsebool -P staff_exec_content off",
"~]# setsebool -P staff_exec_content on"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Confining_Users-Booleans_for_Users_Executing_Applications |
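Before toggling any of the booleans above, it can help to check their current and persistent values; the following sketch assumes a typical RHEL 7 system with the policycoreutils-python package installed for the semanage command.

# Query the current value of the exec_content booleans discussed in this section
getsebool guest_exec_content xguest_exec_content user_exec_content staff_exec_content

# List the same booleans together with their persistent settings and descriptions
semanage boolean -l | grep exec_content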
Generating Advisor Service Reports with FedRAMP | Generating Advisor Service Reports with FedRAMP Red Hat Insights 1-latest Share reports from the advisor service with FedRAMP® about the conditions affecting your RHEL infrastructure. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_advisor_service_reports_with_fedramp/index
14.13.3. Configuring Virtual CPU Affinity | 14.13.3. Configuring Virtual CPU Affinity To configure the affinity of virtual CPUs with physical CPUs, refer to Example 14.3, "Pinning vCPU to a host physical machine's CPU" . Example 14.3. Pinning vCPU to a host physical machine's CPU The virsh vcpupin command assigns a virtual CPU to a physical one. The vcpupin command can take the following options: --vcpu requires the vcpu number [--cpulist] <string> lists the host physical machine's CPU number(s) to set; omit this option to query the current affinity --config affects the next boot --live affects the running domain --current affects the current domain | [
"virsh vcpupin rhel6 VCPU: CPU Affinity ---------------------------------- 0: 0-3 1: 0-3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/form-Virtualization-Managing_guests_with_virsh-Configuring_virtual_CPU_affinity |
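To complement the query shown above, a pinning change might look like the following sketch; the domain name rhel6 comes from the example, while the vCPU and host CPU numbers are arbitrary assumptions.

# Show the current vCPU-to-CPU affinity for the guest
virsh vcpupin rhel6

# Pin vCPU 0 to host CPU 1 on the running guest and persist the change for the next boot
virsh vcpupin rhel6 --vcpu 0 --cpulist 1 --live --config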
Chapter 3. Custom issuers for cert-manager | Chapter 3. Custom issuers for cert-manager An issuer is a resource that acts as a certificate authority for a specific namespace, and is managed by the cert-manager Operator. TLS-e (TLS everywhere) is enabled in Red Hat OpenStack Services on OpenShift (RHOSO) environments, and it uses the following issuers by default: rootca-internal rootca-libvirt rootca-ovn rootca-public 3.1. Creating a custom issuer You can create custom ingress issuers as well as custom internal issuers. To create and manage your own certificates for internal endpoints, you must create a custom internal issuer. Procedure Create a custom issuer in a file named rootca-custom.yaml: Replace <issuer_name> with the name of your custom issuer, for example, rootca-ingress-custom. Replace <secret_name> with the name of the Secret CR used by the certificate for your custom issuer. If you do not include a secret, one is created automatically. Create a certificate in a file named ca-issuer-certificate.yaml: Replace <issuer_name> with the name of your custom issuer. This matches the issuer created in the first step. Replace <hours> with the duration in hours, for example, a value of 87600h is equivalent to 3650 days, or about 10 years. Replace <secret_name> with the name of the Secret CR used by the certificate for your custom issuer. If you do not include a secret, one is created automatically. Create the issuer and certificate: Add the custom issuer to the TLS service definition in the control plane CR file. If your custom issuer is an ingress issuer, the custom issuer is defined under the ingress attribute as shown below: Replace <issuer_name> with the name of your custom issuer. This matches the issuer created in the first step. If your custom issuer is an internal issuer, the custom issuer is defined at the pod level under the internal attribute as shown below: Replace <issuer_name> with the name of your custom issuer. This matches the issuer created in the first step. Additional resources Configuring certificates with an issuer | [
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <issuer_name> spec: ca: secretName: <secret_name>",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <issuer_name> spec: commonName: <issuer_name> isCA: true duration: <hours> privateKey: algorithm: RSA size: 3072 issuerRef: name: selfsigned-issuer kind: Issuer secretName: <secret-name>",
"oc create -f rootca-custom.yaml oc create -f ca-issuer-certificate.yaml",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane spec: tls: ingress: enabled: true ca: customIssuer: <issuer_name>",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: myctlplane spec: tls: ingress: enabled: true podLevel: enabled: true internal: ca: customIssuer: <issuer_name>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_security_services/custom-issuers-for-cert-manager |
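After running the oc create commands above, a short verification pass can confirm that cert-manager accepted the new issuer. This is a sketch only: the <issuer_name> and <secret_name> placeholders are the same ones used in the procedure, and a -n <namespace> option may be needed if the resources are not in your current project.

# Check that the custom issuer and its CA certificate exist and report Ready
oc get issuer <issuer_name>
oc get certificate <issuer_name>

# Inspect the secret that holds the CA key pair referenced by secretName
oc describe secret <secret_name>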
Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta1] | Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta1] Description PriorityLevelConfiguration represents the configuration of a priority level. Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PriorityLevelConfigurationSpec specifies the configuration of a priority level. status object PriorityLevelConfigurationStatus represents the current state of a "request-priority". 7.1.1. .spec Description PriorityLevelConfigurationSpec specifies the configuration of a priority level. Type object Required type Property Type Description limited object LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? type string type indicates whether this priority level is subject to limitation on request execution. A value of "Exempt" means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of "Limited" means that (a) requests of this priority level are subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required. 7.1.2. .spec.limited Description LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? Type object Property Type Description assuredConcurrencyShares integer assuredConcurrencyShares (ACS) configures the execution limit, which is a limit on the number of requests of this priority level that may be executing at a given time. ACS must be a positive number. The server's concurrency limit (SCL) is divided among the concurrency-controlled priority levels in proportion to their assured concurrency shares. This produces the assured concurrency value (ACV) --- the number of requests that may be executing at a time --- for each such priority level: ACV(l) = ceil( SCL * ACS(l) / ( sum[priority levels k] ACS(k) ) ) bigger numbers of ACS mean more reserved concurrent requests (at the expense of every other PL). This field has a default value of 30. limitResponse object LimitResponse defines how to handle requests that can not be executed right now. 7.1.3. .spec.limited.limitResponse Description LimitResponse defines how to handle requests that can not be executed right now.
Type object Required type Property Type Description queuing object QueuingConfiguration holds the configuration parameters for queuing type string type is "Queue" or "Reject". "Queue" means that requests that can not be executed upon arrival are held in a queue until they can be executed or a queuing limit is reached. "Reject" means that requests that can not be executed upon arrival are rejected. Required. 7.1.4. .spec.limited.limitResponse.queuing Description QueuingConfiguration holds the configuration parameters for queuing Type object Property Type Description handSize integer handSize is a small positive number that configures the shuffle sharding of requests into queues. When enqueuing a request at this priority level the request's flow identifier (a string pair) is hashed and the hash value is used to shuffle the list of queues and deal a hand of the size specified here. The request is put into one of the shortest queues in that hand. handSize must be no larger than queues , and should be significantly smaller (so that a few heavy flows do not saturate most of the queues). See the user-facing documentation for more extensive guidance on setting this field. This field has a default value of 8. queueLengthLimit integer queueLengthLimit is the maximum number of requests allowed to be waiting in a given queue of this priority level at a time; excess requests are rejected. This value must be positive. If not specified, it will be defaulted to 50. queues integer queues is the number of queues for this priority level. The queues exist independently at each apiserver. The value must be positive. Setting it to 1 effectively precludes shufflesharding and thus makes the distinguisher method of associated flow schemas irrelevant. This field has a default value of 64. 7.1.5. .status Description PriorityLevelConfigurationStatus represents the current state of a "request-priority". Type object Property Type Description conditions array conditions is the current state of "request-priority". conditions[] object PriorityLevelConfigurationCondition defines the condition of priority level. 7.1.6. .status.conditions Description conditions is the current state of "request-priority". Type array 7.1.7. .status.conditions[] Description PriorityLevelConfigurationCondition defines the condition of priority level. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 7.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations DELETE : delete collection of PriorityLevelConfiguration GET : list or watch objects of kind PriorityLevelConfiguration POST : create a PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1beta1/watch/prioritylevelconfigurations GET : watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/{name} DELETE : delete a PriorityLevelConfiguration GET : read the specified PriorityLevelConfiguration PATCH : partially update the specified PriorityLevelConfiguration PUT : replace the specified PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1beta1/watch/prioritylevelconfigurations/{name} GET : watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/{name}/status GET : read status of the specified PriorityLevelConfiguration PATCH : partially update status of the specified PriorityLevelConfiguration PUT : replace status of the specified PriorityLevelConfiguration 7.2.1. /apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PriorityLevelConfiguration Table 7.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.3. Body parameters Parameter Type Description body DeleteOptions schema Table 7.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityLevelConfiguration Table 7.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 7.6. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityLevelConfiguration Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.8. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.9. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 202 - Accepted PriorityLevelConfiguration schema 401 - Unauthorized Empty 7.2.2. /apis/flowcontrol.apiserver.k8s.io/v1beta1/watch/prioritylevelconfigurations Table 7.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/{name} Table 7.12. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration Table 7.13. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PriorityLevelConfiguration Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.15. Body parameters Parameter Type Description body DeleteOptions schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityLevelConfiguration Table 7.17. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityLevelConfiguration Table 7.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.19. Body parameters Parameter Type Description body Patch schema Table 7.20. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityLevelConfiguration Table 7.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.22. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.23. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty 7.2.4. /apis/flowcontrol.apiserver.k8s.io/v1beta1/watch/prioritylevelconfigurations/{name} Table 7.24. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration Table 7.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/{name}/status Table 7.27. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration Table 7.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PriorityLevelConfiguration Table 7.29. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PriorityLevelConfiguration Table 7.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. 
Table 7.31. Body parameters Parameter Type Description body Patch schema Table 7.32. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PriorityLevelConfiguration Table 7.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.34. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.35. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/schedule_and_quota_apis/prioritylevelconfiguration-flowcontrol-apiserver-k8s-io-v1beta1 |
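A brief sketch of how the endpoints and schema above can be exercised from the oc CLI; it assumes the v1beta1 version of the flowcontrol API is still served by your cluster, and the object name example-priority-level plus the numeric values are illustrative only.

# List existing priority levels, or fetch the raw list from the endpoint documented above
oc get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io
oc get --raw /apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations

# Create a minimal Limited priority level using the spec fields from section 7.1
oc apply -f - <<'EOF'
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 10
    limitResponse:
      type: Queue
      queuing:
        queues: 64
        handSize: 8
        queueLengthLimit: 50
EOF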
Red Hat Quay architecture | Red Hat Quay architecture Red Hat Quay 3.13 Red Hat Quay Architecture Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_architecture/index |
Chapter 5. Pulling images from a container repository | Chapter 5. Pulling images from a container repository Pull images from the automation hub container registry to make a copy on your local machine. Automation hub provides a podman pull command for the latest image in each container repository. You can copy and paste this command into your terminal, or use podman pull to copy an image based on an image tag. 5.1. Prerequisites You must have permission to view and pull from a private container repository. 5.2. Pulling an image You can pull images from the automation hub container registry to make a copy on your local machine. Automation hub provides a podman pull command for the latest image in each container repository. Note If you need to pull container images from a password- or token-protected registry, you must create a credential in automation controller before pulling the image. Procedure Navigate to Execution Environments . Select your container repository. In the Pull this image entry, click Copy to clipboard . Paste and run the command in your terminal. Verification Run podman images to view the images on your local machine. 5.3. Syncing images from a container repository You can pull images from the automation hub container registry to sync an image to your local machine. Prerequisites You must have permission to view and pull from a private container repository. Procedure To sync an image from a remote container registry, you need to configure a remote registry. Navigate to Execution Environments → Remote Registries . Add https://registry.redhat.io as the registry URL. Add any required credentials to authenticate. Note Some container registries are aggressive with rate limiting. It is advisable to set a rate limit under Advanced Options. Navigate to Execution Environments → Execution Environments . Click Add execution environment in the page header. Select the registry you wish to pull from. The "name" field displays the name of the image as it will show up in your local registry. Note The "Upstream name" field is the name of the image on the remote server. For example, if the upstream name is set to "alpine" and the "name" field to "local/alpine", the alpine image will be downloaded from the remote and renamed to local/alpine. It is advisable to set a list of tags to include or exclude. Syncing images with a large number of tags is time-consuming and will use a lot of disk space. Additional resources See Red Hat Container Registry Authentication for a list of registries. 5.4. Additional resources See the What is Podman? documentation for options to use when pulling images. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_containers_in_private_automation_hub/pulling-images-container-repository
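For reference, a typical pull sequence looks like the following sketch; the registry hostname, image name, and tag are placeholders, so substitute the values shown in your own Pull this image entry.
# Log in to the private automation hub registry (hostname is an example).
podman login automation-hub.example.com
# Pull a specific tag rather than the default latest tag.
podman pull automation-hub.example.com/my-ee:1.0
# Confirm the image is now available locally.
podman images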
14.3. Editing an External Provider | 14.3. Editing an External Provider Editing an External Provider Click Administration → Providers and select the external provider to edit. Click Edit. Change the current values for the provider to your preferred values. Click OK. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/editing_an_external_provider
Chapter 1. Updating Satellite to the next minor version | Chapter 1. Updating Satellite to the next minor version You can update your Satellite Server and Capsule Server to a new minor release version, such as from 6.15.0 to 6.15.1, by using the satellite-maintain tool. Minor releases are non-disruptive to your operating environment and are typically fast to apply. Red Hat recommends performing updates regularly, because minor releases patch security vulnerabilities and minor issues discovered after the code is released. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/updating_red_hat_satellite/updating-project-to-next-minor-version_updating
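A minimal sketch of what such an update typically looks like on the command line follows; the exact subcommands are an assumption here and differ between Satellite releases, so confirm them with satellite-maintain --help before running anything.
# Check that the system is ready for a minor (z-stream) update.
satellite-maintain update check
# Apply the update once the checks pass.
satellite-maintain update run
# Verify that Satellite services are healthy afterwards.
satellite-maintain service status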
Chapter 1. Introducing RHEL on public cloud platforms | Chapter 1. Introducing RHEL on public cloud platforms Public cloud platforms provide computing resources as a service. Instead of using on-premises hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances. 1.1. Benefits of using RHEL in a public cloud RHEL as a cloud instance located on a public cloud platform has the following benefits over running RHEL on on-premises physical systems or virtual machines (VMs): Flexible and fine-grained allocation of resources A cloud instance of RHEL runs as a VM on a cloud platform, which typically means a cluster of remote servers maintained by the provider of the cloud service. Therefore, allocating hardware resources to the instance, such as a specific type of CPU or storage, happens on the software level and is easily customizable. In comparison to a local RHEL system, you are also not limited by the capabilities of your physical host. Instead, you can choose from a variety of features, based on the selection offered by the cloud provider. Space and cost efficiency You do not need to own any on-premises servers to host your cloud workloads. This avoids the space, power, and maintenance requirements associated with physical hardware. Instead, on public cloud platforms, you pay the cloud provider directly for using a cloud instance. The cost is typically based on the hardware allocated to the instance and the time you spend using it. Therefore, you can optimize your costs based on your requirements. Software-controlled configurations The entire configuration of a cloud instance is saved as data on the cloud platform, and is controlled by software. Therefore, you can easily create, remove, clone, or migrate the instance. A cloud instance is also operated remotely in a cloud provider console and is connected to remote storage by default. In addition, you can back up the current state of a cloud instance as a snapshot at any time. Afterwards, you can load the snapshot to restore the instance to the saved state. Separation from the host and software compatibility Similarly to a local VM, the RHEL guest operating system on a cloud instance runs on a virtualized kernel. This kernel is separate from the host operating system and from the client system that you use to connect to the instance. Therefore, any operating system can be installed on the cloud instance. This means that on a RHEL public cloud instance, you can run RHEL-specific applications that cannot be used on your local operating system. In addition, even if the operating system of the instance becomes unstable or is compromised, your client system is not affected in any way. Additional resources What is public cloud? What is a hyperscaler? Types of cloud computing Public cloud use cases for RHEL Obtaining RHEL for public cloud deployments 1.2. Public cloud use cases for RHEL Deploying on a public cloud provides many benefits, but might not be the most efficient solution in every scenario. If you are evaluating whether to migrate your RHEL deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud. Beneficial use cases Deploying public cloud instances is very effective for flexibly increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down. 
Therefore, using RHEL on the public cloud is recommended in the following scenarios: Clusters with high peak workloads and low general performance requirements. Scaling up and down based on your demands can be highly efficient in terms of resource costs. Quickly setting up or expanding your clusters. This avoids the high upfront costs of setting up local servers. Cloud instances are not affected by what happens in your local environment. Therefore, you can use them for backup and disaster recovery. Potentially problematic use cases You are running an existing environment that cannot be adjusted. Customizing a cloud instance to fit the specific needs of an existing deployment may not be cost-effective in comparison with your current host platform. You are operating with a hard limit on your budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud does. Next steps Obtaining RHEL for public cloud deployments Additional resources Should I migrate my application to the cloud? Here's how to decide. 1.3. Frequent concerns when migrating to a public cloud Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions. Will my RHEL work differently as a cloud instance than as a local virtual machine? In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include: Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources. Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature's compatibility in advance with your chosen public cloud provider. Will my data stay safe in a public cloud as opposed to a local server? The data in your RHEL cloud instances is your own, and your public cloud provider does not have any access to it. In addition, major cloud providers support data encryption in transit, which improves the security of data when migrating your virtual machines to the public cloud. The general security of your RHEL public cloud instances is managed as follows: Your public cloud provider is responsible for the security of the cloud hypervisor Red Hat provides the security features of the RHEL guest operating systems in your instances You manage the specific security settings and practices in your cloud infrastructure What effect does my geographic region have on the functionality of RHEL public cloud instances? You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server. However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider. 1.4. Obtaining RHEL for public cloud deployments To deploy a RHEL system in a public cloud environment, you need to: Select the optimal cloud provider for your use case, based on your requirements and the current offerings on the market. 
The cloud providers currently certified for running RHEL instances are: Amazon Web Services (AWS) Google Cloud Platform (GCP) Microsoft Azure Note This document specifically describes deploying RHEL on Microsoft Azure. Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances . To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI). Additional resources RHUI documentation Red Hat Open Hybrid Cloud 1.5. Methods for creating RHEL cloud instances To deploy a RHEL instance on a public cloud platform, you can use one of the following methods: Create a system image of RHEL and import it to the cloud platform. To create the system image, you can use the RHEL image builder or you can build the image manually. This method uses your existing RHEL subscription, and is also referred to as bring your own subscription (BYOS). You pre-pay a yearly subscription, and you can use your Red Hat customer discount. Your customer service is provided by Red Hat. To configure multiple instances created from the same image efficiently, you can use the cloud-init tool. Purchase a RHEL instance directly from the cloud provider marketplace. You post-pay an hourly rate for using the service. Therefore, this method is also referred to as pay as you go (PAYG). Your customer service is provided by the cloud platform provider. Note For detailed instructions on using various methods to deploy RHEL instances on Microsoft Azure, see the following chapters in this document. Additional resources What is a golden image? Configuring and managing cloud-init for RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_microsoft_azure/introducing-rhel-on-public-cloud-platforms_cloud-content-azure
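As a minimal sketch of the marketplace (PAYG) path on Azure, assuming the Azure CLI is installed and logged in; the resource group name, VM name, region, size, and image URN below are illustrative placeholders, so list the currently available RHEL offers and pick an appropriate one.
# List RHEL images that Red Hat publishes in a given region (region is an example).
az vm image list --publisher RedHat --offer RHEL --location eastus --all --output table
# Create a resource group and a pay-as-you-go RHEL VM from one of the listed URNs.
az group create --name rhel-demo-rg --location eastus
az vm create --resource-group rhel-demo-rg --name rhel-demo-vm --image RedHat:RHEL:8-lvm-gen2:latest --size Standard_D2s_v3 --admin-username azureuser --generate-ssh-keys
The BYOS path instead uploads a custom image built with RHEL image builder, as described in the later chapters.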
Create RHEL for Edge images and configure automated management | Create RHEL for Edge images and configure automated management Edge management 1-latest Getting started with edge management in the Red Hat Hybrid Cloud Console | null | https://docs.redhat.com/en/documentation/edge_management/1-latest/html/create_rhel_for_edge_images_and_configure_automated_management/index |
11.5. Setting up Cross-Realm Kerberos Trusts | 11.5. Setting up Cross-Realm Kerberos Trusts A Kerberos V5 realm is a set of Kerberos principals defined in the Kerberos database on all connected masters and slaves. You must configure cross-realm Kerberos trust if you want principals from different realms to communicate with each other. Many Linux environments, as well as mixed environments, will already have a Kerberos realm deployed for single sign-on, application authentication, and user management. That makes Kerberos a potentially common integration path for different domains and mixed system (such as Windows and Linux) environments, particularly if the Linux environment is not using a more structured domain configuration like Identity Management. 11.5.1. A Trust Relationship A trust means that the users within one realm are trusted to access the resources in another domain as if they belonged to that realm. This is done by creating a shared key for a single principal that is held in common by both domains. Figure 11.2. Basic Trust In Figure 11.2, "Basic Trust", the shared principal would belong to Domain B ( krbtgt/[email protected] ). When that principal is also added to Domain A, then the clients in Domain A can access the resources in Domain B. The configured principal exists in both realms. That shared principal has three characteristics: It exists in both realms. When a key is created, the same password is used in both realms. The key has the same key version number ( kvno ). A cross-realm trust is unidirectional by default. This trust is not automatically reciprocated so that users in the B.EXAMPLE.COM realm are also trusted to authenticate to services in the A.EXAMPLE.COM realm. To establish trust in the other direction, both realms would need to share keys for the krbtgt/[email protected] service, an entry with the reverse mapping of the previous example. A realm can have multiple trusts, including both realms that it trusts and realms that trust it. With Kerberos trusts, the trust can flow in a chain. If Realm A trusts Realm B and Realm B trusts Realm C, Realm A implicitly trusts Realm C, as well. The trust flows along realms; this is a transitive trust. Figure 11.3. Transitive Trust The direction of a transitive trust is the trust flow. The trust flow has to be defined, first by recognizing to what realm a service belongs and then by identifying what realms a client must contact to access that service. A Kerberos principal name is structured in the format service/hostname@REALM . The service is generally a protocol, such as LDAP, IMAP, HTTP, or host. The hostname is the fully-qualified domain name of the host system, and the REALM is the Kerberos realm to which it belongs. Kerberos clients typically use the host name or DNS domain name for Kerberos realm mapping. This mapping can be explicit or implicit. Explicit mapping uses the [domain_realm] section of the /etc/krb5.conf file. With implicit mapping, the domain name is converted to upper case; the converted name is then assumed to be the Kerberos realm to search. When traversing a trust, Kerberos assumes that each realm is structured like a hierarchical DNS domain, with a root domain and subdomains. This means that the trust flows up to a shared root. Each step, or hop, has a shared key. In Figure 11.4, "Trusts in the Same Domain", SALES.EXAMPLE.COM shares a key with EXAMPLE.COM, and EXAMPLE.COM shares a key with EVERYWHERE.EXAMPLE.COM. Figure 11.4. 
Trusts in the Same Domain The client treats the realm name as a DNS name, and it determines its trust path by stripping off elements of its own realm name until it reaches the root name. It then begins prepending names until it reaches the service's realm. Figure 11.5. Child/Parent Trusts in the Same Domain This is a consequence of trusts being transitive. SITE.SALES.EXAMPLE.COM only has a single shared key, with SALES.EXAMPLE.COM. But because of a series of small trusts, there is a large trust flow that allows trust to go from SITE.SALES.EXAMPLE.COM to EVERYWHERE.EXAMPLE.COM. That trust flow can even go between completely different domains by creating a shared key at the domain level, where the sites share no common suffix. Figure 11.6. Trusts in Different Domains The [capaths] section It is also possible to reduce the number of hops and represent very complex trust flows by explicitly defining the flow. The [capaths] section of the /etc/krb5.conf file defines the trust flow between different realms. The format of the [capaths] section is relatively straightforward: there is a main entry for each realm where a client has a principal, and then inside each realm section is a list of intermediate realms from which the client must obtain credentials. For example, [capaths] can be used to specify the following process for obtaining credentials: With credentials from Realm A, the client obtains a krbtgt/A@A ticket from the KDC of Realm A. Using this ticket, the client then asks for the krbtgt/B@A ticket. The krbtgt/B@A ticket issued by the KDC of Realm A is a cross-realm ticket-granting ticket. It allows the client to ask the KDC of Realm B for a ticket to a service principal of Realm B. With the krbtgt/B@A ticket, the client asks for the krbtgt/C@B cross-realm ticket. With the krbtgt/C@B ticket issued by the KDC of Realm B, the client asks for the krbtgt/D@C cross-realm ticket. With the krbtgt/D@C ticket issued by the KDC of Realm C, the client asks the KDC of Realm D for a ticket to a service principal in Realm D. After this, the credentials cache contains tickets for krbtgt/A@A , krbtgt/B@A , krbtgt/C@B , krbtgt/D@C , and service/hostname@D . To obtain the service/hostname@D ticket, the client first had to obtain the three intermediate cross-realm tickets. For more information on the [capaths] section, including examples of the [capaths] configuration, see the krb5.conf (5) man page. 11.5.2. Setting up a Realm Trust In this example, the Kerberos realms are A.EXAMPLE.COM and B.EXAMPLE.COM . Create the entry for the shared principal for the B realm in the A realm, using kadmin. [root@server ~]# kadmin -r A.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal "krbtgt/[email protected]": Re-enter password for principal "krbtgt/[email protected]": Principal "krbtgt/[email protected]" created. quit That means that the A realm will trust the B principal. Important It is recommended that you choose very strong passwords for cross-realm principals. Unlike many other passwords, for which the user can be prompted as often as several times a day, the system will not request the password for the cross-realm principal from you frequently, so it does not need to be easy to memorize. To create a bidirectional trust, create principals going the reverse way. Create a principal for the A realm in the B realm, using kadmin. 
[root@server ~]# kadmin -r B.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal "krbtgt/[email protected]": Re-enter password for principal "krbtgt/[email protected]": Principal "krbtgt/[email protected]" created. quit Use the get_principal command to verify that both entries have matching key version numbers ( kvno values) and encryption types. Important A common, but incorrect, situation is for administrators to try to use the add_principal command's -randkey option to assign a random key instead of a password, dump the new entry from the database of the first realm, and import it into the second. This will not work unless the master keys for the realm databases are identical, as the keys contained in a database dump are themselves encrypted using the master key. | [
"kadmin -r A.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal \"krbtgt/[email protected]\": Re-enter password for principal \"krbtgt/[email protected]\": Principal \"krbtgt/[email protected]\" created. quit",
"kadmin -r B.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal \"krbtgt/[email protected]\": Re-enter password for principal \"krbtgt/[email protected]\": Principal \"krbtgt/[email protected]\" created. quit"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/Using_trusts |
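To make the verification step described above concrete, a sketch of comparing the shared principal on both KDCs might look like the following; the host you run each command on is assumed to be a KDC or admin host for that realm.
# Query the shared principal in the A.EXAMPLE.COM database.
kadmin -r A.EXAMPLE.COM -q "get_principal krbtgt/[email protected]"
# Query the same principal in the B.EXAMPLE.COM database.
kadmin -r B.EXAMPLE.COM -q "get_principal krbtgt/[email protected]"
# The key version numbers ("Key: vno") and encryption types printed by the two commands must match; repeat for krbtgt/[email protected] if you created a bidirectional trust.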
Chapter 5. Configuring compliance policy deployment methods | Chapter 5. Configuring compliance policy deployment methods Use one of the following procedures to configure Satellite for the method that you have selected to deploy compliance policies. You will select one of these methods when you later create a compliance policy . Procedure for Ansible deployment Import the theforeman.foreman_scap_client Ansible role. For more information, see Managing configurations using Ansible integration . Assign the created policy and the theforeman.foreman_scap_client Ansible role to a host or host group. To trigger the deployment, either run the Ansible role on the host or host group manually, or set up a recurring job by using remote execution for regular policy updates. For more information, see Configuring and Setting Up Remote Jobs in Managing hosts . Procedure for Puppet deployment Ensure Puppet is enabled. Ensure the Puppet agent is installed on hosts. Import the Puppet environment that contains the foreman_scap_client Puppet module. For more information, see Managing configurations using Puppet integration . Assign the created policy and the foreman_scap_client Puppet class to a host or host group. Puppet triggers the deployment on its next regular run, or you can run Puppet manually. Puppet runs every 30 minutes by default. Procedure for manual deployment For the manual deployment method, no additional Satellite configuration is required. For information on manual deployment, see How to set up OpenSCAP Policies using Manual Deployment option in the Red Hat Knowledgebase . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_security_compliance/configuring-compliance-policy-deployment-methods_security-compliance
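For a quick structural test outside of Satellite remote execution, the role named above can also be applied directly with Ansible. This is only a sketch: the inventory group and playbook name are hypothetical, and in practice Satellite supplies the policy and server settings that the role consumes, so a standalone run needs those variables filled in before it produces a working client configuration.
# Minimal playbook wiring in the compliance client role (illustrative only).
cat > scap_client.yml <<'EOF'
- hosts: compliance_hosts
  become: true
  roles:
    - theforeman.foreman_scap_client
EOF
# Apply it to a test inventory.
ansible-playbook -i inventory.ini scap_client.yml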
Architecture Guide | Architecture Guide Red Hat Ceph Storage 8 Guide on Red Hat Ceph Storage Architecture Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/architecture_guide/index |