Chapter 312. SOAP DataFormat
Chapter 312. SOAP DataFormat Available as of Camel version 2.3 SOAP is a Data Format which uses JAXB2 and JAX-WS annotations to marshal and unmarshal SOAP payloads. It provides the basic features of Apache CXF without the need for the CXF stack. Supported SOAP versions SOAP 1.1 is supported by default. SOAP 1.2 is supported from Camel 2.11 onwards. Namespace prefix mapping See JAXB for details on how you can control namespace prefix mappings when marshalling using the SOAP data format. 312.1. SOAP Options The SOAP dataformat supports 7 options, which are listed below. Name Default Java Type Description contextPath String Package name where your JAXB classes are located. encoding String To override and use a specific encoding. elementNameStrategyRef String Refers to an element strategy to look up from the registry. An element name strategy is used for two purposes. The first is to find an XML element name for a given object and SOAP action when marshalling the object into a SOAP message. The second is to find an Exception class for a given SOAP fault name. The following three element name strategy classes are provided out of the box: QNameStrategy - uses a fixed qName that is configured on instantiation; exception lookup is not supported. TypeNameStrategy - uses the name and namespace from the XMLType annotation of the given type; if no namespace is set, package-info is used; exception lookup is not supported. ServiceInterfaceStrategy - uses information from a webservice interface to determine the type name and to find the exception class for a SOAP fault. All three classes are located in the package org.apache.camel.dataformat.soap.name. If you have generated the web service stub code with cxf-codegen or a similar tool, you probably want to use the ServiceInterfaceStrategy. If you have no annotated service interface, you should use QNameStrategy or TypeNameStrategy. version 1.1 String The SOAP version, either 1.1 or 1.2. The default is 1.1. namespacePrefixRef String When marshalling using JAXB or SOAP, the JAXB implementation automatically assigns namespace prefixes, such as ns2, ns3, ns4 and so on. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. schema String To validate against an existing schema. You can use the prefix classpath:, file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the ',' character. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. 312.2. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.dataformat.soapjaxb.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. false Boolean camel.dataformat.soapjaxb.context-path Package name where your JAXB classes are located. String camel.dataformat.soapjaxb.element-name-strategy-ref Refers to an element strategy to look up from the registry. An element name strategy is used for two purposes.
The first is to find an XML element name for a given object and SOAP action when marshalling the object into a SOAP message. The second is to find an Exception class for a given SOAP fault name. The following three element name strategy classes are provided out of the box: QNameStrategy - uses a fixed qName that is configured on instantiation; exception lookup is not supported. TypeNameStrategy - uses the name and namespace from the XMLType annotation of the given type; if no namespace is set, package-info is used; exception lookup is not supported. ServiceInterfaceStrategy - uses information from a webservice interface to determine the type name and to find the exception class for a SOAP fault. All three classes are located in the package org.apache.camel.dataformat.soap.name. If you have generated the web service stub code with cxf-codegen or a similar tool, you probably want to use the ServiceInterfaceStrategy. If you have no annotated service interface, you should use QNameStrategy or TypeNameStrategy. String camel.dataformat.soapjaxb.enabled Enable the soapjaxb dataformat true Boolean camel.dataformat.soapjaxb.encoding To override and use a specific encoding. String camel.dataformat.soapjaxb.namespace-prefix-ref When marshalling using JAXB or SOAP, the JAXB implementation automatically assigns namespace prefixes, such as ns2, ns3, ns4 and so on. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. String camel.dataformat.soapjaxb.schema To validate against an existing schema. You can use the prefix classpath:, file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the ',' character. String camel.dataformat.soapjaxb.version The SOAP version, either 1.1 or 1.2. The default is 1.1. 1.1 String 312.3. ElementNameStrategy An element name strategy is used for two purposes. The first is to find an XML element name for a given object and SOAP action when marshalling the object into a SOAP message. The second is to find an Exception class for a given SOAP fault name. Strategy Usage QNameStrategy Uses a fixed qName that is configured on instantiation. Exception lookup is not supported. TypeNameStrategy Uses the name and namespace from the @XMLType annotation of the given type. If no namespace is set then package-info is used. Exception lookup is not supported. ServiceInterfaceStrategy Uses information from a webservice interface to determine the type name and to find the exception class for a SOAP fault. If you have generated the web service stub code with cxf-codegen or a similar tool, you probably want to use the ServiceInterfaceStrategy. If you have no annotated service interface, you should use QNameStrategy or TypeNameStrategy. 312.4. Using the Java DSL The following example uses a named DataFormat of soap which is configured with the package com.example.customerservice to initialize the JAXBContext. The second parameter is the ElementNameStrategy. The route is able to marshal normal objects as well as exceptions. (Note that the example below just sends a SOAP envelope to a queue. A web service provider would actually need to be listening to the queue for a SOAP call to occur, in which case it would be a one-way SOAP request. If you need request-reply, see the webservice client example in Section 312.6.1.)
SoapJaxbDataFormat soap = new SoapJaxbDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class)); from("direct:start") .marshal(soap) .to("jms:myQueue"); Tip See also: because the SOAP dataformat inherits from the JAXB dataformat, most JAXB settings apply here as well. 312.4.1. Using SOAP 1.2 Available as of Camel 2.11 SoapJaxbDataFormat soap = new SoapJaxbDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class)); soap.setVersion("1.2"); from("direct:start") .marshal(soap) .to("jms:myQueue"); When using the XML DSL there is a version attribute you can set on the <soapjaxb> element. <!-- Defining a ServiceInterfaceStrategy for retrieving the element name when marshalling --> <bean id="myNameStrategy" class="org.apache.camel.dataformat.soap.name.ServiceInterfaceStrategy"> <constructor-arg value="com.example.customerservice.CustomerService"/> <constructor-arg value="true"/> </bean> And in the Camel route: <route> <from uri="direct:start"/> <marshal> <soapjaxb contentPath="com.example.customerservice" version="1.2" elementNameStrategyRef="myNameStrategy"/> </marshal> <to uri="jms:myQueue"/> </route> 312.5. Multi-part Messages Available as of Camel 2.8.1 Multi-part SOAP messages are supported by the ServiceInterfaceStrategy. The ServiceInterfaceStrategy must be initialized with a service interface definition that is annotated in accordance with JAX-WS 2.2 and meets the requirements of the Document Bare style. The target method must meet the following criteria, as per the JAX-WS specification: 1) it must have at most one in or in/out non-header parameter, 2) if it has a return type other than void it must have no in/out or out non-header parameters, 3) if it has a return type of void it must have at most one in/out or out non-header parameter. The ServiceInterfaceStrategy should be initialized with a boolean parameter that indicates whether the mapping strategy applies to the request parameters or the response parameters. ServiceInterfaceStrategy strat = new ServiceInterfaceStrategy(com.example.customerservice.multipart.MultiPartCustomerService.class, true); SoapJaxbDataFormat soapDataFormat = new SoapJaxbDataFormat("com.example.customerservice.multipart", strat); 312.5.1. Multi-part Request The payload parameters for a multi-part request are initialized using a BeanInvocation object that reflects the signature of the target operation. The camel-soap DataFormat maps the content in the BeanInvocation to fields in the SOAP header and body in accordance with the JAX-WS mapping when the marshal() processor is invoked. BeanInvocation beanInvocation = new BeanInvocation(); // Identify the target method beanInvocation.setMethod(MultiPartCustomerService.class.getMethod("getCustomersByName", GetCustomersByName.class, com.example.customerservice.multipart.Product.class)); // Populate the method arguments GetCustomersByName getCustomersByName = new GetCustomersByName(); getCustomersByName.setName("Dr. Multipart"); Product product = new Product(); product.setName("Multiuse Product"); product.setDescription("Useful for lots of things."); Object[] args = new Object[] {getCustomersByName, product}; // Add the arguments to the bean invocation beanInvocation.setArgs(args); // Set the bean invocation object as the message body exchange.getIn().setBody(beanInvocation); 312.5.2. Multi-part Response A multi-part SOAP response may include an element in the SOAP body and will have one or more elements in the SOAP header.
The camel-soap DataFormat will unmarshal the element in the SOAP body (if it exists) and place it onto the body of the out message in the exchange. Header elements will not be unmarshalled into their JAXB-mapped object types. Instead, these elements are placed into the Camel out message header org.apache.camel.dataformat.soap.UNMARSHALLED_HEADER_LIST. The elements will appear either as element instance values, or as JAXBElement values, depending upon the setting of the ignoreJAXBElement property. This property is inherited from camel-jaxb. You can also have the camel-soap DataFormat ignore header content altogether by setting the ignoreUnmarshalledHeaders value to true. 312.5.3. Holder Object mapping JAX-WS specifies the use of a type-parameterized javax.xml.ws.Holder object for In/Out and Out parameters. A Holder object may be used when building the BeanInvocation, or you may use an instance of the parameterized type directly. The camel-soap DataFormat marshals Holder values in accordance with the JAXB mapping for the class of the Holder's value. No mapping is provided for Holder objects in an unmarshalled response. 312.6. Examples 312.6.1. Webservice client The following route supports marshalling the request and unmarshalling a response or fault. String WS_URI = "cxf://http://myserver/customerservice?serviceClass=com.example.customerservice&dataFormat=MESSAGE"; SoapJaxbDataFormat soapDF = new SoapJaxbDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class)); from("direct:customerServiceClient") .onException(Exception.class) .handled(true) .unmarshal(soapDF) .end() .marshal(soapDF) .to(WS_URI) .unmarshal(soapDF); The snippet below creates a proxy for the service interface and makes a SOAP call to the above route. import org.apache.camel.Endpoint; import org.apache.camel.component.bean.ProxyHelper; ... Endpoint startEndpoint = context.getEndpoint("direct:customerServiceClient"); ClassLoader classLoader = Thread.currentThread().getContextClassLoader(); // CustomerService below is the service endpoint interface, *not* the javax.xml.ws.Service subclass CustomerService proxy = ProxyHelper.createProxy(startEndpoint, classLoader, CustomerService.class); GetCustomersByNameResponse response = proxy.getCustomersByName(new GetCustomersByName()); 312.6.2. Webservice Server The following route sets up a webservice server that listens on the JMS queue customerServiceQueue and processes requests using the class CustomerServiceImpl. CustomerServiceImpl should, of course, implement the interface CustomerService. Instead of instantiating the server class directly, it could be defined in a Spring context as a regular bean. SoapJaxbDataFormat soapDF = new SoapJaxbDataFormat("com.example.customerservice", new ServiceInterfaceStrategy(CustomerService.class)); CustomerService serverBean = new CustomerServiceImpl(); from("jms://queue:customerServiceQueue") .onException(Exception.class) .handled(true) .marshal(soapDF) .end() .unmarshal(soapDF) .bean(serverBean) .marshal(soapDF); 312.7. Dependencies To use the SOAP dataformat in your Camel routes you need to add the following dependency to your POM. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-soap</artifactId> <version>2.3.0</version> </dependency>
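The examples above all use the ServiceInterfaceStrategy; as a complement, the sketch below illustrates the QNameStrategy described in Section 312.3, which is the usual fallback when no annotated service interface is available. It is a minimal, illustrative example only: the namespace URI, element name, and endpoints are assumptions made for the sketch, and it assumes QNameStrategy exposes a constructor taking a QName, as its description (a fixed qName configured on instantiation) suggests.
import javax.xml.namespace.QName;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.soap.SoapJaxbDataFormat;
import org.apache.camel.dataformat.soap.name.QNameStrategy;
public class QNameStrategyRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fixed element name used for every marshalled payload; exception lookup is not supported by this strategy
        QNameStrategy strategy = new QNameStrategy(
                new QName("http://customerservice.example.com/", "getCustomersByName"));
        SoapJaxbDataFormat soap = new SoapJaxbDataFormat("com.example.customerservice", strategy);
        from("direct:start")
            .marshal(soap)
            .to("jms:myQueue");
    }
}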
[ "SoapJaxbDataFormat soap = new SoapJaxbDataFormat(\"com.example.customerservice\", new ServiceInterfaceStrategy(CustomerService.class)); from(\"direct:start\") .marshal(soap) .to(\"jms:myQueue\");", "SoapJaxbDataFormat soap = new SoapJaxbDataFormat(\"com.example.customerservice\", new ServiceInterfaceStrategy(CustomerService.class)); soap.setVersion(\"1.2\"); from(\"direct:start\") .marshal(soap) .to(\"jms:myQueue\");", "<!-- Defining a ServiceInterfaceStrategy for retrieving the element name when marshalling --> <bean id=\"myNameStrategy\" class=\"org.apache.camel.dataformat.soap.name.ServiceInterfaceStrategy\"> <constructor-arg value=\"com.example.customerservice.CustomerService\"/> <constructor-arg value=\"true\"/> </bean>", "<route> <from uri=\"direct:start\"/> <marshal> <soapjaxb contentPath=\"com.example.customerservice\" version=\"1.2\" elementNameStrategyRef=\"myNameStrategy\"/> </marshal> <to uri=\"jms:myQueue\"/> </route>", "ServiceInterfaceStrategy strat = new ServiceInterfaceStrategy(com.example.customerservice.multipart.MultiPartCustomerService.class, true); SoapJaxbDataFormat soapDataFormat = new SoapJaxbDataFormat(\"com.example.customerservice.multipart\", strat);", "BeanInvocation beanInvocation = new BeanInvocation(); // Identify the target method beanInvocation.setMethod(MultiPartCustomerService.class.getMethod(\"getCustomersByName\", GetCustomersByName.class, com.example.customerservice.multipart.Product.class)); // Populate the method arguments GetCustomersByName getCustomersByName = new GetCustomersByName(); getCustomersByName.setName(\"Dr. Multipart\"); Product product = new Product(); product.setName(\"Multiuse Product\"); product.setDescription(\"Useful for lots of things.\"); Object[] args = new Object[] {getCustomersByName, product}; // Add the arguments to the bean invocation beanInvocation.setArgs(args); // Set the bean invocation object as the message body exchange.getIn().setBody(beanInvocation);", "String WS_URI = \"cxf://http://myserver/customerservice?serviceClass=com.example.customerservice&dataFormat=MESSAGE\"; SoapJaxbDataFormat soapDF = new SoapJaxbDataFormat(\"com.example.customerservice\", new ServiceInterfaceStrategy(CustomerService.class)); from(\"direct:customerServiceClient\") .onException(Exception.class) .handled(true) .unmarshal(soapDF) .end() .marshal(soapDF) .to(WS_URI) .unmarshal(soapDF);", "import org.apache.camel.Endpoint; import org.apache.camel.component.bean.ProxyHelper; Endpoint startEndpoint = context.getEndpoint(\"direct:customerServiceClient\"); ClassLoader classLoader = Thread.currentThread().getContextClassLoader(); // CustomerService below is the service endpoint interface, *not* the javax.xml.ws.Service subclass CustomerService proxy = ProxyHelper.createProxy(startEndpoint, classLoader, CustomerService.class); GetCustomersByNameResponse response = proxy.getCustomersByName(new GetCustomersByName());", "SoapJaxbDataFormat soapDF = new SoapJaxbDataFormat(\"com.example.customerservice\", new ServiceInterfaceStrategy(CustomerService.class)); CustomerService serverBean = new CustomerServiceImpl(); from(\"jms://queue:customerServiceQueue\") .onException(Exception.class) .handled(true) .marshal(soapDF) .end() .unmarshal(soapDF) .bean(serverBean) .marshal(soapDF);", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-soap</artifactId> <version>2.3.0</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/soapjaxb-dataformat
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_google_cloud/making-open-source-more-inclusive
Chapter 5. Build and run microservices applications on the OpenShift image for JBoss EAP XP
Chapter 5. Build and run microservices applications on the OpenShift image for JBoss EAP XP You can build and run your microservices applications on the OpenShift image for JBoss EAP XP. Note JBoss EAP XP is supported only on OpenShift 4 and later versions. Use the following workflow to build and run a microservices application on the OpenShift image for JBoss EAP XP by using the source-to-image (S2I) process. Note The OpenShift images for JBoss EAP XP 3.0.0 provide a default standalone configuration file, which is based on the standalone-microprofile-ha.xml file. For more information about the server configuration files included in JBoss EAP XP, see the Standalone server configuration files section. This workflow uses the microprofile-config quickstart as an example. The quickstart provides a small, specific working example that can be used as a reference for your own project. See the microprofile-config quickstart that ships with JBoss EAP XP 3.0.0 for more information. Additional resources For more information about the server configuration files included in JBoss EAP XP, see Standalone server configuration files . 5.1. Preparing OpenShift for application deployment Prepare OpenShift for application deployment. Prerequisites You have installed an operational OpenShift instance. For more information, see the Installing and Configuring OpenShift Container Platform Clusters book on Red Hat Customer Portal . Procedure Log in to your OpenShift instance using the oc login command. Create a new project in OpenShift. A project allows a group of users to organize and manage content separately from other groups. You can create a project in OpenShift using the following command. For example, for the microprofile-config quickstart, create a new project named eap-demo using the following command. 5.2. Configuring authentication to the Red Hat Container Registry Before you can import and use the OpenShift image for JBoss EAP XP, you must configure authentication to the Red Hat Container Registry. Create an authentication token using a registry service account to configure access to the Red Hat Container Registry. You need not use or store your Red Hat account's username and password in your OpenShift configuration when you use an authentication token. Procedure Follow the instructions on Red Hat Customer Portal to create an authentication token using a Registry Service Account management application . Download the YAML file containing the OpenShift secret for the token. You can download the YAML file from the OpenShift Secret tab on your token's Token Information page. Create the authentication token secret for your OpenShift project using the YAML file that you downloaded: Configure the secret for your OpenShift project using the following commands, replacing the secret name below with the name of your secret created in the step. Additional resources Configuring authentication to the Red Hat Container Registry Registry Service Account management application Configuring access to secured registries 5.3. Importing the latest OpenShift imagestreams and templates for JBoss EAP XP Import the latest OpenShift imagestreams and templates for JBoss EAP XP. Important OpenJDK 8 images and imagestreams on OpenShift are deprecated. The images and imagestreams are still supported on OpenShift. However, no enhancements are made to these images and imagestreams and they might be removed in the future. 
Red Hat continues to provide full support and bug fixes OpenJDK 8 images and imagestreams under its standard support terms and conditions. Procedure Use one of the following commands to import the latest JDK 11 imagestreams and templates for the OpenShift image for JBoss EAP XP into your OpenShift project's namespace. Import JDK 11 imagestream: This command imports the following imagestreams and templates: The JDK 11 builder imagestream: jboss-eap-xp3-openjdk11-openshift The JDK 11 runtime imagestream: jboss-eap-xp3-openjdk11-runtime-openshift Import the JDK 11 templates: Note The JBoss EAP XP imagestreams and templates imported using the above command are only available within that OpenShift project. If you have administrative access to the general openshift namespace and want the imagestreams and templates to be accessible by all projects, add -n openshift to the oc replace line of the command. For example: If you want to import the imagestreams and templates into a different project, add the -n PROJECT_NAME to the oc replace line of the command. For example: If you use the cluster-samples-operator, see the OpenShift documentation on configuring the cluster samples operator. See https://docs.openshift.com/container-platform/latest/openshift_images/configuring-samples-operator.html for details about configuring the cluster samples operator. 5.4. Deploying a JBoss EAP XP source-to-image (S2I) application on OpenShift Deploy a JBoss EAP XP source-to-image (S2I) application on OpenShift. Important OpenJDK 8 images and imagestreams on OpenShift are deprecated. The images and imagestreams are still supported on OpenShift. However, no enhancements are made to these images and imagestreams and they might be removed in the future. Red Hat continues to provide full support and bug fixes for OpenJDK 8 images and imagestreams under its standard support terms and conditions. Prerequisites Optional: A template can specify default values for many template parameters, and you might have to override some, or all, of the defaults. To see template information, including a list of parameters and any default values, use the command oc describe template TEMPLATE_NAME . Procedure Create a new OpenShift application using the JBoss EAP XP image and your Java application's source code. Use one of the provided JBoss EAP XP templates for S2I builds. 1 The template to use. The application image is tagged with the latest tag. 2 The latest images streams and templates were imported into the project's namespace , so you must specify the namespace of where to find the imagestream. This is usually the project's name. 3 URL to the repository containing the application source code. 4 The Git repository reference to use for the source code. This can be a Git branch or tag reference. 5 The directory within the source repository to build. Note A template can specify default values for many template parameters, and you might have to override some, or all, of the defaults. To see template information, including a list of parameters and any default values, use the command oc describe template TEMPLATE_NAME . You might also want to configure environment variables when creating your new OpenShift application. Retrieve the name of the build configurations. Use the name of the build configurations from the step to view the Maven progress of the builds. For example, for the microprofile-config , the following command shows the progress of the Maven builds. 
Additional resources Importing the latest OpenShift imagestreams and templates for JBoss EAP XP Preparing OpenShift for application deployment 5.5. Completing post-deployment tasks for JBoss EAP XP source-to-image (S2I) application Depending on your application, you might need to complete some tasks after your OpenShift application has been built and deployed. Examples of post-deployment tasks include the following: Exposing a service so that the application is viewable from outside of OpenShift. Scaling your application to a specific number of replicas. Procedure Get the service name of your application using the following command. Optional : Expose the main service as a route so you can access your application from outside of OpenShift. For example, for the microprofile-config quickstart, use the following command to expose the required service and port. Note If you used a template to create the application, the route might already exist. If it does, continue on to the step. Get the URL of the route. Access the application in your web browser using the URL. The URL is the value of the HOST/PORT field from command's output. Note For JBoss EAP XP 3.0.0 GA distribution, the Microprofile Config quickstart does not reply to HTTPS GET requests to the application's root context. This enhancement is only available in the {JBossXPShortName101} GA distribution. For example, to interact with the Microprofile Config application, the URL might be http:// HOST_PORT_Value /config/value in your browser. If your application does not use the JBoss EAP root context, append the context of the application to the URL. For example, for the microprofile-config quickstart, the URL might be http:// HOST_PORT_VALUE /microprofile-config/ . Optionally, you can scale up the application instance by running the following command. This command increases the number of replicas to 3. For example, for the microprofile-config quickstart, use the following command to scale up the application. Additional Resources For more information about JBoss EAP XP Quickstarts, see the Use the Quickstarts section in the Using MicroProfile in JBoss EAP guide.
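For a consolidated view, the post-deployment commands referenced in this section can be run as the following sequence for the microprofile-config quickstart; the service and deployment configuration names (eap-xp3-basic-app) are the ones produced by the eap-xp3-basic-s2i template used in this example.
# Find the service created for the application
oc get service
# Expose the main service as a route so it is reachable from outside OpenShift
oc expose service/eap-xp3-basic-app --port=8080
# Retrieve the route URL (the HOST/PORT column)
oc get route
# Optionally scale the application to three replicas
oc scale deploymentconfig/eap-xp3-basic-app --replicas=3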
[ "oc new-project PROJECT_NAME", "oc new-project eap-demo", "create -f 1234567_myserviceaccount-secret.yaml", "secrets link default 1234567-myserviceaccount-pull-secret --for=pull secrets link builder 1234567-myserviceaccount-pull-secret --for=pull", "replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp3/jboss-eap-xp3-openjdk11-openshift.json", "replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap-xp3/templates/eap-xp3-basic-s2i.json", "replace -n openshift --force -f", "replace -n PROJECT_NAME --force -f", "oc new-app --template=eap-xp3-basic-s2i \\ 1 -p EAP_IMAGE_NAME=jboss-eap-xp3-openjdk11-openshift:latest -p EAP_RUNTIME_IMAGE_NAME=jboss-eap-xp3-openjdk11-runtime-openshift:latest -p IMAGE_STREAM_NAMESPACE=eap-demo \\ 2 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts \\ 3 -p SOURCE_REPOSITORY_REF=xp-3.0.x \\ 4 -p CONTEXT_DIR=microprofile-config 5", "oc get bc -o name", "oc logs -f buildconfig/USD{APPLICATION_NAME}-build-artifacts ... Push successful oc logs -f buildconfig/USD{APPLICATION_NAME} ... Push successful", "oc logs -f buildconfig/eap-xp3-basic-app-build-artifacts ... Push successful oc logs -f buildconfig/eap-xp3-basic-app ... Push successful", "oc get service", "oc expose service/eap-xp3-basic-app --port=8080", "oc get route", "oc scale deploymentconfig DEPLOYMENTCONFIG_NAME --replicas=3", "oc scale deploymentconfig/eap-xp3-basic-app --replicas=3" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/using-the-openshift-image-for-jboss-eap-xp_default
Chapter 17. kubernetes
Chapter 17. kubernetes The namespace for Kubernetes-specific metadata Data type group 17.1. kubernetes.pod_name The name of the pod Data type keyword 17.2. kubernetes.pod_id The Kubernetes ID of the pod Data type keyword 17.3. kubernetes.namespace_name The name of the namespace in Kubernetes Data type keyword 17.4. kubernetes.namespace_id The ID of the namespace in Kubernetes Data type keyword 17.5. kubernetes.host The Kubernetes node name Data type keyword 17.6. kubernetes.container_name The name of the container in Kubernetes Data type keyword 17.7. kubernetes.annotations Annotations associated with the Kubernetes object Data type group 17.8. kubernetes.labels Labels present on the original Kubernetes Pod Data type group 17.9. kubernetes.event The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows type Event in Event v1 core . Data type group 17.9.1. kubernetes.event.verb The type of event, ADDED , MODIFIED , or DELETED Data type keyword Example value ADDED 17.9.2. kubernetes.event.metadata Information related to the location and time of the event creation Data type group 17.9.2.1. kubernetes.event.metadata.name The name of the object that triggered the event creation Data type keyword Example value java-mainclass-1.14d888a4cfc24890 17.9.2.2. kubernetes.event.metadata.namespace The name of the namespace where the event originally occurred. Note that it differs from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 17.9.2.3. kubernetes.event.metadata.selfLink A link to the event Data type keyword Example value /api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890 17.9.2.4. kubernetes.event.metadata.uid The unique ID of the event Data type keyword Example value d828ac69-7b58-11e7-9cf5-5254002f560c 17.9.2.5. kubernetes.event.metadata.resourceVersion A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed. Data type integer Example value 311987 17.9.3. kubernetes.event.involvedObject The object that the event is about. Data type group 17.9.3.1. kubernetes.event.involvedObject.kind The type of object Data type keyword Example value ReplicationController 17.9.3.2. kubernetes.event.involvedObject.namespace The namespace name of the involved object. Note that it may differ from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 17.9.3.3. kubernetes.event.involvedObject.name The name of the object that triggered the event Data type keyword Example value java-mainclass-1 17.9.3.4. kubernetes.event.involvedObject.uid The unique ID of the object Data type keyword Example value e6bff941-76a8-11e7-8193-5254002f560c 17.9.3.5. kubernetes.event.involvedObject.apiVersion The version of kubernetes master API Data type keyword Example value v1 17.9.3.6. kubernetes.event.involvedObject.resourceVersion A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed. Data type keyword Example value 308882 17.9.4. kubernetes.event.reason A short machine-understandable string that gives the reason for generating this event Data type keyword Example value SuccessfulCreate 17.9.5. kubernetes.event.source_component The component that reported this event Data type keyword Example value replication-controller 17.9.6. 
kubernetes.event.firstTimestamp The time at which the event was first recorded Data type date Example value 2017-08-07 10:11:57.000000000 Z 17.9.7. kubernetes.event.count The number of times this event has occurred Data type integer Example value 1 17.9.8. kubernetes.event.type The type of event, Normal or Warning . New types could be added in the future. Data type keyword Example value Normal
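To put the fields above in context, a single log record carrying this Kubernetes metadata might look like the following abbreviated JSON document. The values are illustrative only (several are taken from the example values listed in this chapter) and do not come from a real cluster:
{
  "kubernetes": {
    "pod_name": "eventrouter-1-abcde",
    "namespace_name": "default",
    "host": "node-1.example.com",
    "container_name": "eventrouter",
    "event": {
      "verb": "ADDED",
      "reason": "SuccessfulCreate",
      "type": "Normal",
      "count": 1,
      "involvedObject": {
        "kind": "ReplicationController",
        "name": "java-mainclass-1",
        "namespace": "default"
      }
    }
  }
}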
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields
Chapter 3. Logging information for Red Hat Quay
Chapter 3. Logging information for Red Hat Quay Obtaining log information can be beneficial in various ways for managing, monitoring, and troubleshooting applications running in containers or pods. Some of the reasons why obtaining log information is valuable include the following: Debugging and Troubleshooting : Logs provide insights into what's happening inside the application, allowing developers and system administrators to identify and resolve issues. By analyzing log messages, one can identify errors, exceptions, warnings, or unexpected behavior that might occur during the application's execution. Performance Monitoring : Monitoring logs helps to track the performance of the application and its components. Monitoring metrics like response times, request rates, and resource utilization can help in optimizing and scaling the application to meet the demand. Security Analysis : Logs can be essential in auditing and detecting potential security breaches. By analyzing logs, suspicious activities, unauthorized access attempts, or any abnormal behavior can be identified, helping in detecting and responding to security threats. Tracking User Behavior : In some cases, logs can be used to track user activities and behavior. This is particularly important for applications that handle sensitive data, where tracking user actions can be useful for auditing and compliance purposes. Capacity Planning : Log data can be used to understand resource utilization patterns, which can aid in capacity planning. By analyzing logs, one can identify peak usage periods, anticipate resource needs, and optimize infrastructure accordingly. Error Analysis : When errors occur, logs can provide valuable context about what happened leading up to the error. This can help in understanding the root cause of the issue and facilitating the debugging process. Verification of Deployment : Logging during the deployment process can help verify if the application is starting correctly and if all components are functioning as expected. Continuous Integration/Continuous Deployment (CI/CD) : In CI/CD pipelines, logging is essential to capture build and deployment statuses, allowing teams to monitor the success or failure of each stage. 3.1. Obtaining log information for Red Hat Quay Log information can be obtained for all types of Red Hat Quay deployments, including geo-replication deployments, standalone deployments, and Operator deployments. Log information can also be obtained for mirrored repositories. It can help you troubleshoot authentication and authorization issues, and object storage issues. After you have obtained the necessary log information, you can search the Red Hat Knowledgebase for a solution, or file a support ticket with the Red Hat Support team. Use the following procedure to obtain logs for your Red Hat Quay deployment. Procedure If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command to view the logs: $ oc logs <quay_pod_name> If you are on a standalone Red Hat Quay deployment, enter the following command: $ podman logs <quay_container_name> Example output ... gunicorn-web stdout | 2023-01-20 15:41:52,071 [205] [DEBUG] [app] Starting request: urn:request:0d88de25-03b0-4cf9-b8bc-87f1ac099429 (/oauth2/azure/callback) {'X-Forwarded-For': '174.91.79.124'} ... 3.2. Examining verbose logs Red Hat Quay does not have verbose logs; however, with the following procedures, you can obtain a detailed status check of your database pod or container.
Note Additional debugging information can be returned if you have deployed Red Hat Quay in one of the following ways: You have deployed Red Hat Quay by passing in the DEBUGLOG=true variable. You have deployed Red Hat Quay with LDAP authentication enabled by passing in the DEBUGLOG=true and USERS_DEBUG=1 variables. You have configured Red Hat Quay on OpenShift Container Platform by updating the QuayRegistry resource to include DEBUGLOG=true. For more information, see "Running Red Hat Quay in debug mode". Procedure Enter the following commands to examine verbose database logs. If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following commands: $ oc logs <quay_pod_name> --previous $ oc logs <quay_pod_name> --previous -c <container_name> $ oc cp <quay_pod_name>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host If you are using a standalone deployment of Red Hat Quay, enter the following commands: $ podman logs <quay_container_id> --previous $ podman logs <quay_container_id> --previous -c <container_name> $ podman cp <quay_container_id>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host
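As an illustration of the debug variables mentioned in the note above, a standalone deployment could enable them when the container is started, roughly as follows. This is a sketch only: the image reference, container name, and port mapping are placeholders, not values from the original text.
# Start a standalone Red Hat Quay container with debug logging enabled
podman run -d --name quay \
  -e DEBUGLOG=true \
  -e USERS_DEBUG=1 \
  -p 8080:8080 \
  <quay_image>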
[ "oc logs <quay_pod_name>", "podman logs <quay_container_name>", "gunicorn-web stdout | 2023-01-20 15:41:52,071 [205] [DEBUG] [app] Starting request: urn:request:0d88de25-03b0-4cf9-b8bc-87f1ac099429 (/oauth2/azure/callback) {'X-Forwarded-For': '174.91.79.124'}", "oc logs <quay_pod_name> --previous", "oc logs <quay_pod_name> --previous -c <container_name>", "oc cp <quay_pod_name>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host", "podman logs <quay_container_id> --previous", "podman logs <quay_container_id> --previous -c <container_name>", "podman cp <quay_container_id>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/troubleshooting_red_hat_quay/obtaining-quay-logs
Chapter 1. Network APIs
Chapter 1. Network APIs 1.1. AdminNetworkPolicy [policy.networking.k8s.io/v1alpha1] Description AdminNetworkPolicy is a cluster level resource that is part of the AdminNetworkPolicy API. Type object 1.2. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1] Description AdminPolicyBasedExternalRoute is a CRD allowing the cluster administrators to configure policies for external gateway IPs to be applied to all the pods contained in selected namespaces. Egress traffic from the pods that belong to the selected namespaces to outside the cluster is routed through these external gateway IPs. Type object 1.3. BaselineAdminNetworkPolicy [policy.networking.k8s.io/v1alpha1] Description BaselineAdminNetworkPolicy is a cluster level resource that is part of the AdminNetworkPolicy API. Type object 1.4. CloudPrivateIPConfig [cloud.network.openshift.io/v1] Description CloudPrivateIPConfig performs an assignment of a private IP address to the primary NIC associated with cloud VMs. This is done by specifying the IP and Kubernetes node which the IP should be assigned to. This CRD is intended to be used by the network plugin which manages the cluster network. The spec side represents the desired state requested by the network plugin, and the status side represents the current state that this CRD's controller has executed. No users will have permission to modify it, and if a cluster-admin decides to edit it for some reason, their changes will be overwritten the time the network plugin reconciles the object. Note: the CR's name must specify the requested private IP address (can be IPv4 or IPv6). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object 1.6. EgressIP [k8s.ovn.org/v1] Description EgressIP is a CRD allowing the user to define a fixed source IP for all egress traffic originating from any pods which match the EgressIP resource according to its spec definition. Type object 1.7. EgressQoS [k8s.ovn.org/v1] Description EgressQoS is a CRD that allows the user to define a DSCP value for pods egress traffic on its namespace to specified CIDRs. Traffic from these pods will be checked against each EgressQoSRule in the namespace's EgressQoS, and if there is a match the traffic is marked with the relevant DSCP value. Type object 1.8. EgressService [k8s.ovn.org/v1] Description EgressService is a CRD that allows the user to request that the source IP of egress packets originating from all of the pods that are endpoints of the corresponding LoadBalancer Service would be its ingress IP. In addition, it allows the user to request that egress packets originating from all of the pods that are endpoints of the LoadBalancer service would use a different network than the main one. Type object 1.9. Endpoints [v1] Description Endpoints is a collection of endpoints that implement the actual service. Example: Type object 1.10. EndpointSlice [discovery.k8s.io/v1] Description EndpointSlice represents a subset of the endpoints that implement a service. 
For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints. Type object 1.11. EgressRouter [network.operator.openshift.io/v1] Description EgressRouter is a feature allowing the user to define an egress router that acts as a bridge between pods and external systems. The egress router runs a service that redirects egress traffic originating from a pod or a group of pods to a remote external system or multiple destinations as per configuration. It is consumed by the cluster-network-operator. More specifically, given an EgressRouter CR with <name>, the CNO will create and manage: - A service called <name> - An egress pod called <name> - A NAD called <name> Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). EgressRouter is a single egressrouter pod configuration object. Type object 1.12. Ingress [networking.k8s.io/v1] Description Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Type object 1.13. IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 1.14. IPPool [whereabouts.cni.cncf.io/v1alpha1] Description IPPool is the Schema for the ippools API Type object 1.15. MultiNetworkPolicy [k8s.cni.cncf.io/v1beta1] Description MultiNetworkPolicy is a CRD schema to provide NetworkPolicy mechanism for net-attach-def which is specified by the Network Plumbing Working Group. MultiNetworkPolicy is identical to Kubernetes NetworkPolicy, See: https://kubernetes.io/docs/concepts/services-networking/network-policies/ . Type object 1.16. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1] Description NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec Type object 1.17. NetworkPolicy [networking.k8s.io/v1] Description NetworkPolicy describes what network traffic is allowed for a set of Pods Type object 1.18. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object 1.19. PodNetworkConnectivityCheck [controlplane.operator.openshift.io/v1alpha1] Description PodNetworkConnectivityCheck Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.20. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. 
An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.21. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object
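For reference, a minimal NetworkPolicy (Section 1.17) that admits ingress traffic to a set of pods only from pods carrying a given label might look like the following; the namespace, labels, and port are illustrative only:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: my-project
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080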
[ "Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/network-apis
Chapter 3. Joining RHEL systems to an Active Directory by using RHEL system roles
Chapter 3. Joining RHEL systems to an Active Directory by using RHEL system roles If your organization uses Microsoft Active Directory (AD) to centrally manage users, groups, and other resources, you can join your Red Hat Enterprise Linux (RHEL) host to this AD. For example, AD users can then log into RHEL and you can make services on the RHEL host available for authenticated AD users. By using the ad_integration RHEL system role, you can automate the integration of Red Hat Enterprise Linux system into an Active Directory (AD) domain. Note The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles. 3.1. Joining RHEL to an Active Directory domain by using the ad_integration RHEL system role You can use the ad_integration RHEL system role to automate the process of joining RHEL to an Active Directory (AD) domain. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed node uses a DNS server that can resolve AD DNS entries. Credentials of an AD account which has permissions to join computers to the domain. Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: usr: administrator pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: "{{ usr }}" ad_integration_password: "{{ pwd }}" ad_integration_realm: "ad.example.com" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: "time_server.ad.example.com" The settings specified in the example playbook include the following: ad_integration_allow_rc4_crypto: <true|false> Configures whether the role activates the AD-SUPPORT crypto policy on the managed node. By default, RHEL does not support the weak RC4 encryption but, if Kerberos in your AD still requires RC4, you can enable this encryption type by setting ad_integration_allow_rc4_crypto: true . Omit this the variable or set it to false if Kerberos uses AES encryption. ad_integration_timesync_source: <time_server> Specifies the NTP server to use for time synchronization. Kerberos requires a synchronized time among AD domain controllers and domain members to prevent replay attacks. If you omit this variable, the ad_integration role does not utilize the timesync RHEL system role to configure time synchronization on the managed node. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Verification Check if AD users, such as administrator , are available locally on the managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file /usr/share/doc/rhel-system-roles/ad_integration/ directory Ansible vault
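Putting the procedure together, the command sequence on the control node typically looks like the following; the file and host names match the examples used in this chapter.
# Create the encrypted vault that stores the AD credentials
ansible-vault create vault.yml
# Validate the playbook syntax without changing the managed node
ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
# Run the playbook to join the managed node to the AD domain
ansible-playbook --ask-vault-pass ~/playbook.yml
# Verify that AD users are resolvable on the managed node
ansible managed-node-01.example.com -m command -a 'getent passwd [email protected]'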
[ "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "usr: administrator pwd: <password>", "--- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: \"{{ usr }}\" ad_integration_password: \"{{ pwd }}\" ad_integration_realm: \"ad.example.com\" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: \"time_server.ad.example.com\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'getent passwd [email protected]' [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/integrating_rhel_systems_directly_with_windows_active_directory/integrating-rhel-systems-into-ad-directly-with-ansible-using-rhel-system-roles_integrating-rhel-systems-directly-with-active-directory
27.2.7. Controlling Access to At and Batch
27.2.7. Controlling Access to At and Batch You can restrict access to the at and batch commands using the /etc/at.allow and /etc/at.deny files. These access control files use the same format, defining one user name on each line. Note that no whitespace is permitted in either file. If the file at.allow exists, only users listed in the file are allowed to use at or batch , and the at.deny file is ignored. If at.allow does not exist, users listed in at.deny are not allowed to use at or batch . The at daemon ( atd ) does not have to be restarted if the access control files are modified. The access control files are read each time a user tries to execute the at or batch commands. The root user can always execute at and batch commands, regardless of the content of the access control files.
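For example, to permit only the users alice and bob to schedule jobs (the user names are illustrative), create an at.allow file containing one user name per line:
$ cat /etc/at.allow
alice
bob
Because at.allow exists in this case, /etc/at.deny is ignored; all other users except root are refused when they run at or batch.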
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-autotasks-at-batch-controlling-access
Chapter 17. Azure Storage Queue Sink
Chapter 17. Azure Storage Queue Sink Send messages to Azure Storage queues. Important The Azure Storage Queue Sink Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The Kamelet understands the following headers, if they are set: expiration / ce-expiration : the time to live of the message in the queue. If the header is not set, a default of 7 days is used. The format is PnDTnHnMn.nS, for example PT20.345S (parses as 20.345 seconds) or P2D (parses as 2 days). 17.1. Configuration Options The following table summarizes the configuration options available for the azure-storage-queue-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The Azure Storage Queue access key. string accountName * Account Name The Azure Storage Queue account name. string queueName * Queue Name The Azure Storage Queue queue name. string Note Fields marked with an asterisk (*) are mandatory. 17.2. Dependencies At runtime, the azure-storage-queue-sink Kamelet relies upon the presence of the following dependencies: camel:azure-storage-queue camel:kamelet 17.3. Usage This section describes how you can use the azure-storage-queue-sink . 17.3.1. Knative Sink You can use the azure-storage-queue-sink Kamelet as a Knative sink by binding it to a Knative object. azure-storage-queue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-sink properties: accessKey: "The Access Key" accountName: "The Account Name" queueName: "The Queue Name" 17.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 17.3.1.2. Procedure for using the cluster CLI Save the azure-storage-queue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-queue-sink-binding.yaml 17.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel azure-storage-queue-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.queueName=The Queue Name" This command creates the KameletBinding in the current namespace on the cluster. 17.3.2. Kafka Sink You can use the azure-storage-queue-sink Kamelet as a Kafka sink by binding it to a Kafka topic. azure-storage-queue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-sink properties: accessKey: "The Access Key" accountName: "The Account Name" queueName: "The Queue Name" 17.3.2.1.
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 17.3.2.2. Procedure for using the cluster CLI Save the azure-storage-queue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-queue-sink-binding.yaml 17.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-queue-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.queueName=The Queue Name" This command creates the KameletBinding in the current namespace on the cluster. 17.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/azure-storage-queue-sink.kamelet.yaml
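After you apply either binding, you can confirm that it was created and picked up by the operator. The following check is a sketch only: it assumes the oc client is logged in to the namespace where you created the binding and that the Camel K operator registers the KameletBinding and Integration resource types shown here.

oc get kameletbinding azure-storage-queue-sink-binding
oc get integration azure-storage-queue-sink-binding -o jsonpath='{.status.phase}'

A Running phase indicates that the sink is consuming from the configured source and writing to the Azure Storage queue.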
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" queueName: \"The Queue Name\"", "apply -f azure-storage-queue-sink-binding.yaml", "kamel bind channel:mychannel azure-storage-queue-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.queueName=The Queue Name\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-queue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-queue-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" queueName: \"The Queue Name\"", "apply -f azure-storage-queue-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-queue-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.queueName=The Queue Name\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/azure_storage_queue_sink
Chapter 11. Uninstalling a cluster on RHOSP from your own infrastructure
Chapter 11. Uninstalling a cluster on RHOSP from your own infrastructure You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure. 11.1. Downloading playbook dependencies The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 11.2. Removing a cluster from RHOSP that uses your own infrastructure You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies." You have the playbooks that you used to install the cluster. You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file. All of the playbooks are in a common directory. Procedure On a command line, run the playbooks that you downloaded: USD ansible-playbook -i inventory.yaml \ down-bootstrap.yaml \ down-control-plane.yaml \ down-compute-nodes.yaml \ down-load-balancers.yaml \ down-network.yaml \ down-security-groups.yaml Remove any DNS record changes you made for the OpenShift Container Platform installation. OpenShift Container Platform is removed from your infrastructure.
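As an optional check after the playbooks complete, you can confirm that no cluster resources remain in the RHOSP project. This is a sketch only: it assumes that credentials for the python3-openstackclient tool are configured (for example, through a clouds.yaml file) and that <cluster_name> is a placeholder for your cluster's name prefix.

openstack server list | grep <cluster_name>
openstack port list | grep <cluster_name>
openstack security group list | grep <cluster_name>

Each command should return no output once the removal has finished.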
[ "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk", "sudo alternatives --set python /usr/bin/python3", "ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/uninstalling-openstack-user
3.11. perf_event
3.11. perf_event When the perf_event subsystem is attached to a hierarchy, all cgroups in that hierarchy can be used to group processes and threads which can then be monitored with the perf tool, as opposed to monitoring each process or thread separately or per-CPU. Cgroups which use the perf_event subsystem do not contain any special tunable parameters other than the common parameters listed in Section 3.12, "Common Tunable Parameters" . For additional information on how tasks in a cgroup can be monitored using the perf tool, refer to the Red Hat Enterprise Linux Developer Guide , available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/ .
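As an illustration of grouping tasks for monitoring, the following sketch assumes that the perf_event subsystem is mounted at /cgroup/perf_event, that the perf package is installed, and that mygroup and <pid> are placeholders you choose.

mkdir /cgroup/perf_event/mygroup
echo <pid> > /cgroup/perf_event/mygroup/tasks
perf stat -e cycles,instructions -a -G mygroup -- sleep 10

The -G option tells perf to count events only for tasks in the named cgroup, system-wide (-a), for the duration of the sleep command.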
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-perf_event
9.4. JMS Master and Slave Back End Configuration
9.4. JMS Master and Slave Back End Configuration While using an Infinispan directory, it is recommended to use the JMS Master/Slave backend. In Infinispan, all nodes share the same index, and because the IndexWriter is active on different nodes, each node tries to acquire the lock on the same index. Instead of sending updates directly to the index, send them to a JMS queue and have a single node apply all changes on behalf of all other nodes. Warning Not enabling a JMS based backend will lead to timeout exceptions when multiple nodes attempt to write to the index. To configure a JMS slave, replace only the backend and set the directory provider to Infinispan. Set the same directory provider on the master and it will connect without the need to set up the copy job across nodes. For Master and Slave backend configuration examples, see the Back End Setup and Operations section of the Red Hat JBoss EAP Administration and Configuration Guide .
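The following property sketch illustrates the slave-side idea described above. It is not a complete configuration: the property names follow Hibernate Search conventions, and the connection factory and queue names are assumptions that must match the JMS resources you actually define.

hibernate.search.default.directory_provider = infinispan
hibernate.search.default.worker.backend = jms
hibernate.search.default.worker.jms.connection_factory = ConnectionFactory
hibernate.search.default.worker.jms.queue = queue/hibernatesearch

On the master node, set the same infinispan directory provider; the master consumes the queue and applies the updates to the shared index on behalf of the slaves.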
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/jms_master_and_slave_back_end_configuration
5. Networking
5. Networking NetworkManager NetworkManager is enabled by default if it is installed. However, NetworkManager is only installed by default in the client use cases. NetworkManager is available to be installed for the server use cases, but is not included in the default installation. 5.1. Technology Previews IPv6 support in IPVS The IPv6 support in IPVS (IP Virtual server) is considered Technology Preview.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/networking
4.3.5. Scripted bulk v2v process
4.3.5. Scripted bulk v2v process For bulk import scenarios, it is advantageous to be able to perform the scripted v2v process from a single host. Remote procedure calls to the Red Hat Enterprise Virtualization Manager can be made using the REST API. This enables a single script running on a single Linux host to perform both steps of the v2v process. Figure 4.5, "Scripted bulk v2v process" illustrates the steps performed by the script. Figure 4.5. Scripted bulk v2v process The scripted bulk v2v process involves the following steps, as shown in Figure 4.5, "Scripted bulk v2v process" : The virtual machine image is retrieved from the source hypervisor. The virtual machine image is packaged and copied to the export storage domain. A remote procedure call is made to the Red Hat Enterprise Virtualization Manager, telling it to import the virtual machine. The Manager imports the virtual machine from the export storage domain. To configure and run the scripted bulk v2v process: Procedure 4.9. Configuring and running the scripted bulk v2v process Ensure the REST API is enabled on the Red Hat Enterprise Virtualization Manager, and it is accessible from the Linux host running the v2v process. For more information about the REST API, see the Red Hat Enterprise Virtualization REST API Guide Guide . On the Linux host, create the file v2v.sh with the following contents. Ensure you edit the script to contain appropriate values for your environment. Example 4.5. Single host v2v script #!/bin/sh # Declare all VMs to import XENDOMAINS=("rhelxen" "rhel5") KVMDOMAINS=("rhelkvm") VMWAREVMS=("rhel54vmware") # Iterate through each Xen domain, performing the conversion for domain in USD{XENDOMAINS[@]} do virt-v2v -ic xen:///localhost -o rhev -os storage.example.com:/exportdomain --network rhevm USDdomain done # Iterate through each KVM domain, performing the conversion for domain in USD{KVMDOMAINS[@]} do virt-v2v -o rhev -os storage.example.com:/exportdomain --network rhevm USDdomain done # Iterate through each VMware VM, performing the conversion for vm in USD{VMWAREVMS[@]} do virt-v2v -ic esx://esx.example.com/?no_verify=1 -o rhev -os storage.example.com:/exportdomain --network rhevm USDvm done # Call the import VM procedure remotely on the RHEV Manager export BASE_URL='https://[rhevm-host]' export HTTP_USER='user@internal' export HTTP_PASSWORD='password' curl -o rhevm.cer http://[rhevm-host]/ca.crt # Get the export storage domains curl -X GET -H "Accept: application/xml" -u "USD{HTTP_USER}:USD{HTTP_PASSWORD}" --cacert rhevm.cer USD{BASE_URL}/api/storagedomains?search=nfs_export -o exportdomain EXPORT_DOMAIN=`xpath exportdomain '/storage_domains/storage_domain/@id' | sed -e 's/ id=//' | sed -e 's/"//g'` # Get the datacenter curl -X GET -H "Accept: application/xml" -u "USD{HTTP_USER}:USD{HTTP_PASSWORD}" --cacert rhevm.cer USD{BASE_URL}/api/datacenters?search=NFS -o dc DC=`xpath dc '/data_centers/data_center/@id' | sed -e 's/ id=//' | sed -e 's/"//g'` # Get the cluster curl -X GET -H "Accept: application/xml" -u "USD{HTTP_USER}:USD{HTTP_PASSWORD}" --cacert rhevm.cer USD{BASE_URL}/api/clusters?search=NFS -o cluster CLUSTER_ELEMENT=`xpath cluster '/clusters/cluster/@id' | sed -e 's/ id=//' | sed -e 's/"//g'` # List contents of export storage domain curl -X GET -H "Accept: application/xml" -u "USD{HTTP_USER}:USD{HTTP_PASSWORD}" --cacert rhevm.cer USD{BASE_URL}/api/storagedomains/USD{EXPORT_DOMAIN}/vms -o vms # For each vm, export VMS=`xpath vms '/vms/vm/actions/link[@rel="import"]/@href' | sed -e 's/ href="//g' | sed 
-e 's/"/ /g'` for vms in USDVMS do curl -v -u "USD{HTTP_USER}:USD{HTTP_PASSWORD}" -H "Content-type: application/xml" -d '<action><cluster><name>cluster_name</name></cluster><storage_domain><name>data_domain</name></storage_domain><overwrite>true</overwrite><discard_snapshots>true</discard_snapshots></action>' --cacert rhevm.cer USD{BASE_URL}USDvms done Note Use the POST method to export virtual machines with the REST API. For more information about using the REST API, see the Red Hat Enterprise Virtualization REST API Guide . Run the v2v.sh script. It can take several hours to convert and import a large number of virtual machines.
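When running the script, it can be useful to keep a record of the conversion output for later troubleshooting. This is optional and assumes a Bash shell; the log file name is arbitrary:

chmod +x v2v.sh
./v2v.sh 2>&1 | tee v2v-run.log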
[ "#!/bin/sh Declare all VMs to import XENDOMAINS=(\"rhelxen\" \"rhel5\") KVMDOMAINS=(\"rhelkvm\") VMWAREVMS=(\"rhel54vmware\") Iterate through each Xen domain, performing the conversion for domain in USD{XENDOMAINS[@]} do virt-v2v -ic xen:///localhost -o rhev -os storage.example.com:/exportdomain --network rhevm USDdomain done Iterate through each KVM domain, performing the conversion for domain in USD{KVMDOMAINS[@]} do virt-v2v -o rhev -os storage.example.com:/exportdomain --network rhevm USDdomain done Iterate through each VMware VM, performing the conversion for vm in USD{VMWAREVMS[@]} do virt-v2v -ic esx://esx.example.com/?no_verify=1 -o rhev -os storage.example.com:/exportdomain --network rhevm USDvm done Call the import VM procedure remotely on the RHEV Manager export BASE_URL='https://[rhevm-host]' export HTTP_USER='user@internal' export HTTP_PASSWORD='password' curl -o rhevm.cer http://[rhevm-host]/ca.crt Get the export storage domains curl -X GET -H \"Accept: application/xml\" -u \"USD{HTTP_USER}:USD{HTTP_PASSWORD}\" --cacert rhevm.cer USD{BASE_URL}/api/storagedomains?search=nfs_export -o exportdomain EXPORT_DOMAIN=`xpath exportdomain '/storage_domains/storage_domain/@id' | sed -e 's/ id=//' | sed -e 's/\"//g'` Get the datacenter curl -X GET -H \"Accept: application/xml\" -u \"USD{HTTP_USER}:USD{HTTP_PASSWORD}\" --cacert rhevm.cer USD{BASE_URL}/api/datacenters?search=NFS -o dc DC=`xpath dc '/data_centers/data_center/@id' | sed -e 's/ id=//' | sed -e 's/\"//g'` Get the cluster curl -X GET -H \"Accept: application/xml\" -u \"USD{HTTP_USER}:USD{HTTP_PASSWORD}\" --cacert rhevm.cer USD{BASE_URL}/api/clusters?search=NFS -o cluster CLUSTER_ELEMENT=`xpath cluster '/clusters/cluster/@id' | sed -e 's/ id=//' | sed -e 's/\"//g'` List contents of export storage domain curl -X GET -H \"Accept: application/xml\" -u \"USD{HTTP_USER}:USD{HTTP_PASSWORD}\" --cacert rhevm.cer USD{BASE_URL}/api/storagedomains/USD{EXPORT_DOMAIN}/vms -o vms For each vm, export VMS=`xpath vms '/vms/vm/actions/link[@rel=\"import\"]/@href' | sed -e 's/ href=\"//g' | sed -e 's/\"/ /g'` for vms in USDVMS do curl -v -u \"USD{HTTP_USER}:USD{HTTP_PASSWORD}\" -H \"Content-type: application/xml\" -d '<action><cluster><name>cluster_name</name></cluster><storage_domain><name>data_domain</name></storage_domain><overwrite>true</overwrite><discard_snapshots>true</discard_snapshots></action>' --cacert rhevm.cer USD{BASE_URL}USDvms done" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-Scripted_Bulk_v2v_Process
7.5. Configure Distribution Mode (Remote Client-Server Mode)
7.5. Configure Distribution Mode (Remote Client-Server Mode) Distribution mode is a clustered mode in Red Hat JBoss Data Grid. Distribution mode can be added to any cache container using the following procedure: Procedure 7.1. The distributed-cache Element The distributed-cache element configures settings for the distributed cache using the following parameters: The name parameter provides a unique identifier for the cache. The mode parameter sets the clustered cache mode. Valid values are SYNC (synchronous) and ASYNC (asynchronous). The (optional) segments parameter specifies the number of hash space segments per cluster. The recommended value for this parameter is ten multiplied by the cluster size and the default value is 20 . The start parameter specifies whether the cache starts when the server starts up or when it is requested or deployed. The owners parameter indicates the number of nodes that will contain each hash segment. If statistics are enabled at the container level, per-cache statistics can be selectively disabled for caches that do not require monitoring by setting the statistics attribute to false . Important JGroups must be appropriately configured for clustered mode before attempting to load this configuration. For details about the cache-container , locking , and transaction elements, see the appropriate chapter.
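As a variation on the attributes described in Procedure 7.1, the following sketch declares an asynchronous distributed cache with three owners. The cache name is hypothetical, and the synchronous example for this element appears in the configuration excerpt that accompanies this section.

<distributed-cache name="asyncCache" mode="ASYNC" segments="20" start="EAGER" owners="3" statistics="true"/>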
[ "<cache-container name=\"clustered\" default-cache=\"default\" statistics=\"true\"> <transport executor=\"infinispan-transport\" lock-timeout=\"60000\"/> <distributed-cache name=\"default\" mode=\"SYNC\" segments=\"20\" start=\"EAGER\" owners=\"2\" statistics=\"true\"> <!-- Additional configuration information here --> </distributed-cache> </cache-container>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/configure_distribution_mode
Chapter 4. Uninstalling OpenShift Data Foundation from external storage system
Chapter 4. Uninstalling OpenShift Data Foundation from external storage system Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful Note The uninstall.ocs.openshift.io/cleanup-policy is not applicable for external mode. The below table provides information on the different values that can used with these annotations: Table 4.1. uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pauses the uninstall process until the PVCs and the OBCs are removed by the administrator/user mode forced No Rook and NooBaa proceeds with uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist respectively You can change the uninstall mode by editing the value of the annotation by using the following commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits till all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you may set the uninstall mode annotation to "forced" and skip this step. Doing so will result in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. Removing OpenShift Container Platform registry from OpenShift Data Foundation Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. Removing the cluster logging operator from OpenShift Data Foundation Delete other PVCs and OBCs provisioned using OpenShift Data Foundation. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation. Delete the OBCs. Delete the PVCs. Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster. 
Delete the Storage Cluster object and wait for the removal of the associated resources. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Confirm all PVs provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it. Remove CustomResourceDefinitions . To ensure that OpenShift Data Foundation is uninstalled completely: In the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 4.1. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. List the pods consuming the PVC. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using OpenShift Data Foundation PVC. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. 4.2. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see image registry The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry should have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. 4.3. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs. <pvc-name> Is the name of the PVC 4.4. 
Removing external IBM FlashSystem secret You need to clean up the FlashSystem secret from OpenShift Data Foundation while uninstalling. This secret is created when you configure the external IBM FlashSystem Storage. For more information, see Creating an OpenShift Data Foundation Cluster for external IBM FlashSystem storage . Procedure Remove the IBM FlashSystem secret by using the following command:
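As a convenience for the earlier step that deletes PVs left in the Released state, the following one-liner is a sketch only. It assumes a Bash shell and the standard jsonpath support in the oc client, and it deletes every Released PV in the cluster, so review the list before running it.

oc get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name}{"\n"}{end}' | xargs -r oc delete pv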
[ "oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated", "oc get volumesnapshot --all-namespaces", "oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>", "#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done", "oc delete obc <obc name> -n <project name>", "oc delete pvc <pvc name> -n <project-name>", "oc delete -n openshift-storage storagesystem --all --wait=true", "oc project default oc delete project openshift-storage --wait=true --timeout=5m", "oc get project openshift-storage", "oc get pv oc delete pv <pv name>", "oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m", "oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d 
persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", ". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .", ". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .", "oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Terminating 0 10h pod/alertmanager-main-1 3/3 Terminating 0 10h pod/alertmanager-main-2 3/3 Terminating 0 10h pod/cluster-monitoring-operator-84cd9df668-zhjfn 1/1 Running 0 18h pod/grafana-5db6fd97f8-pmtbf 2/2 Running 0 10h pod/kube-state-metrics-895899678-z2r9q 3/3 Running 0 10h pod/node-exporter-4njxv 2/2 Running 0 18h pod/node-exporter-b8ckz 2/2 Running 0 11h pod/node-exporter-c2vp5 2/2 Running 0 18h pod/node-exporter-cq65n 2/2 Running 0 18h pod/node-exporter-f5sm7 2/2 Running 0 11h pod/node-exporter-f852c 2/2 Running 0 18h pod/node-exporter-l9zn7 2/2 Running 0 11h pod/node-exporter-ngbs8 2/2 Running 0 18h pod/node-exporter-rv4v9 2/2 Running 0 18h pod/openshift-state-metrics-77d5f699d8-69q5x 3/3 Running 0 10h pod/prometheus-adapter-765465b56-4tbxx 1/1 Running 0 10h pod/prometheus-adapter-765465b56-s2qg2 1/1 Running 0 10h pod/prometheus-k8s-0 6/6 Terminating 1 9m47s pod/prometheus-k8s-1 6/6 Terminating 1 9m47s pod/prometheus-operator-cbfd89f9-ldnwc 1/1 Running 0 43m pod/telemeter-client-7b5ddb4489-2xfpz 3/3 Running 0 10h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0 Bound pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1 Bound pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2 Bound pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0 Bound pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64 40Gi RWO 
ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1 Bound pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h", "oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m", "oc edit configs.imageregistry.operator.openshift.io", ". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .", ". . . storage: emptyDir: {} . . .", "oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m", "oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m", "oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m", "oc delete secret -n openshift-storage ibm-flashsystem-storage" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_in_external_mode/uninstalling-openshift-data-foundation-external-in-external-mode_rhodf
Chapter 23. Virtualization
Chapter 23. Virtualization USB 3.0 host adapter (xHCI) emulation, see the section called "USB 3.0 Support for KVM Guests" Open Virtual Machine Firmware (OVMF), see the section called "Open Virtual Machine Firmware" LPAR Watchdog for IBM System z, see the section called "LPAR Watchdog for IBM System z"
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-tp-virtualization
probe::kprocess.start
probe::kprocess.start Name probe::kprocess.start - Starting new process Synopsis kprocess.start Values None Context Newly created process. Description Fires immediately before a new process begins execution.
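A minimal usage sketch, assuming the systemtap package is installed and the command is run with the required privileges, prints every process as it starts:

stap -e 'probe kprocess.start { printf("%s (pid %d) started\n", execname(), pid()) }'

Because the probe fires in the context of the newly created process, execname() and pid() report the values for that process.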
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-kprocess-start
Chapter 7. MachineConfigNode [machineconfiguration.openshift.io/v1alpha1]
Chapter 7. MachineConfigNode [machineconfiguration.openshift.io/v1alpha1] Description MachineConfigNode describes the health of the Machines on the system Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec describes the configuration of the machine config node. status object status describes the last observed state of this machine config node. 7.1.1. .spec Description spec describes the configuration of the machine config node. Type object Required configVersion node pool Property Type Description configVersion object configVersion holds the desired config version for the node targeted by this machine config node resource. The desired version represents the machine config the node will attempt to update to. This gets set before the machine config operator validates the new machine config against the current machine config. node object node contains a reference to the node for this machine config node. pool object pool contains a reference to the machine config pool that this machine config node's referenced node belongs to. 7.1.2. .spec.configVersion Description configVersion holds the desired config version for the node targeted by this machine config node resource. The desired version represents the machine config the node will attempt to update to. This gets set before the machine config operator validates the new machine config against the current machine config. Type object Required desired Property Type Description desired string desired is the name of the machine config that the the node should be upgraded to. This value is set when the machine config pool generates a new version of its rendered configuration. When this value is changed, the machine config daemon starts the node upgrade process. This value gets set in the machine config node spec once the machine config has been targeted for upgrade and before it is validated. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) It may consist of only alphanumeric characters, hyphens (-) and periods (.) and must be at most 253 characters in length. 7.1.3. .spec.node Description node contains a reference to the node for this machine config node. Type object Required name Property Type Description name string name is the object name. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) It may consist of only alphanumeric characters, hyphens (-) and periods (.) and must be at most 253 characters in length. 7.1.4. 
.spec.pool Description pool contains a reference to the machine config pool that this machine config node's referenced node belongs to. Type object Required name Property Type Description name string name is the object name. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) It may consist of only alphanumeric characters, hyphens (-) and periods (.) and must be at most 253 characters in length. 7.1.5. .status Description status describes the last observed state of this machine config node. Type object Required configVersion Property Type Description conditions array conditions represent the observations of a machine config node's current state. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } configVersion object configVersion describes the current and desired machine config for this node. The current version represents the current machine config for the node and is updated after a successful update. The desired version represents the machine config the node will attempt to update to. This desired machine config has been compared to the current machine config and has been validated by the machine config operator as one that is valid and that exists. observedGeneration integer observedGeneration represents the generation observed by the controller. This field is updated when the controller observes a change to the desiredConfig in the configVersion of the machine config node spec. 7.1.6. .status.conditions Description conditions represent the observations of a machine config node's current state. Type array 7.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. 
reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.1.8. .status.configVersion Description configVersion describes the current and desired machine config for this node. The current version represents the current machine config for the node and is updated after a successful update. The desired version represents the machine config the node will attempt to update to. This desired machine config has been compared to the current machine config and has been validated by the machine config operator as one that is valid and that exists. Type object Required desired Property Type Description current string current is the name of the machine config currently in use on the node. This value is updated once the machine config daemon has completed the update of the configuration for the node. This value should match the desired version unless an upgrade is in progress. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) It may consist of only alphanumeric characters, hyphens (-) and periods (.) and must be at most 253 characters in length. desired string desired is the MachineConfig the node wants to upgrade to. This value gets set in the machine config node status once the machine config has been validated against the current machine config. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) It may consist of only alphanumeric characters, hyphens (-) and periods (.) and must be at most 253 characters in length. 7.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1alpha1/machineconfignodes DELETE : delete collection of MachineConfigNode GET : list objects of kind MachineConfigNode POST : create a MachineConfigNode /apis/machineconfiguration.openshift.io/v1alpha1/machineconfignodes/{name} DELETE : delete a MachineConfigNode GET : read the specified MachineConfigNode PATCH : partially update the specified MachineConfigNode PUT : replace the specified MachineConfigNode /apis/machineconfiguration.openshift.io/v1alpha1/machineconfignodes/{name}/status GET : read status of the specified MachineConfigNode PATCH : partially update status of the specified MachineConfigNode PUT : replace status of the specified MachineConfigNode 7.2.1. /apis/machineconfiguration.openshift.io/v1alpha1/machineconfignodes HTTP method DELETE Description delete collection of MachineConfigNode Table 7.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfigNode Table 7.2. HTTP responses HTTP code Reponse body 200 - OK MachineConfigNodeList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfigNode Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body MachineConfigNode schema Table 7.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfigNode schema 201 - Created MachineConfigNode schema 202 - Accepted MachineConfigNode schema 401 - Unauthorized Empty 7.2.2. /apis/machineconfiguration.openshift.io/v1alpha1/machineconfignodes/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the MachineConfigNode HTTP method DELETE Description delete a MachineConfigNode Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfigNode Table 7.9. HTTP responses HTTP code Reponse body 200 - OK MachineConfigNode schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfigNode Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK MachineConfigNode schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfigNode Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body MachineConfigNode schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK MachineConfigNode schema 201 - Created MachineConfigNode schema 401 - Unauthorized Empty 7.2.3. /apis/machineconfiguration.openshift.io/v1alpha1/machineconfignodes/{name}/status Table 7.15. Global path parameters Parameter Type Description name string name of the MachineConfigNode HTTP method GET Description read status of the specified MachineConfigNode Table 7.16. HTTP responses HTTP code Reponse body 200 - OK MachineConfigNode schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfigNode Table 7.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.18. 
HTTP responses HTTP code Reponse body 200 - OK MachineConfigNode schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfigNode Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body MachineConfigNode schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK MachineConfigNode schema 201 - Created MachineConfigNode schema 401 - Unauthorized Empty
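For example, assuming the v1alpha1 API is available on your cluster and <node_name> is the name of one of your nodes, you can compare the desired and current machine config versions reported for that node; this is a sketch only:

oc get machineconfignodes
oc get machineconfignode <node_name> -o jsonpath='{.spec.configVersion.desired}{"\n"}{.status.configVersion.current}{"\n"}'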
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_apis/machineconfignode-machineconfiguration-openshift-io-v1alpha1
Hardware Guide
Hardware Guide Red Hat Ceph Storage 6 Hardware selection recommendations for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/hardware_guide/index
Chapter 8. System and Services
Chapter 8. System and Services systemd systemd is a system and service manager for Linux, and replaces SysV and Upstart used in previous releases of Red Hat Enterprise Linux. systemd is compatible with SysV and Linux Standard Base init scripts. Among other capabilities, systemd offers the following: Aggressive parallelization capabilities; Use of socket and D-Bus activation for starting services; On-demand starting of daemons; Managing of control groups; Creating system state snapshots and restoring the system state. For detailed information about systemd and its configuration, see System Administrator's Guide .
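For example, assuming a unit named sshd.service exists on the system, the same systemctl commands manage native systemd units and services provided by compatible SysV init scripts:

systemctl status sshd.service
systemctl enable sshd.service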
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-system_and_services
Chapter 1. Console APIs
Chapter 1. Console APIs 1.1. ConsoleCLIDownload [console.openshift.io/v1] Description ConsoleCLIDownload is an extension for configuring openshift web console command line interface (CLI) downloads. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.2. ConsoleExternalLogLink [console.openshift.io/v1] Description ConsoleExternalLogLink is an extension for customizing OpenShift web console log links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.3. ConsoleLink [console.openshift.io/v1] Description ConsoleLink is an extension for customizing OpenShift web console links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. ConsoleNotification [console.openshift.io/v1] Description ConsoleNotification is the extension for configuring openshift web console notifications. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. ConsolePlugin [console.openshift.io/v1] Description ConsolePlugin is an extension for customizing OpenShift web console by dynamically loading code from another service running on the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding user through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.7. ConsoleSample [console.openshift.io/v1] Description ConsoleSample is an extension to customizing OpenShift web console by adding samples. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.8. ConsoleYAMLSample [console.openshift.io/v1] Description ConsoleYAMLSample is an extension for customizing OpenShift web console YAML samples. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object
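As an illustration of one of these extensions, the following ConsoleLink sketch adds an entry to the web console help menu. The field names follow the ConsoleLink schema, while the metadata name, URL, and link text are hypothetical values:

apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-doc-link
spec:
  href: 'https://docs.example.com'
  location: HelpMenu
  text: Example documentation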
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/console_apis/console-apis
Chapter 12. Configuring subsystem logs
Chapter 12. Configuring subsystem logs This chapter describes how to configure and manage the logs recorded by the Certificate System subsystems. For an overview on logs, see 2.3.14 Logs in Chapter 2 Architecture Overview in the Planning, Installation and Deployment Guide (Common Criteria Edition) . For log configuration during the installation and additional information, see Chapter 13 Configuring Logs in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 12.1. Managing logs The Certificate System subsystem log files record events related to operations within that specific subsystem instance. For each subsystem, different logs are kept for issues such as installation, access, and web servers. All subsystems have similar log configuration, options, and administrative paths. 12.1.1. Configuring logs Logs are configured through the subsystem's CS.cfg file. Specialized logs, such as signed audit logs and custom logs, can also be created through the Console or configuration file. Audit logs can be configured through the subsystem Console for the CA, OCSP, TKS, and KRA subsystems. TPS logs are only configured through the configuration file. In the navigation tree of the Configuration tab, select Log . The Log Event Listener Management tab lists the currently configured listeners. To create a new log instance, click Add , and select a module plugin from the list in the Select Log Event Listener Plug-in Implementation window. Set or modify the fields in the Log Event Listener Editor window. The different parameters are listed in the table below. Table 12.1. Log event listener fields Field Description Log Event Listener ID Gives the unique name that identifies the listener. The names can have any combination of letters (aA to zZ), digits (0 to 9), an underscore (_), and a hyphen (-), but it cannot contain other characters or spaces. type Gives the type of log file, for example signedAudit . enabled Sets whether the log is active. Only enabled logs actually record events. The value is either true or false . level Sets the log level in the text field. The level must be manually entered in the field; there is no selection menu. The choices are Debug , Information , Warning , Failure , Misconfiguration , Catastrophe , and Security . For more information, see 13.1.2 Log Levels (Message Categories) in the Planning, Installation and Deployment Guide (Common Criteria Edition) . fileName Gives the full path, including the file name, to the log file. The subsystem user should have read/write permission to the file. bufferSize Sets the buffer size in kilobytes (KB) for the log. Once the buffer reaches this size, the contents of the buffer are flushed out and copied to the log file. The default size is 512 KB. For more information on buffered logging, see 13.1.3 Buffered and unbuffered logging in the Planning, Installation and Deployment Guide (Common Criteria Edition) . flushInterval Sets the amount of time before the contents of the buffer are flushed out and added to the log file. The default interval is 5 seconds. maxFileSize Sets the size, in kilobytes (KB), a log file can become before it is rotated. Once it reaches this size, the file is copied to a rotated file, and a new log file is started. For more information on log file rotation, see 13.1.4 Log file rotation in the Planning, Installation and Deployment Guide (Common Criteria Edition) . The default size is 2000 KB. rolloverInterval Sets the frequency for the server to rotate the active log file. The available options are hourly, daily, weekly, monthly, and yearly.
The default value is 2592000, which represents one month in seconds. For more information, see 13.1.4 Log file rotation in the Planning, Installation and Deployment Guide (Common Criteria Edition) . During post-installation configuration, you have the opportunity to make some configuration changes directly by updating the configuration files prior to deployment. Log configuration is one of those operations. For instructions on how to configure logs by editing the CS.cfg file or running the pki-server command, see Chapter 13 Configuring Logs in the CS.cfg File in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 12.1.2. Managing audit logs The audit log contains records for events that have been set up as recordable events. If the logSigning attribute is set to true , the audit log is signed with a log signing certificate belonging to the server. This certificate can be used by auditors to verify that the log has not been tampered with. By default, regular audit logs are located in the /var/log/pki/instance_name/subsystem_name/ directory with other types of logs, while signed audit logs are written to /var/log/pki/instance_name/subsystem_name/signedAudit/ . The default location for logs can be changed by modifying the configuration. Note Signed audit logs are optional. To enable them, refer to Section 12.1.2.3, "Configuring a signed audit log in the console" . The signed audit log creates a log recording system events, and the events are selected from a list of potential events. When enabled, signed audit logs record a verbose set of messages about the selected event activity. Signed audit logs are configured by default when the instance is first created, but it is possible to configure signed audit logs after installation. (See Section 12.1.2.2, "Enabling signed audit logging after installation" .) It is also possible to edit the configuration or change the signing certificates after configuration, as covered in Section 12.1.2.3, "Configuring a signed audit log in the console" . 12.1.2.1. A list of audit events For a list of audit events in Certificate System, see Appendix E, Audit events . 12.1.2.2. Enabling signed audit logging after installation By default, audit logging is enabled upon installation. However, you need to enable log signing manually after installation. See "Enabling signed audit logging" in the Installation Guide. 12.1.2.3. Configuring a signed audit log in the console Signed audit logs are configured by default when the instance is first created, but it is possible to edit the configuration or change the signing certificates after configuration. Note Provide enough space in the file system for the signed audit logs, since they can be large. A log is set to a signed audit log by setting the logSigning parameter to enable and providing the nickname of the certificate used to sign the log. A special log signing certificate is created when the subsystems are first configured. Only a user with auditor privileges can access and view a signed audit log. Auditors can use the AuditVerify tool to verify that signed audit logs have not been tampered with. The signed audit log is created and enabled when the subsystem is configured, but it needs additional configuration to begin creating and signing audit logs. Open the Console. Note To create or configure the audit log by editing the CS.cfg file, see Chapter 13 Configuring Logs in the CS.cfg File in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 
In the navigation tree of the Configuration tab, select Log . In the Log Event Listener Management tab, select the SignedAudit entry. Click Edit/View . There are three fields that must be reset in the Log Event Listener Editor window. Fill in the signedAuditCertNickname . This is the nickname of the certificate used to sign audit logs. An audit signing certificate is created when the subsystem is configured; it has a nickname like auditSigningCert cert-instance_name subsystem_name . Note To get the audit signing certificate nickname, list the certificates in the subsystem's certificate database using certutil . For example: Set the logSigning field to true to enable signed logging. Set any events which are logged to the audit log. Appendix E, Audit events lists the loggable events. Log events are separated by commas with no spaces. Set any other settings for the log, such as the file name, the log level, the file size, or the rotation schedule. Note By default, regular audit logs are located in the /var/log/pki/instance_name/subsystem_name/ directory with other types of logs, while signed audit logs are written to /var/log/pki/instance_name/subsystem_name/signedAudit/ . The default location for logs can be changed by modifying the configuration. Save the log configuration. After enabling signed audit logging, assign auditor users by creating the user and assigning that entry to the auditor group. Members of the auditor group are the only users who can view and verify the signed audit log. See Section 11.3.2.1, "Creating role users" for details about setting up auditors. Auditors can verify logs by using the AuditVerify tool. See the AuditVerify(1) man page for details about using this tool. 12.1.2.4. Handling audit logging failures There are events that could cause the audit logging function to fail so that events cannot be written to the log. For example, audit logging can fail when the file system containing the audit log file is full or when the file permissions for the log file are accidentally changed. If audit logging fails, the Certificate System instance shuts down in the following manner. Servlets are disabled and will not process new requests. All pending and new requests are killed. The subsystem is shut down. When this happens, administrators and auditors should work together with the operating system administrator to resolve the disk space or file permission issues. When the IT problem is resolved, the auditor should make sure that the last audit log entries are signed. If not, they should be preserved by manual signing, archived, and removed to prevent audit verification failures in the future. When this is completed, the administrators can restart the Certificate System. 12.2. Using logs 12.2.1. Displaying and verifying signed audit logs This section explains how a user in the Auditor group displays and verifies signed audit logs. 12.2.1.1. Listing audit logs As a user with auditor privileges, use the pki subsystem -audit-file-find command to list existing audit log files on the server. For example, to list the audit log files on the CA hosted on server.example.com:8443 : The command uses the client certificate with the auditor nickname stored in the ~/.dogtag/nssdb/ directory for authenticating to the CA. For further details about the parameters used in the command and alternative authentication methods, see the pki(1) man page. 12.2.1.2. 
Downloading audit logs As a user with auditor privileges, use the pki subsystem -audit-file-retrieve command to download a specific audit log from the server. For example, to download an audit log file from the CA hosted on server.example.com : Optionally, list the available log files on the CA. See Section 12.2.1.1, "Listing audit logs" . Download the log file. For example, to download the ca_audit file: The command uses the client certificate with the auditor nickname stored in the ~/.dogtag/nssdb/ directory for authenticating to the CA. For further details about the parameters used in the command and alternative authentication methods, see the pki(1) man page. After downloading a log file, you can search for specific log entries, for example, using the grep utility: 12.2.1.3. Verifying signed audit logs If audit log signing is enabled, users with auditor privileges can verify the logs: Initialize the NSS database and import the CA certificate. For details, see Section 2.5.1.1, "Initializing the pki CLI" and 10.5 Importing a certificate into an NSS Database in the Planning, Installation and Deployment Guide (Common Criteria Edition) . If the audit signing certificate does not exist in the PKI client database, import it: Search the audit signing certificate for the subsystem logs you want to verify. For example: Import the audit signing certificate into the PKI client: Download the audit logs. See Section 12.2.1.2, "Downloading audit logs" . Verify the audit logs. Create a text file that contains a list of the audit log files you want to verify in chronological order. For example: Use the AuditVerify utility to verify the signatures. For example: For further details about using AuditVerify , see the AuditVerify(1) man page. 12.2.1.4. Viewing signed audit logs using pkiconsole Log into the administrative console using Auditor user: Select Status in the right navigation menu: Select SignedAudit from the log dropdown: Select the audit event and click on View: 12.2.2. Viewing non-audit logs To troubleshoot the subsystem for a pki administrator, check the error or informational messages that the server has logged. Examining the log files can also monitor many aspects of the server's operation. For logs that are not audit logs, such as debug logs under /var/lib/pki/ <instance>/<subsystem type> /logs , contact your operating system administrators to share the non-audit logs with you. 12.2.3. Displaying OS-level audit logs Note To see operating system-level audit logs using the instructions below, the auditd logging framework must be configured per 13.2.1 Enabling OS-level Audit Logs in the Planning, Installation and Deployment Guide (Common Criteria Edition) . To display operating system-level access logs, use the ausearch utility as root or as a privileged user with the sudo utility. 12.2.3.1. Displaying audit log deletion events Since these events are keyed (with rhcs_audit_deletion ), use the -k parameter to find events matching that key: 12.2.3.2. Displaying access to the NSS database for secret and private keys Since these events are keyed (with rhcs_audit_nssdb ), use the -k parameter to find events matching that key: 12.2.3.3. Displaying time change events Since these events are keyed (with rhcs_audit_time_change ), use the -k parameter to find events matching that key: 12.2.3.4. Displaying package update events Since these events are a typed message (of type SOFTWARE_UPDATE ), use the -m parameter to find events matching that type: 12.2.3.5. 
Displaying changes to the PKI configuration Since these events are keyed (with rhcs_audit_config ), use the -k parameter to find events matching that key: 12.2.4. Smart card error codes Smart cards can report certain error codes to the TPS; these are recorded in the TPS's debug log file, depending on the cause for the message. Table 12.2. Smart card error codes Return Code Description General Error Codes 6400 No specific diagnosis 6700 Wrong length in Lc 6982 Security status not satisfied 6985 Conditions of use not satisfied 6a86 Incorrect P1 P2 6d00 Invalid instruction 6e00 Invalid class Install Load Errors 6581 Memory Failure 6a80 Incorrect parameters in data field 6a84 Not enough memory space 6a88 Referenced data not found Delete Errors 6200 Application has been logically deleted 6581 Memory failure 6985 Referenced data cannot be deleted 6a88 Referenced data not found 6a82 Application not found 6a80 Incorrect values in command data Get Data Errors 6a88 Referenced data not found Get Status Errors 6310 More data available 6a88 Referenced data not found 6a80 Incorrect values in command data Load Errors 6581 Memory failure 6a84 Not enough memory space 6a86 Incorrect P1/P2 6985 Conditions of use not satisfied
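As a quick reference, the audit-log verification workflow from Section 12.2.1 can be sketched as the following sequence of shell commands. It reuses the commands shown in this chapter and assumes the NSS client database has already been initialized as described in Section 12.2.1.3; the host name, auditor nickname, certificate serial number, and log file name are illustrative and should be adapted to your deployment.

pki -h server.example.com -p 8443 -n auditor ca-audit-file-find                      # list the available audit log files
pki -U https://server.example.com:8443 -n auditor ca-audit-file-retrieve ca_audit    # download a log file
pki ca-cert-find --name "CA Audit Signing Certificate"                               # locate the audit signing certificate
pki client-cert-import "CA Audit Signing Certificate" --serial 0x5 --trust ",,P"     # import it into ~/.dogtag/nssdb/
echo ca_audit > ~/audit.txt                                                          # list the files to verify, oldest first
AuditVerify -d ~/.dogtag/nssdb/ -n "CA Audit Signing Certificate" -a ~/audit.txt     # verify the signatures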
[ "certutil -L -d /var/lib/pki-tomcat/alias Certificate Authority - Example Domain CT,c, subsystemCert cert-pki-tomcat u,u,u Server-Cert cert-pki-tomcat u,u,u auditSigningCert cert-pki-tomcat CA u,u,Pu", "pki -h server.example.com -p 8443 -n auditor ca-audit-file-find ----------------- 3 entries matched ----------------- File name: ca_audit.20170331225716 Size: 2883 File name: ca_audit.20170401001030 Size: 189 File name: ca_audit Size: 6705 ---------------------------- Number of entries returned 3 ----------------------------", "pki -U https://server.example.com:8443 -n auditor ca-audit-file-retrieve ca_audit", "grep \"\\[AuditEvent=ACCESS_SESSION_ESTABLISH\\]\" log_file", "pki ca-cert-find --name \"CA Audit Signing Certificate\" --------------- 1 entries found --------------- Serial Number: 0x5 Subject DN: CN=CA Audit Signing Certificate,O=EXAMPLE Status: VALID Type: X.509 version 3 Key Algorithm: PKCS #1 RSA with 2048-bit key Not Valid Before: Fri Jul 08 03:56:08 CEST 2016 Not Valid After: Thu Jun 28 03:56:08 CEST 2018 Issued On: Fri Jul 08 03:56:08 CEST 2016 Issued By: system ---------------------------- Number of entries returned 1 ----------------------------", "pki client-cert-import \"CA Audit Signing Certificate\" --serial 0x5 --trust \",,P\" --------------------------------------------------- Imported certificate \"CA Audit Signing Certificate\" ---------------------------------------------------", "cat > ~/audit.txt << EOF ca_audit.20170331225716 ca_audit.20170401001030 ca_audit EOF", "AuditVerify -d ~/.dogtag/nssdb/ -n \"CA Audit Signing Certificate\" -a ~/audit.txt Verification process complete. Valid signatures: 10 Invalid signatures: 0", "pkiconsole -d /root/.dogtag/pki_ecc_bootstrap/certs_db/ -n ecc_SubCA_AuditV https://rhcs10.example.com:21443/ca", "ausearch -k rhcs_audit_deletion", "ausearch -k rhcs_audit_nssdb", "ausearch -k rhcs_audit_time_change", "ausearch -m SOFTWARE_UPDATE", "ausearch -k rhcs_audit_config" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/logs
Chapter 2. Installing Dev Spaces
Chapter 2. Installing Dev Spaces This section contains instructions to install Red Hat OpenShift Dev Spaces. You can deploy only one instance of OpenShift Dev Spaces per cluster. Section 2.1.2, "Installing Dev Spaces on OpenShift using CLI" Section 2.1.3, "Installing Dev Spaces on OpenShift using the web console" Section 2.1.4, "Installing Dev Spaces in a restricted environment" 2.1. Installing Dev Spaces in the cloud Deploy and run Red Hat OpenShift Dev Spaces in the cloud. Prerequisites A OpenShift cluster to deploy OpenShift Dev Spaces on. dsc : The command line tool for Red Hat OpenShift Dev Spaces. See: Section 1.2, "Installing the dsc management tool" . 2.1.1. Deploying OpenShift Dev Spaces in the cloud Follow the instructions below to start the OpenShift Dev Spaces Server in the cloud using the dsc tool. Section 2.1.2, "Installing Dev Spaces on OpenShift using CLI" Section 2.1.3, "Installing Dev Spaces on OpenShift using the web console" Section 2.1.4, "Installing Dev Spaces in a restricted environment" https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.14/html-single/user_guide/index#installing-che-on-microsoft-azure 2.1.2. Installing Dev Spaces on OpenShift using CLI You can install OpenShift Dev Spaces on OpenShift. Prerequisites OpenShift Container Platform An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . dsc . See: Section 1.2, "Installing the dsc management tool" . Procedure Optional: If you previously deployed OpenShift Dev Spaces on this OpenShift cluster, ensure that the OpenShift Dev Spaces instance is removed: Create the OpenShift Dev Spaces instance: Verification steps Verify the OpenShift Dev Spaces instance status: Navigate to the OpenShift Dev Spaces cluster instance: 2.1.3. Installing Dev Spaces on OpenShift using the web console If you have trouble installing OpenShift Dev Spaces on the command line , you can install it through the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . For a repeat installation on the same OpenShift cluster: you uninstalled the OpenShift Dev Spaces instance according to Chapter 8, Uninstalling Dev Spaces . Procedure In the Administrator view of the OpenShift web console, go to Operators OperatorHub and search for Red Hat OpenShift Dev Spaces . Install the Red Hat OpenShift Dev Spaces Operator. Tip See Installing from OperatorHub using the web console . Caution The Red Hat OpenShift Dev Spaces Operator depends on the Dev Workspace Operator. If you install the Red Hat OpenShift Dev Spaces Operator manually to a non-default namespace, ensure that the Dev Workspace Operator is also installed in the same namespace. This is required as the Operator Lifecycle Manager will attempt to install the Dev Workspace Operator as a dependency within the Red Hat OpenShift Dev Spaces Operator namespace, potentially resulting in two conflicting installations of the Dev Workspace Operator if the latter is installed in a different namespace. Caution If you want to onboard Web Terminal Operator on the cluster make sure to use the same installation namespace as Red Hat OpenShift Dev Spaces Operator since both depend on Dev Workspace Operator. 
Web Terminal Operator, Red Hat OpenShift Dev Spaces Operator, and Dev Workspace Operator must be installed in the same namespace. Create the openshift-devspaces project in OpenShift as follows: Go to Operators Installed Operators Red Hat OpenShift Dev Spaces instance Specification Create CheCluster YAML view . In the YAML view , replace namespace: openshift-operators with namespace: openshift-devspaces . Select Create . Tip See Creating applications from installed Operators . Verification In Red Hat OpenShift Dev Spaces instance Specification , go to devspaces , landing on the Details tab. Under Message , check that there is None , which means no errors. Under Red Hat OpenShift Dev Spaces URL , wait until the URL of the OpenShift Dev Spaces instance appears, and then open the URL to check the OpenShift Dev Spaces dashboard. In the Resources tab, view the resources for the OpenShift Dev Spaces deployment and their status. 2.1.4. Installing Dev Spaces in a restricted environment On an OpenShift cluster operating in a restricted network, public resources are not available. However, deploying OpenShift Dev Spaces and running workspaces requires the following public resources: Operator catalog Container images Sample projects To make these resources available, you can replace them with their copy in a registry accessible by the OpenShift cluster. Prerequisites The OpenShift cluster has at least 64 GB of disk space. The OpenShift cluster is ready to operate on a restricted network, and the OpenShift control plane has access to the public internet. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . An active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication . opm . See Installing the opm CLI . jq . See Downloading jq . podman . See Podman Installation Instructions . skopeo version 1.6 or higher. See Installing Skopeo . An active skopeo session with administrative access to the private Docker registry. Authenticating to a registry , and Mirroring images for a disconnected installation . dsc for OpenShift Dev Spaces version 3.14. See Section 1.2, "Installing the dsc management tool" . Procedure Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh . 1 The private Docker registry where the images will be mirrored Install OpenShift Dev Spaces with the configuration set in the che-operator-cr-patch.yaml during the step: Allow incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user projects. See: Section 3.7.1, "Configuring network policies" . Additional resources Red Hat-provided Operator catalogs Managing custom catalogs 2.1.4.1. Setting up an Ansible sample Follow these steps to use an Ansible sample in restricted environments. Prerequisites Microsoft Visual Studio Code - Open Source IDE A 64-bit x86 system. Procedure Mirror the following images: Configure the cluster proxy to allow access to the following domains: Note Support for the following IDE and CPU architectures is planned for a future release: IDE JetBrains IntelliJ IDEA Community Edition IDE ( Technology Preview ) CPU architectures IBM Power (ppc64le) IBM Z (s390x) 2.2. 
Finding the fully qualified domain name (FQDN) You can get the fully qualified domain name (FQDN) of your organization's instance of OpenShift Dev Spaces on the command line or in the OpenShift web console. Tip You can find the FQDN for your organization's OpenShift Dev Spaces instance in the Administrator view of the OpenShift web console as follows. Go to Operators Installed Operators Red Hat OpenShift Dev Spaces instance Specification devspaces Red Hat OpenShift Dev Spaces URL . Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . Procedure Run the following command: oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'
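As a quick reference, the CLI installation flow from Section 2.1.2 can be sketched as follows; it assumes an active oc session with administrative permissions and dsc already installed.

dsc server:delete                        # remove any previous OpenShift Dev Spaces instance
dsc server:deploy --platform openshift   # create the OpenShift Dev Spaces instance
dsc server:status                        # verify the instance status
dsc dashboard:open                       # open the OpenShift Dev Spaces dashboard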
[ "dsc server:delete", "dsc server:deploy --platform openshift", "dsc server:status", "dsc dashboard:open", "create namespace openshift-devspaces", "bash prepare-restricted-environment.sh --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.16 --devworkspace_operator_version \"v0.28.0\" --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.16\" --prod_operator_package_name \"devspaces\" --prod_operator_bundle_name \"devspacesoperator\" --prod_operator_version \"v3.14.0\" --my_registry \" <my_registry> \" 1", "dsc server:deploy --platform=openshift --olm-channel stable --catalog-source-name=devspaces-disconnected-install --catalog-source-namespace=openshift-marketplace --skip-devworkspace-operator --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml", "ghcr.io/ansible/ansible-workspace-env-reference@sha256:03d7f0fe6caaae62ff2266906b63d67ebd9cf6e4a056c7c0a0c1320e6cfbebce registry.access.redhat.com/ubi8/python-39@sha256:301fec66443f80c3cc507ccaf72319052db5a1dc56deb55c8f169011d4bbaacb", ".ansible.com .ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com", "get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/administration_guide/installing-devspaces
function::linuxmib_filter_key
function::linuxmib_filter_key Name function::linuxmib_filter_key - Default filter function for linuxmib.* probes Synopsis Arguments sk pointer to the struct sock op value to be counted if sk passes the filter Description This function is a default filter function. The user can replace this function with their own. The user-supplied filter function returns an index key based on the values in sk . A return value of 0 means this particular sk should not be counted.
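A minimal sketch of a user-supplied replacement is shown below; the function signature follows the synopsis above, while the key values chosen (0 and 1) are purely illustrative.

function linuxmib_filter_key:long(sk:long, op:long) {
    # Hypothetical filter: ignore null sockets, count everything else under key 1.
    if (sk == 0) return 0
    return 1
}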
[ "linuxmib_filter_key:long(sk:long,op:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-linuxmib-filter-key
Chapter 16. Disk Quotas
Chapter 16. Disk Quotas Disk space can be restricted by implementing disk quotas, which alert a system administrator before a user consumes too much disk space or a partition becomes full. Disk quotas can be configured for individual users as well as user groups. This makes it possible to manage the space allocated for user-specific files (such as email) separately from the space allocated to the projects a user works on (assuming the projects are given their own groups). In addition, quotas can be set not just to control the number of disk blocks consumed but to control the number of inodes (data structures that contain information about files in UNIX file systems). Because inodes are used to contain file-related information, this allows control over the number of files that can be created. The quota RPM must be installed to implement disk quotas. 16.1. Configuring Disk Quotas To implement disk quotas, use the following steps: Enable quotas per file system by modifying the /etc/fstab file. Remount the file system(s). Create the quota database files and generate the disk usage table. Assign quota policies. Each of these steps is discussed in detail in the following sections. 16.1.1. Enabling Quotas As root, using a text editor, edit the /etc/fstab file. Example 16.1. Edit /etc/fstab For example, to use the text editor vim , type the following: Add the usrquota and/or grpquota options to the file systems that require quotas: Example 16.2. Add quotas In this example, the /home file system has both user and group quotas enabled. Note The following examples assume that a separate /home partition was created during the installation of Red Hat Enterprise Linux. The root ( / ) partition can be used for setting quota policies in the /etc/fstab file. 16.1.2. Remounting the File Systems After adding the usrquota and/or grpquota options, remount each file system whose fstab entry has been modified. If the file system is not in use, run the following commands: umount /mount-point For example, umount /work . mount /file-system /mount-point For example, mount /dev/vdb1 /work . If the file system is currently in use, the easiest method for remounting the file system is to reboot the system. 16.1.3. Creating the Quota Database Files After each quota-enabled file system is remounted, run the quotacheck command. The quotacheck command examines quota-enabled file systems and builds a table of the current disk usage per file system. The table is then used to update the operating system's copy of disk usage. In addition, the file system's disk quota files are updated. To create the quota files ( aquota.user and aquota.group ) on the file system, use the -c option of the quotacheck command. Example 16.3. Create quota files For example, if user and group quotas are enabled for the /home file system, create the files in the /home directory: The -c option specifies that the quota files should be created for each file system with quotas enabled, the -u option specifies to check for user quotas, and the -g option specifies to check for group quotas. If neither the -u nor -g option is specified, only the user quota file is created. If only -g is specified, only the group quota file is created. 
After the files are created, run the following command to generate the table of current disk usage per file system with quotas enabled: The options used are as follows: a Check all quota-enabled, locally-mounted file systems v Display verbose status information as the quota check proceeds u Check user disk quota information g Check group disk quota information After quotacheck has finished running, the quota files corresponding to the enabled quotas (user and/or group) are populated with data for each quota-enabled locally-mounted file system such as /home . 16.1.4. Assigning Quotas per User The last step is assigning the disk quotas with the edquota command. To configure the quota for a user, as root in a shell prompt, execute the command: Perform this step for each user who needs a quota. For example, if a quota is enabled in /etc/fstab for the /home partition ( /dev/VolGroup00/LogVol02 in the example below) and the command edquota testuser is executed, the following is shown in the editor configured as the default for the system: Note The text editor defined by the EDITOR environment variable is used by edquota . To change the editor, set the EDITOR environment variable in your ~/.bash_profile file to the full path of the editor of your choice. The first column is the name of the file system that has a quota enabled for it. The second column shows how many blocks the user is currently using. The next two columns are used to set soft and hard block limits for the user on the file system. The inodes column shows how many inodes the user is currently using. The last two columns are used to set the soft and hard inode limits for the user on the file system. The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once this limit is reached, no further disk space can be used. The soft block limit defines the maximum amount of disk space that can be used. However, unlike the hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace period . The grace period can be expressed in seconds, minutes, hours, days, weeks, or months. If any of the values are set to 0, that limit is not set. In the text editor, change the desired limits. Example 16.4. Change desired limits For example: To verify that the quota for the user has been set, use the command: 16.1.5. Assigning Quotas per Group Quotas can also be assigned on a per-group basis. For example, to set a group quota for the devel group (the group must exist prior to setting the group quota), use the command: This command displays the existing quota for the group in the text editor: Modify the limits, then save the file. To verify that the group quota has been set, use the command: 16.1.6. Setting the Grace Period for Soft Limits If a given quota has soft limits, you can edit the grace period (that is, the amount of time a soft limit can be exceeded) with the following command: This command works on quotas for inodes or blocks, for either users or groups. Important While other edquota commands operate on quotas for a particular user or group, the -t option operates on every file system with quotas enabled.
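Taken together, the procedure in this chapter can be sketched as the following sequence of commands for a /home partition; the device name, user name, and group name are illustrative.

vim /etc/fstab                          # add usrquota,grpquota to the /home entry
umount /home
mount /dev/VolGroup00/LogVol02 /home    # or reboot if /home is in use
quotacheck -cug /home                   # create aquota.user and aquota.group
quotacheck -avug                        # build the current disk usage table
edquota testuser                        # set per-user block and inode limits
quota testuser                          # verify the user quota
edquota -g devel                        # set a group quota
quota -g devel                          # verify the group quota
edquota -t                              # adjust the grace period for soft limits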
[ "vim /etc/fstab", "/dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 /dev/VolGroup00/LogVol02 /home ext3 defaults,usrquota,grpquota 1 2 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 . . .", "quotacheck -cug /home", "quotacheck -avug", "edquota username", "Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440436 0 0 37418 0 0", "Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440436 500000 550000 37418 0 0", "quota username Disk quotas for user username (uid 501): Filesystem blocks quota limit grace files quota limit grace /dev/sdb 1000* 1000 1000 0 0 0", "edquota -g devel", "Disk quotas for group devel (gid 505): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440400 0 0 37418 0 0", "quota -g devel", "edquota -t" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-disk-quotas
18.2. Setup
18.2. Setup Download the Fuse Full Install binary from the Red Hat Customer Portal. Export the path to the folder where the CSV files will be placed by running the command: Run the following command in the root directory of the quickstart: Set the FUSE_INSTALL_PATH and FUSE_BINARY_PATH variables in the same shell: If running the camel-jbossdatagrid-fuse quickstart in JBoss Fuse 6.2.1, the following changes to the setupEverythingOnFuse.sh script must be made; otherwise, proceed to the next step: Change the version of Fuse being exported to reference the 6.2.1 component. Update the container name to child1 . Once the environment variables are set, run the following from the root directory of the quickstart: After the script completes, confirm that the Fuse Hawtio Console can be accessed without error. This console, by default, runs at http://127.0.0.1:8181/hawtio/index.html#/login ; the username and password are both admin . Confirm that both the child1 and child2 containers were created by accessing Fuse Fabric at http://127.0.0.1:8181/hawtio/index.html#/fabric/containers . Both containers should be highlighted in green to indicate they are ready.
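Put together, the setup might look like the following shell sketch; every path is a placeholder for your environment, and note that Bash requires no spaces around the equals sign in the export statements.

export incomingFolderPath=/path/to/csv-folder
mvn clean install -DincomingFolderPath=$incomingFolderPath
export FUSE_INSTALL_PATH=/path/to/fuse-install-dir
export FUSE_BINARY_PATH=/path/to/jboss-fuse-full.zip
./setupEverythingOnFuse.sh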
[ "export incomingFolderPath=[Full path to the CSV folder]", "mvn clean install -DincomingFolderPath=USDincomingFolderPath", "export FUSE_INSTALL_PATH = [Full path to the folder where Fuse will be installed] export FUSE_BINARY_PATH = [Full path to the Fuse zip file downloaded in step 1]", "Original line with old version export FUSE_VERSION=jboss-fuse-6.2.0.redhat-133 Updated line for Fuse 6.2.1: export FUSE_VERSION=jboss-fuse-6.2.1.redhat-084 Original line exporting the profile #sh client -r 2 -d 10 \"fabric:container-add-profile child demo-local_producer\" > /dev/null 2>&1 Updated line for Fuse 6.2.1: sh client -r 2 -d 10 \"fabric:container-add-profile child1 demo-local_producer\" > /dev/null 2>&1", "./setupEverythingOnFuse.sh" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/camel-jboss_data_grid_quickstart_setup
Chapter 1. The LVM Logical Volume Manager
Chapter 1. The LVM Logical Volume Manager This chapter provides a summary of the features of the LVM logical volume manager that are new since the initial release of Red Hat Enterprise Linux 7. This chapter also provides a high-level overview of the components of the Logical Volume Manager (LVM). 1.1. New and Changed Features This section lists features of the LVM logical volume manager that are new since the initial release of Red Hat Enterprise Linux 7. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1 Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes. The documentation for thinly-provisioned volumes and thinly-provisioned snapshots has been clarified. Additional information about LVM thin provisioning is now provided in the lvmthin (7) man page. For general information on thinly-provisioned logical volumes, see Section 2.3.4, "Thinly-Provisioned Logical Volumes (Thin Volumes)" . For information on thinly-provisioned snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . This manual now documents the lvm dumpconfig command in Section B.2, "The lvmconfig Command" . Note that as of the Red Hat Enterprise Linux 7.2 release, this command was renamed lvmconfig , although the old format continues to work. This manual now documents LVM profiles in Section B.3, "LVM Profiles" . This manual now documents the lvm command in Section 3.6, "Displaying LVM Information with the lvm Command" . In the Red Hat Enterprise Linux 7.1 release, you can control activation of thin pool snapshots with the -k and -K options of the lvcreate and lvchange commands, as documented in Section 4.4.20, "Controlling Logical Volume Activation" . This manual documents the --force argument of the vgimport command. This allows you to import volume groups that are missing physical volumes and subsequently run the vgreduce --removemissing command. For information on the vgimport command, refer to Section 4.3.15, "Moving a Volume Group to Another System" . This manual documents the --mirrorsonly argument of the vgreduce command. This allows you to remove only the logical volumes that are mirror images from a physical volume that has failed. For information on using this option, refer to Section 4.3.15, "Moving a Volume Group to Another System" . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2 Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes. Many LVM processing commands now accept the -S or --select option to define selection criteria for those commands. LVM selection criteria are documented in the new appendix Appendix C, LVM Selection Criteria . This document provides basic procedures for creating cache logical volumes in Section 4.4.8, "Creating LVM Cache Logical Volumes" . The troubleshooting chapter of this document includes a new section, Section 6.7, "Duplicate PV Warnings for Multipathed Devices" . As of the Red Hat Enterprise Linux 7.2 release, the lvm dumpconfig command was renamed lvmconfig , although the old format continues to work. This change is reflected throughout this document. In addition, small technical corrections and clarifications have been made throughout the document. 1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3 Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes. LVM supports RAID0 segment types. 
RAID0 spreads logical volume data across multiple data subvolumes in units of stripe size. For information on creating RAID0 volumes, see Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . You can report information about physical volumes, volume groups, logical volumes, physical volume segments, and logical volume segments all at once with the lvm fullreport command. For information on this command and its capabilities, see the lvm-fullreport (8) man page. LVM supports log reports, which contain a log of operations, messages, and per-object status with complete object identification collected during LVM command execution. For an example of an LVM log report, see Section 4.8.6, "Command Log Reporting (Red Hat Enterprise Linux 7.3 and later)" . For further information about the LVM log report, see the lvmreport (7) man page. You can use the --reportformat option of the LVM display commands to display the output in JSON format. For an example of output displayed in JSON format, see Section 4.8.5, "JSON Format Output (Red Hat Enterprise Linux 7.3 and later)" . You can now configure your system to track thin snapshot and thin logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. For information on historical logical volumes, see Section 4.4.21, "Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later)" . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4 Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes. Red Hat Enterprise Linux 7.4 provides support for RAID takeover and RAID reshaping. For a summary of these features, see Section 4.4.3.12, "RAID Takeover (Red Hat Enterprise Linux 7.4 and Later)" and Section 4.4.3.13, "Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later)" .
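As a brief, hedged illustration of the reporting features described above (the volume group name vg00 is a placeholder, and the exact options should be checked against the man pages cited):

lvm fullreport vg00               # report PVs, VG, LVs, and segments in a single command
lvs --reportformat json vg00      # display the logical volume report in JSON format
lvmconfig                         # display LVM configuration; replaces the older 'lvm dumpconfig'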
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/LVM_overview
14.11.2.4. Auto-starting a storage pool
14.11.2.4. Auto-starting a storage pool The pool-autostart pool-or-uuid [--disable] command enables or disables a storage pool so that it starts automatically at boot. This command requires the pool name or UUID. To disable autostarting of a pool, use the --disable option.
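For example, assuming a pool named guest_images (the name is illustrative):

virsh pool-autostart guest_images             # start the pool automatically at boot
virsh pool-autostart guest_images --disable   # turn autostarting off again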
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-creating_defining_and_starting_storage_pools-auto_starting_a_storage_pool
Chapter 27. File Systems
Chapter 27. File Systems Setting the retry timeout can now prevent autofs from starting without mounts from SSSD When starting the autofs utility, the sss map source was previously sometimes not ready to provide map information, but sss did not return an appropriate error to distinguish between the map does not exist and not available condition. As a consequence, automounting did not work correctly, and autofs started without mounts from SSSD. To fix this bug, autofs retries asking SSSD for the master map when the map does not exist error occurs for a configurable amount of time. Now, you can set the retry timeout to a suitable value so that the master map is read and autofs starts as expected. (BZ# 1101782 ) The autofs package now contains the README.autofs-schema file and an updated schema The samples/autofs.schema distribution file was out of date and incorrect. As a consequence, it is possible that somebody is using an incorrect LDAP schema. However, a change of the schema in use cannot be enforced. With this update: The README.autofs-schema file has been added to describe the problem and recommend which schema to use, if possible. The schema included in the autofs package has been updated to samples/autofs.schema.new . (BZ# 1383910 ) automount no longer needs to be restarted to access maps stored on the NIS server Previously, the autofs utility did not wait for the NIS client service when starting. As a consequence, if the network map source was not available at program start, the master map could not be read, and the automount service had to be restarted to access maps stored on the NIS server. With this update, autofs waits until the master map is available to obtain a startup map. As a result, automount can access the map from the NIS domain, and autofs no longer needs to be restarted on every boot. If the NIS maps are still not available after the configured wait time, the autofs configuration master_wait option might need to be increased. In the majority of cases, the wait time used by the package is sufficient. (BZ#1383194) Checking local mount availability with autofs no longer leads to a lengthy timeout before failing Previously, a server availability probe was not done for mount requests that autofs considered local because a bind mount on the local machine is expected to be available for use. If the bind mount failed, an NFS mount on the local machine was then tried. However, if the NFS server was not running on the local machine, the mount attempt sometimes suffered a lengthy timeout before failing. An availability probe has been added to the case where a bind mount is first tried, but fails, and autofs now falls back to trying to use an NFS server on the local machine. As a result, if a bind mount on the local machine fails, the fallback to trying an NFS mount on the local machine fails quickly if the local NFS server is not running. (BZ# 1420574 ) The journal is marked as idle when mounting a GFS2 file system as read-only Previously, the kernel did not mark the file system journal as idle when mounting a GFS2 file system as read-only. As a consequence, the gfs2_log_flush() function incorrectly tried to write a header block to the journal and a sequence-out-of-order error was logged. A patch has been applied to mark the journal idle when mounting a GFS2 file system as read-only. As a result, the mentioned error no longer occurs in the described scenario. 
(BZ#1213119) The id command no longer shows incorrect UIDs and GIDs When running Red Hat Enterprise Linux on an NFSv4 client connected to an NFSv4 server, the id command showed incorrect UIDs and GIDs after the key expired out of the NFS idmapper keyring. The problem persisted for 5 minutes, until the expired keys were garbage collected, after which the new key was created in the keyring and the id command provided the correct output. With this update, the keyring facility has been fixed, and the id command no longer shows incorrect output under the described circumstances. (BZ#1408330) Labeled NFS is now turned off by default The SELinux labels on a Red Hat Enterprise Linux NFS server are not normally visible to NFS clients. Instead, NFS clients see all files labeled as type nfs_t regardless of what label the files have on the server. Since Red Hat Enterprise Linux 7.3, the NFS server has the ability to communicate individual file labels to clients. Sufficiently recent clients, such as recent Fedora clients, see NFS files labeled with the same labels that those files have on the server. This is useful in certain cases, but it can also lead to unexpected access permission problems on recent clients after a server is upgraded to Red Hat Enterprise Linux 7.3 and later. Note that labeled NFS support is turned off by default on the NFS server. You can re-enable labeled NFS support by using the security_label export option. (BZ# 1406885 ) autofs mounts no longer enter an infinite loop after reaching a shutdown state If an autofs mount reached a shutdown state, and a mount request arrived and was processed before the mount-handling thread read the shutdown notification, the mount-handling thread previously exited without cleaning up the autofs mount. As a consequence, the main program never reached its exit condition and entered an infinite loop, as the autofs-managed mount was left mounted. To fix this bug, the exit condition check now takes place after each request is processed, and cleanup operations are now performed if an autofs mount has reached its shutdown state. As a result, the autofs daemon now exits as expected at shutdown. (BZ#1420584) autofs is now more reliable when handling namespaces Previously, the autofs kernel module was unable to check whether the last component of a path was a mount point in the current namespace, only whether it was a mount point in any namespace. Due to this bug, autofs sometimes incorrectly decided whether a mount point cloned into a propagation private namespace was already present. As a consequence, the automount point failed to be mounted and the error message Too many levels of symbolic links was returned. This happened, for example, when a systemd service that used the PrivateTmp option was restarted while an autofs mount was active. With this update, a namespace-aware mounted check has been added in the kernel. As a result, autofs is now more resilient to cases where a mount namespace that includes autofs mounts has been cloned to a propagation private namespace. For more details, see the KBase article at https://access.redhat.com/articles/3104671 . (BZ#1320588)
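As a hedged illustration of the security_label export option mentioned above, an /etc/exports entry might look like the following; the export path, client specification, and the other options shown are placeholders.

/export  *(rw,sync,security_label)

After editing /etc/exports, apply the change with exportfs -ra.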
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug_fixes_file_systems
Providing feedback on Red Hat Ceph Storage documentation
Providing feedback on Red Hat Ceph Storage documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so, create a Bugzilla ticket: Go to the Bugzilla website. In the Component drop-down, select Documentation . In the Sub-Component drop-down, select the appropriate sub-component. Select the appropriate version of the document. Fill in the Summary and Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Optional: Add an attachment, if any. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.1_release_notes/providing-feedback-on-red-hat-ceph-storage-documentation
Chapter 8. Configuring distributed caches
Chapter 8. Configuring distributed caches Red Hat build of Keycloak is designed for high availability and multi-node clustered setups. The current distributed cache implementation is built on top of Infinispan , a high-performance, distributable in-memory data grid. 8.1. Enable distributed caching When you start Red Hat build of Keycloak in production mode, by using the start command, caching is enabled and all Red Hat build of Keycloak nodes in your network are discovered. By default, caches are using a UDP transport stack so that nodes are discovered using IP multicast transport based on UDP. For most production environments, there are better discovery alternatives to UDP available. Red Hat build of Keycloak allows you to either choose from a set of pre-defined default transport stacks, or to define your own custom stack, as you will see later in this chapter. To explicitly enable distributed infinispan caching, enter this command: bin/kc.[sh|bat] build --cache=ispn When you start Red Hat build of Keycloak in development mode, by using the start-dev command, Red Hat build of Keycloak uses only local caches and distributed caches are completely disabled by implicitly setting the --cache=local option. The local cache mode is intended only for development and testing purposes. 8.2. Configuring caches Red Hat build of Keycloak provides a cache configuration file with sensible defaults located at conf/cache-ispn.xml . The cache configuration is a regular Infinispan configuration file . The following table gives an overview of the specific caches Red Hat build of Keycloak uses. You configure these caches in conf/cache-ispn.xml : Cache name Cache Type Description realms Local Cache persisted realm data users Local Cache persisted user data authorization Local Cache persisted authorization data keys Local Cache external public keys work Replicated Propagate invalidation messages across nodes authenticationSessions Distributed Caches authentication sessions, created/destroyed/expired during the authentication process sessions Distributed Caches user sessions, created upon successful authentication and destroyed during logout, token revocation, or due to expiration clientSessions Distributed Caches client sessions, created upon successful authentication to a specific client and destroyed during logout, token revocation, or due to expiration offlineSessions Distributed Caches offline user sessions, created upon successful authentication and destroyed during logout, token revocation, or due to expiration offlineClientSessions Distributed Caches client sessions, created upon successful authentication to a specific client and destroyed during logout, token revocation, or due to expiration loginFailures Distributed keep track of failed logins, fraud detection actionTokens Distributed Caches action Tokens 8.2.1. Cache types and defaults Local caches Red Hat build of Keycloak caches persistent data locally to avoid unnecessary round-trips to the database. The following data is kept local to each node in the cluster using local caches: realms and related data like clients, roles, and groups. users and related data like granted roles and group memberships. authorization and related data like resources, permissions, and policies. keys Local caches for realms, users, and authorization are configured to hold up to 10,000 entries per default. The local key cache can hold up to 1,000 entries per default and defaults to expire every one hour. 
Therefore, keys are forced to be periodically downloaded from external clients or identity providers. In order to achieve an optimal runtime and avoid additional round-trips to the database, you should consider looking at the configuration for each cache to make sure the maximum number of entries is aligned with the size of your database. The more entries you can cache, the less often the server needs to fetch data from the database. You should evaluate the trade-offs between memory utilization and performance. Invalidation of local caches Local caching improves performance, but adds a challenge in multi-node setups. When one Red Hat build of Keycloak node updates data in the shared database, all other nodes need to be aware of it, so they invalidate that data from their caches. The work cache is a replicated cache and used for sending these invalidation messages. The entries/messages in this cache are very short-lived, and you should not expect this cache to grow in size over time. Authentication sessions Authentication sessions are created whenever a user tries to authenticate. They are automatically destroyed once the authentication process completes or due to reaching their expiration time. The authenticationSessions distributed cache is used to store authentication sessions and any other data associated with them during the authentication process. By relying on a distributable cache, authentication sessions are available to any node in the cluster so that users can be redirected to any node without losing their authentication state. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you are going to avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization. User sessions Once the user is authenticated, a user session is created. The user session tracks your active users and their state so that they can seamlessly authenticate to any application without being asked for their credentials again. For each application the user authenticates with, a client session is created too, so that the server can track the applications the user is authenticated with and their state on a per-application basis. User and client sessions are automatically destroyed whenever the user performs a logout, the client performs a token revocation, or due to reaching their expiration time. The following caches are used to store both user and client sessions: sessions clientSessions By relying on a distributable cache, user and client sessions are available to any node in the cluster so that users can be redirected to any node without losing their state. However, production-ready deployments should always consider session affinity and favor redirecting users to the node where their sessions were initially created. By doing that, you are going to avoid unnecessary state transfer between nodes and improve CPU, memory, and network utilization. As an OpenID Connect Provider, the server is also capable of authenticating users and issuing offline tokens. Similarly to regular user and client sessions, when an offline token is issued by the server upon successful authentication, the server also creates an offline user session and an offline client session. However, due to the nature of offline tokens, offline sessions are handled differently as they are long-lived and should survive a complete cluster shutdown. Because of that, they are also persisted to the database. 
The following caches are used to store offline sessions: offlineSessions offlineClientSessions Upon a cluster restart, offline sessions are lazily loaded from the database and kept in a shared cache using the two caches above. Password brute force detection The loginFailures distributed cache is used to track data about failed login attempts. This cache is needed for the Brute Force Protection feature to work in a multi-node Red Hat build of Keycloak setup. Action tokens Action tokens are used for scenarios when a user needs to confirm an action asynchronously, for example in the emails sent by the forgot password flow. The actionTokens distributed cache is used to track metadata about action tokens. 8.2.2. Configuring caches for availability Distributed caches replicate cache entries on a subset of nodes in a cluster and assigns entries to fixed owner nodes. Each distributed cache has two owners per default, which means that two nodes have a copy of the specific cache entries. Non-owner nodes query the owners of a specific cache to obtain data. When both owner nodes are offline, all data is lost. This situation usually leads to users being logged out at the request and having to log in again. The default number of owners is enough to survive 1 node (owner) failure in a cluster setup with at least three nodes. You are free to change the number of owners accordingly to better fit into your availability requirements. To change the number of owners, open conf/cache-ispn.xml and change the value for owners=<value> for the distributed caches to your desired value. 8.2.3. Specify your own cache configuration file To specify your own cache configuration file, enter this command: bin/kc.[sh|bat] build --cache-config-file=my-cache-file.xml The configuration file is relative to the conf/ directory. 8.2.4. CLI options for remote server For configuration of Red Hat build of Keycloak server for high availability and multi-node clustered setup there was introduced following CLI options cache-remote-host , cache-remote-port , cache-remote-username and cache-remote-password simplifying configuration within the XML file. Once any of declared CLI parameters are present, it is expected there is no configuration related to remote store present in the XML file. 8.3. Transport stacks Transport stacks ensure that distributed cache nodes in a cluster communicate in a reliable fashion. Red Hat build of Keycloak supports a wide range of transport stacks: tcp udp kubernetes ec2 azure google To apply a specific cache stack, enter this command: bin/kc.[sh|bat] build --cache-stack=<stack> The default stack is set to udp when distributed caches are enabled. 8.3.1. Available transport stacks The following table shows transport stacks that are available without any further configuration than using the --cache-stack build option: Stack name Transport protocol Discovery tcp TCP MPING (uses UDP multicast). udp UDP UDP multicast The following table shows transport stacks that are available using the --cache-stack build option and a minimum configuration: Stack name Transport protocol Discovery kubernetes TCP DNS_PING (requires -Djgroups.dns.query=<headless-service-FQDN> to be added to JAVA_OPTS or JAVA_OPTS_APPEND environment variable). 8.3.2. Additional transport stacks The following table shows transport stacks that are supported by Red Hat build of Keycloak, but need some extra steps to work. 
Note that none of these stacks are Kubernetes / OpenShift stacks, so there is no need to enable the google stack if you want to run Red Hat build of Keycloak on top of Google Kubernetes Engine. In that case, use the kubernetes stack. However, when you have a distributed cache setup running on AWS EC2 instances, you need to set the stack to ec2 , because EC2 does not support a default discovery mechanism such as UDP multicast. Stack name Transport protocol Discovery ec2 TCP NATIVE_S3_PING google TCP GOOGLE_PING2 azure TCP AZURE_PING Cloud vendor-specific stacks have additional dependencies for Red Hat build of Keycloak. For more information and links to repositories with these dependencies, see the Infinispan documentation . To provide the dependencies to Red Hat build of Keycloak, put the respective JAR in the providers directory and build Red Hat build of Keycloak by entering this command: bin/kc.[sh|bat] build --cache-stack=<ec2|google|azure> 8.3.3. Custom transport stacks If none of the available transport stacks are sufficient for your deployment, you can change your cache configuration file and define your own transport stack. For more details, see Using inline JGroups stacks . defining a custom transport stack By default, the value set for the cache-stack option takes precedence over the transport stack you define in the cache configuration file. If you are defining a custom stack, make sure the cache-stack option is not used for the custom changes to take effect. 8.4. Securing cache communication The current Infinispan cache implementation should be secured by various security measures such as RBAC, ACLs, and transport stack encryption. JGroups handles all the communication between Red Hat build of Keycloak servers, and it supports Java SSL sockets for TCP communication. Red Hat build of Keycloak uses CLI options to configure the TLS communication without having to create a customized JGroups stack or modify the cache XML file. To enable TLS, cache-embedded-mtls-enabled must be set to true . It requires a keystore with the certificate to use: cache-embedded-mtls-key-store-file sets the path to the keystore, and cache-embedded-mtls-key-store-password sets the password to decrypt it. The truststore contains the valid certificates to accept connections from, and it can be configured with cache-embedded-mtls-trust-store-file (path to the truststore), and cache-embedded-mtls-trust-store-password (password to decrypt it). To restrict unauthorized access, use a self-signed certificate for each Red Hat build of Keycloak deployment. For JGroups stacks with UDP or TCP_NIO2 , see the JGroups Encryption documentation on how to set up the protocol stack. For more information about securing cache communication, see the Infinispan security guide . 8.5. Exposing metrics from caches By default, metrics from caches are not automatically exposed when the metrics are enabled. For more details about how to enable metrics, see Enabling Red Hat build of Keycloak Metrics . To enable global metrics for all caches within the cache-container , you need to change your cache configuration file (for example, conf/cache-ispn.xml ) to enable statistics at the cache-container level as follows: enabling metrics for all caches Similarly, you can enable metrics individually for each cache by enabling statistics as follows: enabling metrics for a specific cache 8.6. Relevant options Value cache 🛠 Defines the cache mechanism for high-availability. 
By default in production mode, an ispn cache is used to create a cluster between multiple server nodes. By default in development mode, a local cache disables clustering and is intended for development and testing purposes. CLI: --cache Env: KC_CACHE ispn (default), local cache-config-file 🛠 Defines the file from which cache configuration should be loaded. The configuration file is relative to the conf/ directory. CLI: --cache-config-file Env: KC_CACHE_CONFIG_FILE cache-embedded-mtls-enabled Encrypts the network communication between Keycloak servers. CLI: --cache-embedded-mtls-enabled Env: KC_CACHE_EMBEDDED_MTLS_ENABLED true , false (default) cache-embedded-mtls-key-store-file The Keystore file path. The Keystore must contain the certificate to be used by the TLS protocol. By default, it looks up cache-mtls-keystore.p12 under the conf/ directory. CLI: --cache-embedded-mtls-key-store-file Env: KC_CACHE_EMBEDDED_MTLS_KEY_STORE_FILE cache-embedded-mtls-key-store-password The password to access the Keystore. CLI: --cache-embedded-mtls-key-store-password Env: KC_CACHE_EMBEDDED_MTLS_KEY_STORE_PASSWORD cache-embedded-mtls-trust-store-file The Truststore file path. It should contain the trusted certificates or the Certificate Authority that signed the certificates. By default, it looks up cache-mtls-truststore.p12 under the conf/ directory. CLI: --cache-embedded-mtls-trust-store-file Env: KC_CACHE_EMBEDDED_MTLS_TRUST_STORE_FILE cache-embedded-mtls-trust-store-password The password to access the Truststore. CLI: --cache-embedded-mtls-trust-store-password Env: KC_CACHE_EMBEDDED_MTLS_TRUST_STORE_PASSWORD cache-remote-host The hostname of the remote server for the remote store configuration. It replaces the host attribute of the remote-server tag in the configuration specified via the XML file (see the cache-config-file option). If the option is specified, cache-remote-username and cache-remote-password are required as well, and the related configuration in the XML file should not be present. CLI: --cache-remote-host Env: KC_CACHE_REMOTE_HOST cache-remote-password The password for the authentication to the remote server for the remote store. It replaces the password attribute of the digest tag in the configuration specified via the XML file (see the cache-config-file option). If the option is specified, cache-remote-host and cache-remote-username are required as well, and the related configuration in the XML file should not be present. CLI: --cache-remote-password Env: KC_CACHE_REMOTE_PASSWORD cache-remote-port The port of the remote server for the remote store configuration. It replaces the port attribute of the remote-server tag in the configuration specified via the XML file (see the cache-config-file option). CLI: --cache-remote-port Env: KC_CACHE_REMOTE_PORT 11222 (default) cache-remote-username The username for the authentication to the remote server for the remote store. It replaces the username attribute of the digest tag in the configuration specified via the XML file (see the cache-config-file option). If the option is specified, cache-remote-host and cache-remote-password are required as well, and the related configuration in the XML file should not be present. CLI: --cache-remote-username Env: KC_CACHE_REMOTE_USERNAME cache-stack 🛠 Defines the default stack to use for cluster communication and node discovery. This option only takes effect if cache is set to ispn . Default: udp. CLI: --cache-stack Env: KC_CACHE_STACK tcp , udp , kubernetes , ec2 , azure , google
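As a minimal sketch of how the options listed above can be combined, the following invocations show a clustered setup that uses the tcp stack and encrypts cache traffic with the embedded mTLS options. The keystore and truststore file names and passwords are placeholders, and the sketch assumes the mTLS options are passed at start time because they are not marked as build options in the list above: bin/kc.[sh|bat] build --cache=ispn --cache-stack=tcp followed by bin/kc.[sh|bat] start --cache-embedded-mtls-enabled=true --cache-embedded-mtls-key-store-file=conf/cache-mtls-keystore.p12 --cache-embedded-mtls-key-store-password=<keystore_password> --cache-embedded-mtls-trust-store-file=conf/cache-mtls-truststore.p12 --cache-embedded-mtls-trust-store-password=<truststore_password> Because the file paths shown match the defaults described above, in practice only the passwords would need to be supplied if the stores are placed under the conf/ directory.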
[ "bin/kc.[sh|bat] build --cache=ispn", "bin/kc.[sh|bat] build --cache-config-file=my-cache-file.xml", "bin/kc.[sh|bat] build --cache-stack=<stack>", "bin/kc.[sh|bat] build --cache-stack=<ec2|google|azure>", "<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> <SSL_KEY_EXCHANGE keystore_name=\"server.jks\" keystore_password=\"password\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"keycloak\"> <transport lock-timeout=\"60000\" stack=\"my-encrypt-udp\"/> </cache-container>", "<cache-container name=\"keycloak\" statistics=\"true\"> </cache-container>", "<local-cache name=\"realms\" statistics=\"true\"> </local-cache>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/caching-
B.15.4. RHBA-2011:0294 - device-mapper-multipath bug fix update
B.15.4. RHBA-2011:0294 - device-mapper-multipath bug fix update Updated device-mapper-multipath packages that resolve an issue are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools to manage multipath devices by giving the "dm-multipath" kernel module instructions on what to do, as well as by managing the creation and removal of partitions for Device-Mapper devices. Bug Fix BZ# 672151 Multipathd caches the value of sysfs attribute lookups for the path devices that make up a multipath device. Previously, these cached values were not removed when the path devices were removed. In addition, in some cases the cache was not helpful and not used. This occasionally caused memory leaks when path devices were removed and restored. With this update, the unnecessary caching has been completely removed and the cached values are now removed when the corresponding path device is removed. Consequently, the occasional memory leaks no longer occur. All device-mapper-multipath users are advised to upgrade to these updated packages, which resolve this issue.
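As a hypothetical example of applying this advisory on a Red Hat Enterprise Linux 6 host, the updated packages can be pulled in with the standard package manager: yum update device-mapper-multipath A plain yum update also picks up the errata once the updated packages are available in the host's configured repositories; the exact package set installed depends on what is already present on the system.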
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2011-0294
Chapter 1. The Red Hat OpenStack Platform Dashboard service (horizon)
Chapter 1. The Red Hat OpenStack Platform Dashboard service (horizon) The Red Hat OpenStack Platform (RHOSP) Dashboard (horizon) is a web-based graphical user interface that you can use to manage RHOSP services. To access the browser dashboard, you must install the Dashboard service, and you must know the dashboard host name, or IP, and login password. The dashboard URL is: 1.1. The Admin tab In the Admin tab you can view usage and manage instances, volumes, flavors, images, projects, users, services, and quotas. Note The Admin tab displays in the main window when you log in as an admin user. The following options are available in the Admin tab: Table 1.1. System Panel Parameter name Description Overview View basic reports. Resource Usage Use the following tabs to view the following usages: Usage Report - View the usage report. Stats - View the statistics of all resources. Hypervisors View the hypervisor summary. Host Aggregates View, create, and edit host aggregates. View the list of availability zones. Instances View, pause, resume, suspend, migrate, soft or hard reboot, and delete running instances that belong to users of some, but not all, projects. Also, view the log for an instance or access an instance with the console. Volumes View, create, edit, and delete volumes, and volume types. Flavors View, create, edit, view extra specifications for, and delete flavors. Flavors are the virtual hardware templates in Red Hat OpenStack Platform (RHOSP). Images View, create, edit properties for, and delete custom images. Networks View, create, edit properties for, and delete networks. Routers View, create, edit properties for, and delete routers. Floating IPs View allocated floating IP addresses for all projects. Defaults View and edit the default quotas (maximum limits) for resources in the environment. Metadata Definitions Import, view, and edit metadata definition namespaces, and associate the metadata definitions with specific resource types. System Information Contains the following tabs: Services - View a list of the services. Compute Services - View a list of all Compute services. Network Agents - View the network agents. Block Storage Services - View a list of all Block Storage services. Orchestration Services - View a list of all Orchestration services. 1.1.1. Viewing allocated floating IP addresses You can use the Floating IPs panel to view a list of allocated floating IP addresses. You can access the same information from the command line with the nova list --all-projects command. 1.2. The Project tab In the Project tab you can view and manage project resources. Set a project as active in Identity > Projects to view and manage resources in that project. The following options are available in the Project tab: Table 1.2. The Compute tab Parameter name Description Overview View reports for the project. Instances View, launch, create a snapshot from, stop, pause, or reboot instances, or connect to them through the console. Volumes Use the following tabs to complete these tasks: Volumes - View, create, edit, and delete volumes. Volume Snapshots - View, create, edit, and delete volume snapshots. Images View images, instance snapshots, and volume snapshots that project users create, and any images that are publicly available. Create, edit, and delete images, and launch instances from images and snapshots. Access & Security Use the following tabs to complete these tasks: Security Groups - View, create, edit, and delete security groups and security group rules. 
Key Pairs - View, create, edit, import, and delete key pairs. Floating IPs - Allocate an IP address to or release it from a project. API Access - View API endpoints, download the OpenStack RC file, download EC2 credentials, and view credentials for the current project user. Table 1.3. The Network tab Parameter name Description Network Topology View the interactive topology of the network. Networks Create and manage public and private networks and subnets. Routers Create and manage routers. Trunks Create and manage trunks. Requires the trunk extension enabled in OpenStack Networking (neutron). Table 1.4. The Object Store tab Parameter name Description Containers Create and manage storage containers. A container is a storage compartment for data, and provides a way for you to organize your data. It is similar to the concept of a Linux file directory, but it cannot be nested. Table 1.5. The Orchestration tab Parameter name Description Stacks Orchestrate multiple composite cloud applications with templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API. 1.3. The Identity tab In the Identity tab you can view and manage projects and users. The following options are available in the Identity tab: Projects - View, create, edit, and delete projects, view project usage, add or remove users as project members, modify quotas, and set an active project. Users - View, create, edit, disable, and delete users, and change user passwords. The Users tab is available when you log in as an admin user. For more information about managing your cloud with the Red Hat OpenStack Platform dashboard, see the following guides: Creating and Managing Instances Creating and Managing Images Networking guide Users and Identity Management guide
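For administrators who prefer the command line over the dashboard, a hedged sketch of an equivalent CLI workflow, assuming the OpenStack RC file mentioned under API Access has been downloaded as overcloudrc, is: source overcloudrc then openstack server list and openstack floating ip list The RC file name and the exact commands available depend on the RHOSP release and the services deployed, so treat this as an illustrative sketch rather than a definitive procedure.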
[ "http:// HOSTNAME /dashboard/" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/introduction_to_the_openstack_dashboard/the-openstack-dashboard_osp
Chapter 3. Getting started
Chapter 3. Getting started 3.1. Getting started with OpenShift Virtualization You can explore the features and functionalities of OpenShift Virtualization by installing and configuring a basic environment. Note Cluster configuration procedures require cluster-admin privileges. 3.1.1. Planning and installing OpenShift Virtualization Plan and install OpenShift Virtualization on an OpenShift Container Platform cluster: Plan your bare metal cluster for OpenShift Virtualization . Prepare your cluster for OpenShift Virtualization . Install the OpenShift Virtualization Operator . Install the virtctl command line interface (CLI) tool . Planning and installation resources About storage volumes for virtual machine disks . Using a CSI-enabled storage provider . Configuring local storage for virtual machines . Installing the Kubernetes NMState Operator . Specifying nodes for virtual machines . Virtctl commands . 3.1.2. Creating and managing virtual machines Create a virtual machine (VM): Create a VM from a Red Hat image . You can create a VM by using a Red Hat template or an instance type . Create a VM from a custom image . You can create a VM by importing a custom image from a container registry or a web page, by uploading an image from your local machine, or by cloning a persistent volume claim (PVC). Connect a VM to a secondary network: Linux bridge network . Open Virtual Network (OVN)-Kubernetes secondary network . Single Root I/O Virtualization (SR-IOV) network . Note VMs are connected to the pod network by default. Connect to a VM: Connect to the serial console or VNC console of a VM. Connect to a VM by using SSH . Connect to the desktop viewer for Windows VMs . Manage a VM: Manage a VM by using the web console . Manage a VM by using the virtctl CLI tool . Export a VM . 3.1.3. Next steps Review postinstallation configuration options . Configure storage options and automatic boot source updates . Learn about monitoring and health checks . Learn about live migration . Back up and restore VMs by using the OpenShift API for Data Protection (OADP) . Tune and scale your cluster . 3.2. Using the CLI tools You can manage OpenShift Virtualization resources by using the virtctl command line tool. You can access and modify virtual machine (VM) disk images by using the libguestfs command line tool. You deploy libguestfs by using the virtctl libguestfs command. 3.2.1. Installing virtctl To install virtctl on Red Hat Enterprise Linux (RHEL) 9, Linux, Windows, and macOS operating systems, you download and install the virtctl binary file. To install virtctl on RHEL 8, you enable the OpenShift Virtualization repository and then install the kubevirt-virtctl package. 3.2.1.1. Installing the virtctl binary on RHEL 9, Linux, Windows, or macOS You can download the virtctl binary for your operating system from the OpenShift Container Platform web console and then install it. Procedure Navigate to the Virtualization Overview page in the web console. Click the Download virtctl link to download the virtctl binary for your operating system. Install virtctl : For RHEL 9 and other Linux operating systems: Decompress the archive file: USD tar -xvf <virtctl-version-distribution.arch>.tar.gz Run the following command to make the virtctl binary executable: USD chmod +x <path/virtctl-file-name> Move the virtctl binary to a directory in your PATH environment variable. 
You can check your path by running the following command: USD echo USDPATH Set the KUBECONFIG environment variable: USD export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig For Windows: Decompress the archive file. Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client. Move the virtctl binary to a directory in your PATH environment variable. You can check your path by running the following command: C:\> path For macOS: Decompress the archive file. Move the virtctl binary to a directory in your PATH environment variable. You can check your path by running the following command: echo USDPATH 3.2.1.2. Installing the virtctl RPM on RHEL 8 You can install the virtctl RPM package on Red Hat Enterprise Linux (RHEL) 8 by enabling the OpenShift Virtualization repository and installing the kubevirt-virtctl package. Prerequisites Each host in your cluster must be registered with Red Hat Subscription Manager (RHSM) and have an active OpenShift Container Platform subscription. Procedure Enable the OpenShift Virtualization repository by using the subscription-manager CLI tool to run the following command: # subscription-manager repos --enable cnv-4.16-for-rhel-8-x86_64-rpms Install the kubevirt-virtctl package by running the following command: # yum install kubevirt-virtctl 3.2.2. virtctl commands The virtctl client is a command-line utility for managing OpenShift Virtualization resources. Note The virtual machine (VM) commands also apply to virtual machine instances (VMIs) unless otherwise specified. 3.2.2.1. virtctl information commands You use virtctl information commands to view information about the virtctl client. Table 3.1. Information commands Command Description virtctl version View the virtctl client and server versions. virtctl help View a list of virtctl commands. virtctl <command> -h|--help View a list of options for a specific command. virtctl options View a list of global command options for any virtctl command. 3.2.2.2. VM information commands You can use virtctl to view information about virtual machines (VMs) and virtual machine instances (VMIs). Table 3.2. VM information commands Command Description virtctl fslist <vm_name> View the file systems available on a guest machine. virtctl guestosinfo <vm_name> View information about the operating systems on a guest machine. virtctl userlist <vm_name> View the logged-in users on a guest machine. 3.2.2.3. VM manifest creation commands You can use virtctl create commands to create manifests for virtual machines, instance types, and preferences. Table 3.3. VM manifest creation commands Command Description virtctl create vm Create a VirtualMachine (VM) manifest. virtctl create vm --name <vm_name> Create a VM manifest, specifying a name for the VM. virtctl create vm --instancetype <instancetype_name> Create a VM manifest that uses an existing cluster-wide instance type. virtctl create vm --instancetype=virtualmachineinstancetype/<instancetype_name> Create a VM manifest that uses an existing namespaced instance type. virtctl create instancetype --cpu <cpu_value> --memory <memory_value> --name <instancetype_name> Create a manifest for a cluster-wide instance type. virtctl create instancetype --cpu <cpu_value> --memory <memory_value> --name <instancetype_name> --namespace <namespace_value> Create a manifest for a namespaced instance type. virtctl create preference --name <preference_name> Create a manifest for a cluster-wide VM preference, specifying a name for the preference. 
virtctl create preference --namespace <namespace_value> Create a manifest for a namespaced VM preference. 3.2.2.4. VM management commands You use virtctl virtual machine (VM) management commands to manage and migrate virtual machines (VMs) and virtual machine instances (VMIs). Table 3.4. VM management commands Command Description virtctl start <vm_name> Start a VM. virtctl start --paused <vm_name> Start a VM in a paused state. This option enables you to interrupt the boot process from the VNC console. virtctl stop <vm_name> Stop a VM. virtctl stop <vm_name> --grace-period 0 --force Force stop a VM. This option might cause data inconsistency or data loss. virtctl pause vm <vm_name> Pause a VM. The machine state is kept in memory. virtctl unpause vm <vm_name> Unpause a VM. virtctl migrate <vm_name> Migrate a VM. virtctl migrate-cancel <vm_name> Cancel a VM migration. virtctl restart <vm_name> Restart a VM. 3.2.2.5. VM connection commands You use virtctl connection commands to expose ports and connect to virtual machines (VMs) and virtual machine instances (VMIs). Table 3.5. VM connection commands Command Description virtctl console <vm_name> Connect to the serial console of a VM. virtctl expose vm <vm_name> --name <service_name> --type <ClusterIP|NodePort|LoadBalancer> --port <port> Create a service that forwards a designated port of a VM and expose the service on the specified port of the node. Example: virtctl expose vm rhel9_vm --name rhel9-ssh --type NodePort --port 22 virtctl scp -i <ssh_key> <file_name> <user_name>@<vm_name> Copy a file from your machine to a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key. virtctl scp -i <ssh_key> <user_name@<vm_name>:<file_name> . Copy a file from a VM to your machine. This command uses the private key of an SSH key pair. The VM must be configured with the public key. virtctl ssh -i <ssh_key> <user_name>@<vm_name> Open an SSH connection with a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key. virtctl vnc <vm_name> Connect to the VNC console of a VM. You must have virt-viewer installed. virtctl vnc --proxy-only=true <vm_name> Display the port number and connect manually to a VM by using any viewer through the VNC connection. virtctl vnc --port=<port-number> <vm_name> Specify a port number to run the proxy on the specified port, if that port is available. If a port number is not specified, the proxy runs on a random port. 3.2.2.6. VM export commands Use virtctl vmexport commands to create, download, or delete a volume exported from a VM, VM snapshot, or persistent volume claim (PVC). Certain manifests also contain a header secret, which grants access to the endpoint to import a disk image in a format that OpenShift Virtualization can use. Table 3.6. VM export commands Command Description virtctl vmexport create <vmexport_name> --vm|snapshot|pvc=<object_name> Create a VirtualMachineExport custom resource (CR) to export a volume from a VM, VM snapshot, or PVC. --vm : Exports the PVCs of a VM. --snapshot : Exports the PVCs contained in a VirtualMachineSnapshot CR. --pvc : Exports a PVC. Optional: --ttl=1h specifies the time to live. The default duration is 2 hours. virtctl vmexport delete <vmexport_name> Delete a VirtualMachineExport CR manually. virtctl vmexport download <vmexport_name> --output=<output_file> --volume=<volume_name> Download the volume defined in a VirtualMachineExport CR. --output specifies the file format. 
Example: disk.img.gz . --volume specifies the volume to download. This flag is optional if only one volume is available. Optional: --keep-vme retains the VirtualMachineExport CR after download. The default behavior is to delete the VirtualMachineExport CR after download. --insecure enables an insecure HTTP connection. virtctl vmexport download <vmexport_name> --<vm|snapshot|pvc>=<object_name> --output=<output_file> --volume=<volume_name> Create a VirtualMachineExport CR and then download the volume defined in the CR. virtctl vmexport download export --manifest Retrieve the manifest for an existing export. The manifest does not include the header secret. virtctl vmexport download export --manifest --vm=example Create a VM export for a VM example, and retrieve the manifest. The manifest does not include the header secret. virtctl vmexport download export --manifest --snap=example Create a VM export for a VM snapshot example, and retrieve the manifest. The manifest does not include the header secret. virtctl vmexport download export --manifest --include-secret Retrieve the manifest for an existing export. The manifest includes the header secret. virtctl vmexport download export --manifest --manifest-output-format=json Retrieve the manifest for an existing export in json format. The manifest does not include the header secret. virtctl vmexport download export --manifest --include-secret --output=manifest.yaml Retrieve the manifest for an existing export. The manifest includes the header secret and writes it to the file specified. 3.2.2.7. VM memory dump commands You can use the virtctl memory-dump command to output a VM memory dump on a PVC. You can specify an existing PVC or use the --create-claim flag to create a new PVC. Prerequisites The PVC volume mode must be FileSystem . The PVC must be large enough to contain the memory dump. The formula for calculating the PVC size is (VMMemorySize + 100Mi) * FileSystemOverhead , where 100Mi is the memory dump overhead. You must enable the hot plug feature gate in the HyperConverged custom resource by running the following command: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "add", "path": "/spec/featureGates", \ "value": "HotplugVolumes"}]' Downloading the memory dump You must use the virtctl vmexport download command to download the memory dump: USD virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> \ --volume=<volume_name> --output=<output_file> Table 3.7. VM memory dump commands Command Description virtctl memory-dump get <vm_name> --claim-name=<pvc_name> Save the memory dump of a VM on a PVC. The memory dump status is displayed in the status section of the VirtualMachine resource. Optional: --create-claim creates a new PVC with the appropriate size. This flag has the following options: --storage-class=<storage_class> : Specify a storage class for the PVC. --access-mode=<access_mode> : Specify ReadWriteOnce or ReadWriteMany . virtctl memory-dump get <vm_name> Rerun the virtctl memory-dump command with the same PVC. This command overwrites the memory dump. virtctl memory-dump remove <vm_name> Remove a memory dump. You must remove a memory dump manually if you want to change the target PVC. This command removes the association between the VM and the PVC, so that the memory dump is not displayed in the status section of the VirtualMachine resource. The PVC is not affected. 3.2.2.8. 
Hot plug and hot unplug commands You use virtctl to add or remove resources from running virtual machines (VMs) and virtual machine instances (VMIs). Table 3.8. Hot plug and hot unplug commands Command Description virtctl addvolume <vm_name> --volume-name=<datavolume_or_PVC> [--persist] [--serial=<label>] Hot plug a data volume or persistent volume claim (PVC). Optional: --persist mounts the virtual disk permanently on a VM. This flag does not apply to VMIs. --serial=<label> adds a label to the VM. If you do not specify a label, the default label is the data volume or PVC name. virtctl removevolume <vm_name> --volume-name=<virtual_disk> Hot unplug a virtual disk. virtctl addinterface <vm_name> --network-attachment-definition-name <net_attach_def_name> --name <interface_name> Hot plug a Linux bridge network interface. virtctl removeinterface <vm_name> --name <interface_name> Hot unplug a Linux bridge network interface. 3.2.2.9. Image upload commands You use the virtctl image-upload commands to upload a VM image to a data volume. Table 3.9. Image upload commands Command Description virtctl image-upload dv <datavolume_name> --image-path=</path/to/image> --no-create Upload a VM image to a data volume that already exists. virtctl image-upload dv <datavolume_name> --size=<datavolume_size> --image-path=</path/to/image> Upload a VM image to a new data volume of a specified requested size. 3.2.3. Deploying libguestfs by using virtctl You can use the virtctl guestfs command to deploy an interactive container with libguestfs-tools and a persistent volume claim (PVC) attached to it. Procedure To deploy a container with libguestfs-tools , mount the PVC, and attach a shell to it, run the following command: USD virtctl guestfs -n <namespace> <pvc_name> 1 1 The PVC name is a required argument. If you do not include it, an error message appears. 3.2.3.1. Libguestfs and virtctl guestfs commands Libguestfs tools help you access and modify virtual machine (VM) disk images. You can use libguestfs tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks. You can also use the virtctl guestfs command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter virt- on the command line and press the Tab key. For example: Command Description virt-edit -a /dev/vda /etc/motd Edit a file interactively in your terminal. virt-customize -a /dev/vda --ssh-inject root:string:<public key example> Inject an ssh key into the guest and create a login. virt-df -a /dev/vda -h See how much disk space is used by a VM. virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list' See the full list of all RPMs installed on a guest by creating an output file containing the full list. virt-cat -a /dev/vda /rpm-list Display the output file list of all RPMs created using the virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list' command in your terminal. virt-sysprep -a /dev/vda Seal a virtual machine disk image to be used as a template. By default, virtctl guestfs creates a session with everything needed to manage a VM disk. However, the command also supports several flag options if you want to customize the behavior: Flag Option Description --h or --help Provides help for guestfs . -n <namespace> option with a <pvc_name> argument To use a PVC from a specific namespace. If you do not use the -n <namespace> option, your current project is used. To change projects, use oc project <namespace> . 
If you do not include a <pvc_name> argument, an error message appears. --image string Lists the libguestfs-tools container image. You can configure the container to use a custom image by using the --image option. --kvm Indicates that kvm is used by the libguestfs-tools container. By default, virtctl guestfs sets up kvm for the interactive container, which greatly speeds up the libguest-tools execution because it uses QEMU. If a cluster does not have any kvm supporting nodes, you must disable kvm by setting the option --kvm=false . If not set, the libguestfs-tools pod remains pending because it cannot be scheduled on any node. --pull-policy string Shows the pull policy for the libguestfs image. You can also overwrite the image's pull policy by setting the pull-policy option. The command also checks if a PVC is in use by another pod, in which case an error message appears. However, once the libguestfs-tools process starts, the setup cannot avoid a new pod using the same PVC. You must verify that there are no active virtctl guestfs pods before starting the VM that accesses the same PVC. Note The virtctl guestfs command accepts only a single PVC attached to the interactive pod. 3.2.4. Using Ansible To use the Ansible collection for OpenShift Virtualization, see Red Hat Ansible Automation Hub (Red Hat Hybrid Cloud Console).
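As a brief illustration of how the virtctl commands documented in this chapter fit together, the following hedged example creates a VM manifest, applies it to the current project, starts the VM, and opens an SSH connection. The VM name, user name, and key path are placeholders, and the example assumes the VM is configured with the matching public key as described in the VM connection commands table: virtctl create vm --name example-vm | oc apply -f - then virtctl start example-vm and finally virtctl ssh -i ~/.ssh/id_rsa user@example-vm Piping the generated manifest to oc apply is one common pattern; you can instead redirect the manifest to a file, edit it, and apply it separately.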
[ "tar -xvf <virtctl-version-distribution.arch>.tar.gz", "chmod +x <path/virtctl-file-name>", "echo USDPATH", "export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig", "C:\\> path", "echo USDPATH", "subscription-manager repos --enable cnv-4.16-for-rhel-8-x86_64-rpms", "yum install kubevirt-virtctl", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"add\", \"path\": \"/spec/featureGates\", \"value\": \"HotplugVolumes\"}]'", "virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> --volume=<volume_name> --output=<output_file>", "virtctl guestfs -n <namespace> <pvc_name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/virtualization/getting-started
Chapter 5. Embedding MicroShift applications tutorial
Chapter 5. Embedding MicroShift applications tutorial The following tutorial gives a detailed example of how to embed applications in a RHEL for Edge image for use in a MicroShift cluster in various environments. 5.1. Embed application RPMs tutorial The following tutorial reviews the MicroShift installation steps and adds a description of the workflow for embedding applications. If you are already familiar with rpm-ostree systems such as Red Hat Enterprise Linux for Edge (RHEL for Edge) and MicroShift, you can go straight to the procedures. 5.1.1. Installation workflow review Embedding applications requires a similar workflow to embedding MicroShift into a RHEL for Edge image. The following image shows how system artifacts such as RPMs, containers, and files are added to a blueprint and used by the image composer to create an ostree commit. The ostree commit can then follow either the ISO path or the repository path to edge devices. The ISO path can be used for disconnected environments, while the repository path is often used in places where the network is usually connected. Embedding MicroShift workflow Reviewing these steps can help you understand what is needed to embed an application: To embed MicroShift on RHEL for Edge, you added the MicroShift repositories to image builder. You created a blueprint that declared all the RPMs, container images, files, and customizations you needed, including the addition of MicroShift. You added the blueprint to image builder and ran a build with the image builder CLI tool ( composer-cli ). This step created rpm-ostree commits, which were used to create the container image. This image contained RHEL for Edge. You added the installer blueprint to image builder to create an rpm-ostree image (ISO) to boot from. This build contained both RHEL for Edge and MicroShift. You downloaded the ISO with MicroShift embedded, prepared it for use, provisioned it, then installed it onto your edge devices. 5.1.2. Embed application RPMs workflow After you have set up a build host that meets the image builder requirements, you can add your application in the form of a directory of manifests to the image. After those steps, the simplest way to embed your application or workload into a new ISO is to create your own RPMs that include the manifests. Your application RPMs contain all of the configuration files describing your deployment. The following "Embedding applications workflow" image shows how Kubernetes application manifests and RPM spec files are combined in a single application RPM build. This build becomes the RPM artifact included in the workflow for embedding MicroShift in an ostree commit. Embedding applications workflow The following procedures use the rpmbuild tool to create a specification file and local repository. The specification file defines how the package is built, moving your application manifests to the correct location inside the RPM package for MicroShift to pick them up. That RPM package is then embedded in the ISO. 5.1.3. Preparing to make application RPMs To build your own RPMs, choose a tool, such as the rpmbuild tool, and initialize the RPM build tree in your home directory. The following is an example procedure. As long as your RPMs are accessible to image builder, you can use the method you prefer to build the application RPMs. Prerequisites You have set up a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.4 build host that meets the image builder system requirements. You have root access to the host. 
Procedure Install the rpmbuild tool and create the yum repository for it by running the following command: USD sudo dnf install rpmdevtools rpmlint yum-utils createrepo Create the file tree you need to build RPM packages by running the following command: USD rpmdev-setuptree Verification List the directories to confirm creation by running the following command: USD ls ~/rpmbuild/ Example output BUILD RPMS SOURCES SPECS SRPMS 5.1.4. Building the RPM package for the application manifests To build your own RPMs, you must create a spec file that adds the application manifests to the RPM package. The following is an example procedure. As long as the application RPMs and other elements needed for image building are accessible to image builder, you can use the method that you prefer. Prerequisites You have set up a Red Hat Enterprise Linux for Edge (RHEL for Edge) 9.4 build host that meets the image builder system requirements. You have root access to the host. The file tree required to build RPM packages was created. Procedure In the ~/rpmbuild/SPECS directory, create a file such as <application_workload_manifests.spec> using the following template: Example spec file Name: <application_workload_manifests> Version: 0.0.1 Release: 1%{?dist} Summary: Adds workload manifests to microshift BuildArch: noarch License: GPL Source0: %{name}-%{version}.tar.gz #Requires: microshift %description Adds workload manifests to microshift %prep %autosetup %install 1 rm -rf USDRPM_BUILD_ROOT mkdir -p USDRPM_BUILD_ROOT/%{_prefix}/lib/microshift/manifests cp -pr ~/manifests USDRPM_BUILD_ROOT/%{_prefix}/lib/microshift/ %clean rm -rf USDRPM_BUILD_ROOT %files %{_prefix}/lib/microshift/manifests/** %changelog * <DDD MM DD YYYY username@domain - V major.minor.patch> - <your_change_log_comment> 1 The %install section creates the target directory inside the RPM package, /usr/lib/microshift/manifests/ and copies the manifests from the source home directory, ~/manifests . Important All of the required YAML files must be in the source home directory ~/manifests , including a kustomize.yaml file if you are using kustomize. Build your RPM package in the ~/rpmbuild/RPMS directory by running the following command: USD rpmbuild -bb ~/rpmbuild/SPECS/<application_workload_manifests.spec> 5.1.5. Adding application RPMs to a blueprint To add application RPMs to a blueprint, you must create a local repository that image builder can use to create the ISO. With this procedure, the required container images for your workload can be pulled over the network. Prerequisites You have root access to the host. Workload or application RPMs exist in the ~/rpmbuild/RPMS directory. Procedure Create a local RPM repository by running the following command: USD createrepo ~/rpmbuild/RPMS/ Give image builder access to the RPM repository by running the following command: USD sudo chmod a+rx ~ Note You must ensure that image builder has all of the necessary permissions to access all of the files needed for image building, or the build cannot proceed. Create the blueprint file, repo-local-rpmbuild.toml using the following template: id = "local-rpm-build" name = "RPMs build locally" type = "yum-baseurl" url = "file://<path>/rpmbuild/RPMS" 1 check_gpg = false check_ssl = false system = false 1 Specify part of the path to create a location that you choose. Use this path in the later commands to set up the repository and copy the RPMs. 
Add the repository as a source for image builder by running the following command: USD sudo composer-cli sources add repo-local-rpmbuild.toml Add the RPM to your blueprint, by adding the following lines: ... [[packages]] name = "<application_workload_manifests>" 1 version = "*" ... 1 Add the name of your workload here. Push the updated blueprint to image builder by running the following command: USD sudo composer-cli blueprints push repo-local-rpmbuild.toml At this point, you can either run image builder to create the ISO, or embed the container images for offline use. To create the ISO, start image builder by running the following command: USD sudo composer-cli compose start-ostree repo-local-rpmbuild edge-commit In this scenario, the container images are pulled over the network by the edge device during startup. Additional resources Composing a RHEL for Edge image using the image builder CLI Network-based deployments workflow 5.2. Additional resources Embedding applications for offline use Embedding Red Hat build of MicroShift in an RPM-OSTree image Composing, installing, and managing RHEL for Edge images Preparing for image building Meet Red Hat Device Edge with Red Hat build of MicroShift How to create a Linux RPM package Composing a RHEL for Edge image using image builder command-line Image builder system requirements
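Before adding the RPM to the blueprint, it can be useful to confirm that the manifests really land under /usr/lib/microshift/manifests/ inside the package built in the earlier procedure. A hedged verification example, where the exact RPM file name depends on the Name, Version, Release, and dist tag values produced by your spec file and build host, is: rpm -qlp ~/rpmbuild/RPMS/noarch/<application_workload_manifests>-0.0.1-1.el9.noarch.rpm The output should list /usr/lib/microshift/manifests/ and every YAML file copied from ~/manifests, including the kustomize.yaml file if you are using kustomize.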
[ "sudo dnf install rpmdevtools rpmlint yum-utils createrepo", "rpmdev-setuptree", "ls ~/rpmbuild/", "BUILD RPMS SOURCES SPECS SRPMS", "Name: <application_workload_manifests> Version: 0.0.1 Release: 1%{?dist} Summary: Adds workload manifests to microshift BuildArch: noarch License: GPL Source0: %{name}-%{version}.tar.gz #Requires: microshift %description Adds workload manifests to microshift %prep %autosetup %install 1 rm -rf USDRPM_BUILD_ROOT mkdir -p USDRPM_BUILD_ROOT/%{_prefix}/lib/microshift/manifests cp -pr ~/manifests USDRPM_BUILD_ROOT/%{_prefix}/lib/microshift/ %clean rm -rf USDRPM_BUILD_ROOT %files %{_prefix}/lib/microshift/manifests/** %changelog * <DDD MM DD YYYY username@domain - V major.minor.patch> - <your_change_log_comment>", "rpmbuild -bb ~/rpmbuild/SPECS/<application_workload_manifests.spec>", "createrepo ~/rpmbuild/RPMS/", "sudo chmod a+rx ~", "id = \"local-rpm-build\" name = \"RPMs build locally\" type = \"yum-baseurl\" url = \"file://<path>/rpmbuild/RPMS\" 1 check_gpg = false check_ssl = false system = false", "sudo composer-cli sources add repo-local-rpmbuild.toml", "... [[packages]] name = \"<application_workload_manifests>\" 1 version = \"*\" ...", "sudo composer-cli blueprints push repo-local-rpmbuild.toml", "sudo composer-cli compose start-ostree repo-local-rpmbuild edge-commit" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/microshift-embedding-apps-tutorial
Chapter 4. Administrator tasks
Chapter 4. Administrator tasks 4.1. Adding Operators to a cluster Cluster administrators can install Operators to an OpenShift Container Platform cluster by subscribing Operators to namespaces with OperatorHub. 4.1.1. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 4.1.2. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Select one of the following: All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. 
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 4.1.3. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces , then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. 
Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources About Operator groups 4.1.4. Installing a specific version of an Operator You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions OpenShift CLI ( oc ) installed Procedure Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. 
For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0: Subscription with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. Create the Subscription object: USD oc apply -f sub.yaml Manually approve the pending install plan to complete the Operator installation. Additional resources Manually approving a pending Operator update 4.1.5. Pod placement of Operator workloads By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes. Controlling pod placement of Operator and Operand workloads has the following prerequisites: Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app , that identifies the node or nodes. Otherwise, add a label, such as myoperator , by using a machine set or editing the node directly. You will use this label in a later step as the node selector on your project. If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain. Create a project that is configured with a default node selector and, if you added a taint, a matching toleration. At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios: For Operator pods Administrators can create a Subscription object in the project. As a result, the Operator pods are placed on the specified nodes. For Operand pods Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply. Additional resources Adding taints and tolerations manually to nodes or with machine sets Creating project-wide node selectors Creating a project with a node selector and toleration 4.2. Updating installed Operators As a cluster administrator, you can update Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster. 4.2.1. 
Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 4.2.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 4.2.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 4.3.
Deleting Operators from a cluster The following describes how to delete, or uninstall, Operators that were previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster. Important You must successfully and completely uninstall an Operator prior to attempting to reinstall the same Operator. Failure to fully uninstall the Operator properly can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages to be observed when trying to reinstall the Operator. For more information, see Reinstalling Operators after failed uninstallation . 4.3.1. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 4.3.2. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on workstation. Procedure Check the current version of the subscribed Operator (for example, jaeger ) in the currentCSV field: USD oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV Example output currentCSV: jaeger-operator.v1.8.2 Delete the subscription (for example, jaeger ): USD oc delete subscription jaeger -n openshift-operators Example output subscription.operators.coreos.com "jaeger" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators Example output clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted 4.3.3. Refreshing failing subscriptions In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors: Example output ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e" Example output rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade. 
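Before refreshing the subscription as described in the following procedure, you can quickly confirm which jobs and pods in the openshift-marketplace namespace are affected. The following commands are a minimal sketch; the grep pattern is only illustrative and assumes the image pull errors shown above:

$ oc get jobs -n openshift-marketplace
# Jobs stuck at 0/1 COMPLETIONS typically correspond to bundle images that cannot be pulled

$ oc get pods -n openshift-marketplace | grep -iE 'imagepullbackoff|errimagepull'
# Lists only the pods that are failing to pull an image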
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator. Prerequisites You have a failing subscription that is unable to pull an inaccessible bundle image. You have confirmed that the correct bundle image is accessible. Procedure Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed: USD oc get sub,csv -n <namespace> Example output NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded Delete the subscription: USD oc delete subscription <subscription_name> -n <namespace> Delete the cluster service version: USD oc delete csv <csv_name> -n <namespace> Get the names of any failing jobs and related config maps in the openshift-marketplace namespace: USD oc get job,configmap -n openshift-marketplace Example output NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s Delete the job: USD oc delete job <job_name> -n openshift-marketplace This ensures pods that try to pull the inaccessible image are not recreated. Delete the config map: USD oc delete configmap <configmap_name> -n openshift-marketplace Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> 4.4. Configuring Operator Lifecycle Manager features The Operator Lifecycle Manager (OLM) controller is configured by an OLMConfig custom resource (CR) named cluster . Cluster administrators can modify this resource to enable or disable certain features. This document outlines the features currently supported by OLM that are configured by the OLMConfig resource. 4.4.1. Disabling copied CSVs When an Operator is installed by Operator Lifecycle Manager (OLM), a simplified copy of its cluster service version (CSV) is created in every namespace that the Operator is configured to watch. These CSVs are known as copied CSVs and communicate to users which controllers are actively reconciling resource events in a given namespace. When Operators are configured to use the AllNamespaces install mode, versus targeting a single or specified set of namespaces, a copied CSV is created in every namespace on the cluster. On especially large clusters, with namespaces and installed Operators potentially in the hundreds or thousands, copied CSVs consume an untenable amount of resources, such as OLM's memory usage, cluster etcd limits, and networking. To support these larger clusters, cluster administrators can disable copied CSVs for Operators installed with the AllNamespaces mode. Warning If you disable copied CSVs, a user's ability to discover Operators in the OperatorHub and CLI is limited to Operators installed directly in the user's namespace. If an Operator is configured to reconcile events in the user's namespace but is installed in a different namespace, the user cannot view the Operator in the OperatorHub or CLI. 
Operators affected by this limitation are still available and continue to reconcile events in the user's namespace. This behavior occurs for the following reasons: Copied CSVs identify the Operators available for a given namespace. Role-based access control (RBAC) scopes the user's ability to view and discover Operators in the OperatorHub and CLI. Procedure Edit the OLMConfig object named cluster and set the spec.features.disableCopiedCSVs field to true : USD oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF 1 Disabled copied CSVs for AllNamespaces install mode Operators Verification When copied CSVs are disabled, OLM captures this information in an event in the Operator's namespace: USD oc get events Example output LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0 When the spec.features.disableCopiedCSVs field is missing or set to false , OLM recreates the copied CSVs for all Operators installed with the AllNamespaces mode and deletes the previously mentioned events. Additional resources Install modes 4.5. Configuring proxy support in Operator Lifecycle Manager If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate. Additional resources Configuring the cluster-wide proxy Configuring a custom PKI (custom CA certificate) Developing Operators that support proxy settings for Go , Ansible , and Helm 4.5.1. Overriding proxy settings of an Operator If a cluster-wide egress proxy is configured, Operators running with Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their deployments. Cluster administrators can also override these proxy settings by configuring the subscription of an Operator. Important Operators must handle setting environment variables for proxy settings in the pods for any managed Operands. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Select the Operator and click Install . On the Install Operator page, modify the Subscription object to include one or more of the following environment variables in the spec section: HTTP_PROXY HTTPS_PROXY NO_PROXY For example: Subscription object with proxy setting overrides apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide Note These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings. OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the deployments of the subscribed Operator. Click Install to make the Operator available to the selected namespaces. 
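If the Operator is already installed, you can apply the same proxy overrides from the CLI instead of the web console by patching its subscription. The following command is a minimal sketch that uses placeholder names; note that a JSON merge patch replaces the entire spec.config.env list, so include every variable that you want to keep:

$ oc patch subscription <subscription_name> -n <namespace> \
    --type merge \
    -p '{"spec":{"config":{"env":[{"name":"HTTP_PROXY","value":"test_http"},{"name":"HTTPS_PROXY","value":"test_https"},{"name":"NO_PROXY","value":"test"}]}}}'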
After the CSV for the Operator appears in the relevant namespace, you can verify that custom proxy environment variables are set in the deployment. For example, using the CLI: USD oc get deployment -n openshift-operators \ etcd-operator -o yaml \ | grep -i "PROXY" -A 2 Example output - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c ... 4.5.2. Injecting a custom CA certificate When a cluster administrator adds a custom CA certificate to a cluster using a config map, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Custom CA certificate added to the cluster using a config map. Desired Operator installed and running on OLM. Procedure Create an empty config map in the namespace where the subscription for your Operator exists and include the following label: apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: "true" 2 1 Name of the config map. 2 Requests the Cluster Network Operator to inject the merged bundle. After creating this config map, it is immediately populated with the certificate contents of the merged bundle. Update your Subscription object to include a spec.config section that mounts the trusted-ca config map as a volume to each container within a pod that requires a custom CA: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true 1 Add a config section if it does not exist. 2 Specify labels to match pods that are owned by the Operator. 3 Create a trusted-ca volume. 4 ca-bundle.crt is required as the config map key. 5 tls-ca-bundle.pem is required as the config map path. 6 Create a trusted-ca volume mount. Note Deployments of an Operator can fail to validate the authority and display an x509 certificate signed by unknown authority error. This error can occur even after injecting a custom CA when using the subscription of an Operator. In this case, you can set the mountPath as /etc/ssl/certs for trusted-ca by using the subscription of an Operator. 4.6. Viewing Operator status Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into subscriptions and related catalog sources regarding their state and actions performed. This helps users better understand the healthiness of their Operators. 4.6.1. Operator subscription condition types Subscriptions can report the following condition types: Table 4.1. Subscription condition types Condition Description CatalogSourcesUnhealthy Some or all of the catalog sources to be used in resolution are unhealthy. InstallPlanMissing An install plan for a subscription is missing.
InstallPlanPending An install plan for a subscription is pending installation. InstallPlanFailed An install plan for a subscription has failed. ResolutionFailed The dependency resolution for a subscription has failed. Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. Additional resources Refreshing failing subscriptions 4.6.2. Viewing Operator subscription status by using the CLI You can view Operator subscription status by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List Operator subscriptions: USD oc get subs -n <operator_namespace> Use the oc describe command to inspect a Subscription resource: USD oc describe sub <subscription_name> -n <operator_namespace> In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy: Example output Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. 4.6.3. Viewing Operator catalog source status by using the CLI You can view the status of an Operator catalog source by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources: USD oc get catalogsources -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m Use the oc describe command to get more details and status about a catalog source: USD oc describe catalogsource example-catalog -n openshift-marketplace Example output Name: example-catalog Namespace: openshift-marketplace ... Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace In the preceding example output, the last observed state is TRANSIENT_FAILURE . This state indicates that there is a problem establishing a connection for the catalog source. 
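If you only need the connection state, a jsonpath query can read it directly instead of scanning the full describe output. This is a sketch against the example-catalog source shown above; the field path follows the Connection State block in the previous output, and a healthy source typically reports READY:

$ oc get catalogsource example-catalog -n openshift-marketplace \
    -o jsonpath='{.status.connectionState.lastObservedState}{"\n"}'
Example output
TRANSIENT_FAILURE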
List the pods in the namespace where your catalog source was created: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff . This status indicates that there is an issue pulling the catalog source's index image. Use the oc describe command to inspect a pod for more detailed information: USD oc describe pod example-catalog-bwt8z -n openshift-marketplace Example output Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials. Additional resources Operator Lifecycle Manager concepts and resources Catalog source gRPC documentation: States of Connectivity Accessing images for Operators from private registries 4.7. Managing Operator conditions As a cluster administrator, you can manage Operator conditions by using Operator Lifecycle Manager (OLM). 4.7.1. Overriding Operator conditions As a cluster administrator, you might want to ignore a supported Operator condition reported by an Operator. When present, Operator conditions in the Spec.Overrides array override the conditions in the Spec.Conditions array, allowing cluster administrators to deal with situations where an Operator is incorrectly reporting a state to Operator Lifecycle Manager (OLM). Note By default, the Spec.Overrides array is not present in an OperatorCondition object until it is added by a cluster administrator. The Spec.Conditions array is also not present until it is either added by a user or as a result of custom Operator logic. For example, consider a known version of an Operator that always communicates that it is not upgradeable. In this instance, you might want to upgrade the Operator despite the Operator communicating that it is not upgradeable. This could be accomplished by overriding the Operator condition by adding the condition type and status to the Spec.Overrides array in the OperatorCondition object. 
Prerequisites An Operator with an OperatorCondition object, installed using OLM. Procedure Edit the OperatorCondition object for the Operator: USD oc edit operatorcondition <name> Add a Spec.Overrides array to the object: Example Operator condition override apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: "True" reason: "upgradeIsSafe" message: "This is a known issue with the Operator where it always reports that it cannot be upgraded." conditions: - type: Upgradeable status: "False" reason: "migration" message: "The operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Allows the cluster administrator to change the upgrade readiness to True . 4.7.2. Updating your Operator to use Operator conditions Operator Lifecycle Manager (OLM) automatically creates an OperatorCondition resource for each ClusterServiceVersion resource that it reconciles. All service accounts in the CSV are granted the RBAC to interact with the OperatorCondition owned by the Operator. An Operator author can develop their Operator to use the operator-lib library such that, after the Operator has been deployed by OLM, it can set its own conditions. For more resources about setting Operator conditions as an Operator author, see the Enabling Operator conditions page. 4.7.2.1. Setting defaults In an effort to remain backwards compatible, OLM treats the absence of an OperatorCondition resource as opting out of the condition. Therefore, an Operator that opts in to using Operator conditions should set default conditions before the ready probe for the pod is set to true . This provides the Operator with a grace period to update the condition to the correct state. 4.7.3. Additional resources Operator conditions 4.8. Allowing non-cluster administrators to install Operators Cluster administrators can use Operator groups to allow regular users to install Operators. Additional resources Operator groups 4.8.1. Understanding Operator installation policy Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the cluster service version (CSV), and OLM consequently grants it to the Operator. To ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM, Cluster administrators can manually audit Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade using service accounts. Cluster administrators can associate an Operator group with a service account that has a set of privileges granted to it. The service account sets policy on Operators to ensure they only run within predetermined boundaries by using role-based access control (RBAC) rules. As a result, the Operator is unable to do anything that is not explicitly permitted by those rules. By employing Operator groups, users with enough privileges can install Operators with a limited scope. As a result, more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators. 
Note Role-based access control (RBAC) for Subscription objects is automatically granted to every user with the edit or admin role in a namespace. However, RBAC does not exist on OperatorGroup objects; this absence is what prevents regular users from installing Operators. Pre-installing Operator groups is effectively what gives installation privileges. Keep the following points in mind when associating an Operator group with a service account: The APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an Operator group should never be granted privileges to write these resources. Any Operator tied to this Operator group is now confined to the permissions granted to the specified service account. If the Operator asks for permissions that are outside the scope of the service account, the install fails with appropriate errors so the cluster administrator can troubleshoot and resolve the issue. 4.8.1.1. Installation scenarios When determining whether an Operator can be installed or upgraded on a cluster, Operator Lifecycle Manager (OLM) considers the following scenarios: A cluster administrator creates a new Operator group and specifies a service account. All Operator(s) associated with this Operator group are installed and run against the privileges granted to the service account. A cluster administrator creates a new Operator group and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted. For existing Operator groups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted. A cluster administrator updates an existing Operator group and specifies a service account. OLM allows the existing Operator to continue to run with its current privileges. When such an existing Operator is going through an upgrade, it is reinstalled and run against the privileges granted to the service account like any new Operator. A service account specified by an Operator group changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, they are reinstalled and run against the privileges granted to the updated service account like any new Operator. A cluster administrator removes the service account from an Operator group. The default behavior remains and Operator installs and upgrades are permitted. 4.8.1.2. Installation workflow When an Operator group is tied to a service account and an Operator is installed or upgraded, Operator Lifecycle Manager (OLM) uses the following workflow: The given Subscription object is picked up by OLM. OLM fetches the Operator group tied to this subscription. OLM determines that the Operator group has a service account specified. OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the Operator group. OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account. 4.8.2. Scoping Operator installations To provide scoping rules to Operator installations and upgrades on Operator Lifecycle Manager (OLM), associate a service account with an Operator group.
Using this example, a cluster administrator can confine a set of Operators to a designated namespace. Procedure Create a new namespace: USD cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant role(s), and role binding(s). USD cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF Create an OperatorGroup object in the designated namespace. This Operator group targets the designated namespace to ensure that its tenancy is confined to it. In addition, Operator groups allow a user to specify a service account. Specify the service account created in the step: USD cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF Any Operator installed in the designated namespace is tied to this Operator group and therefore to the service account specified. Create a Subscription object in the designated namespace to install an Operator: USD cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF 1 Specify a catalog source that already exists in the designated namespace or one that is in the global catalog namespace. 2 Specify a namespace where the catalog source was created. Any Operator tied to this Operator group is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with relevant errors. 4.8.2.1. Fine-grained permissions Operator Lifecycle Manager (OLM) uses the service account specified in an Operator group to create or update the following resources related to the Operator being installed: ClusterServiceVersion Subscription Secret ServiceAccount Service ClusterRole and ClusterRoleBinding Role and RoleBinding To confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account: Note The following role is a generic example and additional rules might be required based on the specific Operator. 
kind: Role rules: - apiGroups: ["operators.coreos.com"] resources: ["subscriptions", "clusterserviceversions"] verbs: ["get", "create", "update", "patch"] - apiGroups: [""] resources: ["services", "serviceaccounts"] verbs: ["get", "create", "update", "patch"] - apiGroups: ["rbac.authorization.k8s.io"] resources: ["roles", "rolebindings"] verbs: ["get", "create", "update", "patch"] - apiGroups: ["apps"] 1 resources: ["deployments"] verbs: ["list", "watch", "get", "create", "update", "patch", "delete"] - apiGroups: [""] 2 resources: ["pods"] verbs: ["list", "watch", "get", "create", "update", "patch", "delete"] 1 2 Add permissions to create other resources, such as deployments and pods shown here. In addition, if any Operator specifies a pull secret, the following permissions must also be added: kind: ClusterRole 1 rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get"] --- kind: Role rules: - apiGroups: [""] resources: ["secrets"] verbs: ["create", "update", "patch"] 1 Required to get the secret from the OLM namespace. 4.8.3. Operator catalog access control When an Operator catalog is created in the global catalog namespace openshift-marketplace , the catalog's Operators are made available cluster-wide to all namespaces. A catalog created in other namespaces only makes its Operators available in that same namespace of the catalog. On clusters where non-cluster administrator users have been delegated Operator installation privileges, cluster administrators might want to further control or restrict the set of Operators those users are allowed to install. This can be achieved with the following actions: Disable all of the default global catalogs. Enable custom, curated catalogs in the same namespace where the relevant Operator groups have been pre-installed. Additional resources Disabling the default OperatorHub sources Adding a catalog source to a cluster 4.8.4. Troubleshooting permission failures If an Operator installation fails due to lack of permissions, identify the errors using the following procedure. Procedure Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding] object(s) for the Operator: apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: "117359" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23 Check the status of the InstallPlan object for any errors: apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: "2019-07-26T21:13:10Z" lastUpdateTime: "2019-07-26T21:13:10Z" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:scoped:scoped" cannot create resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope' reason: InstallComponentFailed status: "False" type: Installed phase: Failed The error message tells you: The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group. The name of the resource. The type of error: is forbidden tells you that the user does not have enough permission to do the operation. The name of the user who attempted to create or update the resource. 
In this case, it refers to the service account specified in the Operator group. The scope of the operation: cluster scope or not. The user can add the missing permission to the service account and then iterate. Note Operator Lifecycle Manager (OLM) does not currently provide the complete list of errors on the first try. 4.9. Managing custom catalogs Cluster administrators and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. Additional resources Red Hat-provided Operator catalogs 4.9.1. Prerequisites Install the opm CLI . 4.9.2. File-based catalogs File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. For more details about the file-based catalog specification, see Operator Framework packaging format . 4.9.2.1. Creating a file-based catalog image You can create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format. The opm CLI provides tooling that helps initialize a catalog in the file-based format, render new records into it, and validate that the catalog is valid. Prerequisites opm podman version 1.9.3+ A bundle image built and pushed to a registry that supports Docker v2-2 Procedure Initialize a catalog for a file-based catalog: Create a directory for the catalog: USD mkdir <operator_name>-index Create a Dockerfile that can build a catalog image: Example <operator_name>-index.Dockerfile # The base image is expected to contain # /bin/opm (with a serve subcommand) and /bin/grpc_health_probe FROM registry.redhat.io/openshift4/ose-operator-registry:v4.9 # Configure the entrypoint and command ENTRYPOINT ["/bin/opm"] CMD ["serve", "/configs"] # Copy declarative config root into image at /configs ADD <operator_name>-index /configs # Set DC-specific label for the location of the DC root directory # in the image LABEL operators.operatorframework.io.index.configs.v1=/configs The Dockerfile must be in the same parent directory as the catalog directory that you created in the step: Example directory structure . ├── <operator_name>-index └── <operator_name>-index.Dockerfile Populate the catalog with your package definition: USD opm init <operator_name> \ 1 --default-channel=preview \ 2 --description=./README.md \ 3 --icon=./operator-icon.svg \ 4 --output yaml \ 5 > <operator_name>-index/index.yaml 6 1 Operator, or package, name. 2 Channel that subscription will default to if unspecified. 3 Path to the Operator's README.md or other documentation. 4 Path to the Operator's icon. 5 Output format: JSON or YAML. 6 Path for creating the catalog configuration file. This command generates an olm.package declarative config blob in the specified catalog configuration file. 
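For reference, the generated blob resembles the following YAML. The values shown are illustrative and depend on the flags passed to opm init, and the exact field layout can differ slightly between opm versions:

---
defaultChannel: preview
description: |
  <contents of README.md>
icon:
  base64data: <base64-encoded contents of operator-icon.svg>
  mediatype: image/svg+xml
name: <operator_name>
schema: olm.package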
Add a bundle to the catalog: USD opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ 1 --output=yaml \ >> <operator_name>-index/index.yaml 2 1 Pull spec for the bundle image. 2 Path to the catalog configuration file. The opm render command generates a declarative config blob from the provided catalog images and bundle images. Note Channels must contain at least one bundle. Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <operator_name>-index/index.yaml file: Example channel entry --- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1 1 Ensure that you include the period ( . ) after <operator_name> but before the v in the version. Otherwise, the entry will fail to pass the opm validate command. Validate the file-based catalog: Run the opm validate command against the catalog directory: USD opm validate <operator_name>-index Check that the error code is 0 : USD echo USD? Example output 0 Build the catalog image: USD podman build . \ -f <operator_name>-index.Dockerfile \ -t <registry>/<namespace>/<catalog_image_name>:<tag> Push the catalog image to a registry: If required, authenticate with your target registry: USD podman login <registry> Push the catalog image: USD podman push <registry>/<namespace>/<catalog_image_name>:<tag> 4.9.3. SQLite-based catalogs Important The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 4.9.3.1. Creating a SQLite-based index image You can create an index image based on the SQLite database format by using the opm CLI. Prerequisites opm podman version 1.9.3+ A bundle image built and pushed to a registry that supports Docker v2-2 Procedure Start a new index: USD opm index add \ --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \ 2 [--binary-image <registry_base_image>] 3 1 Comma-separated list of bundle images to add to the index. 2 The image tag that you want the index image to have. 3 Optional: An alternative registry base image to use for serving the catalog. Push the index image to a registry. If required, authenticate with your target registry: USD podman login <registry> Push the index image: USD podman push <registry>/<namespace>/<index_image_name>:<tag> 4.9.3.2. Updating a SQLite-based index image After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image. You can update an existing index image using the opm index add command. Prerequisites opm podman version 1.9.3+ An index image built and pushed to a registry. An existing catalog source referencing the index image. 
Procedure Update the existing index by adding bundle images: USD opm index add \ --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3 --pull-tool podman 4 1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index. 2 The --from-index flag specifies the previously pushed index. 3 The --tag flag specifies the image tag to apply to the updated index image. 4 The --pull-tool flag specifies the tool used to pull container images. where: <registry> Specifies the hostname of the registry, such as quay.io or mirror.example.com . <namespace> Specifies the namespace of the registry, such as ocs-dev or abc . <new_bundle_image> Specifies the new bundle image to add to the registry, such as ocs-operator . <digest> Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 . <existing_index_image> Specifies the previously pushed image, such as abc-redhat-operator-index . <existing_tag> Specifies a previously pushed image tag, such as 4.10 . <updated_tag> Specifies the image tag to apply to the updated index image, such as 4.10.1 . Example command USD opm index add \ --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \ --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 \ --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 \ --pull-tool podman Push the updated index image: USD podman push <registry>/<namespace>/<existing_index_image>:<updated_tag> After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added: USD oc get packagemanifests -n openshift-marketplace 4.9.3.3. Filtering a SQLite-based index image An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune , an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want. Prerequisites podman version 1.9.3+ grpcurl (third-party command-line tool) opm Access to a registry that supports Docker v2-2 Procedure Authenticate with your target registry: USD podman login <target_registry> Determine the list of packages you want to include in your pruned index. Run the source index image that you want to prune in a container. For example: USD podman run -p50051:50051 \ -it registry.redhat.io/redhat/redhat-operator-index:v4.10 Example output Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10... Getting image source signatures Copying blob ae8a0c23f5b1 done ... INFO[0000] serving registry database=/database/index.db port=50051 In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index: USD grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example: Example snippets of packages list ... { "name": "advanced-cluster-management" } ... { "name": "jaeger-product" } ... { { "name": "quay-operator" } ... In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process. 
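Because packages.out contains one JSON object per package, you can optionally use jq, if it is installed, to review the names or to build the comma-separated list used in the next step. This is a convenience sketch and is not required by the procedure:

$ jq -r '.name' packages.out | sort
# Prints every package name in the index, one per line

$ jq -r '.name' packages.out | sort | paste -sd, -
# Joins the names into a comma-separated list suitable for the -p flag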
Run the following command to prune the source index of all but the specified packages: USD opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \ 1 -p advanced-cluster-management,jaeger-product,quay-operator \ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 4 1 Index to prune. 2 Comma-separated list of packages to keep. 3 Required only for IBM Power and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version. 4 Custom tag for new index image being built. Run the following command to push the new index image to your target registry: USD podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 where <namespace> is any existing namespace on the registry. 4.9.4. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}" spec: sourceType: grpc image: <registry>/<namespace>/<index_image_name>:<tag> 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m 1 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 2 Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. 3 Specify your index image. If you specify a tag after the image name, for example :v4.10 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 4 Specify your name or an organization name publishing the catalog. 5 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. 
Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Operator Lifecycle Manager concepts and resources Catalog source Accessing images for Operators from private registries Image pull policy 4.9.5. Accessing images for Operators from private registries If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation. Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access. The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access: Index images A CatalogSource object can reference an index image, which uses the Operator bundle format and is a catalog source packaged as a container image hosted in an image registry. If an index image is hosted in a private registry, a secret can be used to enable pull access. Bundle images Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access. Operator and Operand images If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed. Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces. Prerequisites At least one of the following hosted in a private registry: An index image or catalog image. An Operator bundle image. An Operator or Operand image. Procedure Create a secret for each required private registry.
Log in to the private registry to create or update your registry credentials file: USD podman login <registry>:<port> Note The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is USD{XDG_RUNTIME_DIR}/containers/auth.json . For the docker CLI, the default location is /root/.docker/config.json . It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OpenShift Container Platform will merge the secrets into a single virtual credentials file for use during an image pull. A registry credentials file can, by default, store details for more than one registry or for multiple repositories in one registry. Verify the current contents of your file. For example: File storing credentials for multiple registries { "auths": { "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" }, "quay.io": { "auth": "fegdsRib21iMQ==" }, "https://quay.io/my-namespace/my-user/my-image": { "auth": "eWfjwsDdfsa221==" }, "https://quay.io/my-namespace/my-user": { "auth": "feFweDdscw34rR==" }, "https://quay.io/my-namespace": { "auth": "frwEews4fescyq==" } } } Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods: Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains. Edit your registry credentials file and separate the registry details to be stored in multiple files. For example: File storing credentials for one registry { "auths": { "registry.redhat.io": { "auth": "FrNHNydQXdzclNqdg==" } } } File storing credentials for another registry { "auths": { "quay.io": { "auth": "Xd2lhdsbnRib21iMQ==" } } } Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry: USD oc create secret generic <secret_name> \ -n openshift-marketplace \ --from-file=.dockerconfigjson=<path/to/registry/credentials> \ --type=kubernetes.io/dockerconfigjson Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path. Create or update an existing CatalogSource object to reference one or more secrets: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - "<secret_name_1>" - "<secret_name_2>" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m 1 Add a spec.secrets section and specify any required secrets. If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster, or individual target tenant namespaces. To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the openshift-config namespace. Warning Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster. 
Extract the .dockerconfigjson file from the global pull secret: USD oc extract secret/pull-secret -n openshift-config --confirm Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file: USD cat .dockerconfigjson | \ jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \ 1 > new_dockerconfigjson 1 Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials. Update the global pull secret with the new file: USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=new_dockerconfigjson To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace. Recreate the secret that you created for the openshift-marketplace in the tenant namespace: USD oc create secret generic <secret_name> \ -n <tenant_namespace> \ --from-file=.dockerconfigjson=<path/to/registry/credentials> \ --type=kubernetes.io/dockerconfigjson Verify the name of the service account for the Operator by searching the tenant namespace: USD oc get sa -n <tenant_namespace> 1 1 If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace. Example output NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1 1 Service account for an installed etcd Operator. Link the secret to the service account for the Operator: USD oc secrets link <operator_sa> \ -n <tenant_namespace> \ <secret_name> \ --for=pull Additional resources See What is a secret? for more information on the types of secrets, including those used for registry credentials. See Updating the global cluster pull secret for more details on the impact of changing this secret. See Allowing pods to reference images from other secured registries for more details on linking pull secrets to service accounts per namespace. 4.9.6. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. As a cluster administrator, you can disable the set of default catalogs. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 4.9.7. Removing custom catalogs As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source. Procedure In the Administrator perspective of the web console, navigate to Administration Cluster Settings . Click the Configuration tab, and then click OperatorHub . Click the Sources tab. Select the Options menu for the catalog that you want to remove, and then click Delete CatalogSource . 4.10. 
Using Operator Lifecycle Manager on restricted networks For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters , Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full internet access. The workstation, which requires full internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry. The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or a completely disconnected, or airgapped , host, which requires removable media to physically move the mirrored content to the disconnected environment. This guide describes the following process that is required to enable OLM in restricted networks: Disable the default remote OperatorHub sources for OLM. Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry. Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources. After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released. Important While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself meeting the following criteria: List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object. Reference all specified images by a digest (SHA) and not by a tag. You can search software on the Red Hat Ecosystem Catalog for a list of Red Hat Operators that support running in disconnected mode by filtering with the following selections: Type Containerized application Deployment method Operator Infrastructure features Disconnected Additional resources Red Hat-provided Operator catalogs Enabling your Operator for restricted network environments 4.10.1. Prerequisites Log in to your OpenShift Container Platform cluster as a user with cluster-admin privileges. If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI . Note If you are using OLM in a restricted network on IBM Z, you must have at least 12 GB allocated to the directory where you place your registry. 4.10.2. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 4.10.3. Filtering a SQLite-based index image An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune , an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want. When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs. For the steps in this procedure, the target registry is an existing mirror registry that is accessible by your workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for any index image. Prerequisites Workstation with unrestricted network access podman version 1.9.3+ grpcurl (third-party command-line tool) opm Access to a registry that supports Docker v2-2 Procedure Authenticate with registry.redhat.io : USD podman login registry.redhat.io Authenticate with your target registry: USD podman login <target_registry> Determine the list of packages you want to include in your pruned index. Run the source index image that you want to prune in a container. For example: USD podman run -p50051:50051 \ -it registry.redhat.io/redhat/redhat-operator-index:v4.10 Example output Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10... Getting image source signatures Copying blob ae8a0c23f5b1 done ... INFO[0000] serving registry database=/database/index.db port=50051 In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index: USD grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example: Example snippets of packages list ... { "name": "advanced-cluster-management" } ... { "name": "jaeger-product" } ... { { "name": "quay-operator" } ... In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process. Run the following command to prune the source index of all but the specified packages: USD opm index prune \ -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \ 1 -p advanced-cluster-management,jaeger-product,quay-operator \ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 4 1 Index to prune. 2 Comma-separated list of packages to keep. 3 Required only for IBM Power and IBM Z images: Operator Registry base image with the tag that matches the target OpenShift Container Platform cluster major and minor version. 4 Custom tag for new index image being built. Run the following command to push the new index image to your target registry: USD podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to. 4.10.4. Mirroring an Operator catalog For instructions about mirroring Operator catalogs for use with disconnected clusters, see Installing Mirroring images for a disconnected installation . 4.10.5. 
Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Prerequisites An index image built and pushed to a registry. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.10 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any backslash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify your index image. If you specify a tag after the image name, for example :v4.10 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 4 Specify your name or an organization name publishing the catalog. 5 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Accessing images for Operators from private registries Image template for custom catalog sources Image pull policy 4.10.6. Updating a SQLite-based index image After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image. You can update an existing index image using the opm index add command. For restricted networks, the updated content must also be mirrored again to the cluster. Prerequisites opm podman version 1.9.3+ An index image built and pushed to a registry. An existing catalog source referencing the index image. 
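Optionally confirm that the required tools are available before you begin; a minimal sketch:

# Check the installed tool versions against the prerequisites above
$ opm version
$ podman --version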
Procedure Update the existing index by adding bundle images: USD opm index add \ --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \ 3 --pull-tool podman 4 1 The --bundles flag specifies a comma-separated list of additional bundle images to add to the index. 2 The --from-index flag specifies the previously pushed index. 3 The --tag flag specifies the image tag to apply to the updated index image. 4 The --pull-tool flag specifies the tool used to pull container images. where: <registry> Specifies the hostname of the registry, such as quay.io or mirror.example.com . <namespace> Specifies the namespace of the registry, such as ocs-dev or abc . <new_bundle_image> Specifies the new bundle image to add to the registry, such as ocs-operator . <digest> Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 . <existing_index_image> Specifies the previously pushed image, such as abc-redhat-operator-index . <existing_tag> Specifies a previously pushed image tag, such as 4.10 . <updated_tag> Specifies the image tag to apply to the updated index image, such as 4.10.1 . Example command USD opm index add \ --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \ --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 \ --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 \ --pull-tool podman Push the updated index image: USD podman push <registry>/<namespace>/<existing_index_image>:<updated_tag> Follow the steps in the Mirroring an Operator catalog procedure again to mirror the updated content. However, when you get to the step about creating the ImageContentSourcePolicy (ICSP) object, use the oc replace command instead of the oc create command. For example: USD oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml This change is required because the object already exists and must be updated. Note Normally, the oc apply command can be used to update existing objects that were previously created using oc apply . However, due to a known issue regarding the size of the metadata.annotations field in ICSP objects, the oc replace command must be used for this step currently. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added: USD oc get packagemanifests -n openshift-marketplace Additional resources Mirroring an Operator catalog 4.11. Catalog source pod scheduling When an Operator Lifecycle Manager (OLM) catalog source of source type grpc defines a spec.image , the Catalog Operator creates a pod that serves the defined image content. By default, this pod defines the following in its spec: Only the kubernetes.io/os=linux node selector No priority class name No tolerations As an administrator, you can override these values by modifying fields in the CatalogSource object's optional spec.grpcPodConfig section. Additional resources OLM concepts and resources Catalog source 4.11.1. 
Overriding the node selector for catalog source pods Prerequisites CatalogSource object of source type grpc with spec.image defined Procedure Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following: grpcPodConfig: nodeSelector: custom_label: <label> where <label> is the label for the node selector that you want catalog source pods to use for scheduling. Additional resources Placing pods on specific nodes using node selectors 4.11.2. Overriding the priority class name for catalog source pods Prerequisites CatalogSource object of source type grpc with spec.image defined Procedure Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following: grpcPodConfig: priorityClassName: <priority_class> where <priority_class> is one of the following: One of the default priority classes provided by Kubernetes: system-cluster-critical or system-node-critical An empty set ( "" ) to assign the default priority A pre-existing and custom-defined priority class Note Previously, the only pod scheduling parameter that could be overridden was priorityClassName . This was done by adding the operatorframework.io/priorityclass annotation to the CatalogSource object. For example: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical If a CatalogSource object defines both the annotation and spec.grpcPodConfig.priorityClassName , the annotation takes precedence over the configuration parameter. Additional resources Pod priority classes 4.11.3. Overriding tolerations for catalog source pods Prerequisites CatalogSource object of source type grpc with spec.image defined Procedure Edit the CatalogSource object and add or modify the spec.grpcPodConfig section to include the following: grpcPodConfig: tolerations: - key: "<key_name>" operator: "<operator_type>" value: "<value>" effect: "<effect>" Additional resources Understanding taints and tolerations
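Rather than editing the object interactively, the same overrides can be applied with a patch. The following is a minimal sketch that sets a priority class and a toleration on a hypothetical catalog source named my-operator-catalog; the toleration values are placeholders and must match your node taints:

# Apply a grpcPodConfig override as a merge patch (values are illustrative)
$ oc patch catalogsource my-operator-catalog -n openshift-marketplace --type merge -p '{"spec":{"grpcPodConfig":{"priorityClassName":"system-cluster-critical","tolerations":[{"key":"dedicated","operator":"Equal","value":"catalogs","effect":"NoSchedule"}]}}}'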
[ "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2", "oc apply -f sub.yaml", "oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV", "currentCSV: jaeger-operator.v1.8.2", "oc delete subscription jaeger -n openshift-operators", "subscription.operators.coreos.com \"jaeger\" deleted", "oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators", "clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF", "oc get events", "LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV 
copying disabled for operators/my-csv.v1.0.0", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide", "oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2", "- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c", "apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image 
\"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc edit operatorcondition <name>", "apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF", "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF", "cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF", "cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF", "kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]", "kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]", "apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23", "apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: 
Failed", "mkdir <operator_name>-index", "The base image is expected to contain /bin/opm (with a serve subcommand) and /bin/grpc_health_probe FROM registry.redhat.io/openshift4/ose-operator-registry:v4.9 Configure the entrypoint and command ENTRYPOINT [\"/bin/opm\"] CMD [\"serve\", \"/configs\"] Copy declarative config root into image at /configs ADD <operator_name>-index /configs Set DC-specific label for the location of the DC root directory in the image LABEL operators.operatorframework.io.index.configs.v1=/configs", ". ├── <operator_name>-index └── <operator_name>-index.Dockerfile", "opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <operator_name>-index/index.yaml 6", "opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <operator_name>-index/index.yaml 2", "--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1", "opm validate <operator_name>-index", "echo USD?", "0", "podman build . -f <operator_name>-index.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>", "podman login <registry>", "podman push <registry>/<namespace>/<catalog_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3", "podman login <registry>", "podman push <registry>/<namespace>/<index_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4", "opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 --pull-tool podman", "podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>", "oc get packagemanifests -n openshift-marketplace", "podman login <target_registry>", "podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.10", "Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051", "grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out", "{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }", "opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 4", "podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc image: <registry>/<namespace>/<index_image_name>:<tag> 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m", "oc 
apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "podman login <registry>:<port>", "{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }", "{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }", "{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }", "oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m", "oc extract secret/pull-secret -n openshift-config --confirm", "cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson", "oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson", "oc get sa -n <tenant_namespace> 1", "NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1", "oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "podman login registry.redhat.io", "podman login <target_registry>", "podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.10", "Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051", "grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out", "{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }", "opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 4", "podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: 
sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.10 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4", "opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 --pull-tool podman", "podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>", "oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml", "oc get packagemanifests -n openshift-marketplace", "grpcPodConfig: nodeSelector: custom_label: <label>", "grpcPodConfig: priorityClassName: <priority_class>", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical", "grpcPodConfig: tolerations: - key: \"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/operators/administrator-tasks
Appendix G. Swift response headers
Appendix G. Swift response headers The response from the server should include an X-Auth-Token value. The response might also contain an X-Storage-Url that provides the API_VERSION / ACCOUNT prefix that is specified in other requests throughout the API documentation. Table G.1. Response Headers Name Description Type X-Storage-Token The authorization token for the X-Auth-User specified in the request. String X-Storage-Url The URL and API_VERSION / ACCOUNT path for the user. String
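For reference, a token request against a Ceph Object Gateway that exposes the Swift authentication API might look like the following minimal sketch; the host name, port, user, and key are placeholders, and the authentication path depends on the gateway configuration:

# Hypothetical endpoint and credentials; adjust to your deployment
$ curl -i -H "X-Auth-User: testuser:swift" -H "X-Auth-Key: <swift_secret_key>" http://rgw.example.com:8080/auth/1.0

The response headers then include the X-Auth-Token value and, where provided, the X-Storage-Url and X-Storage-Token values listed in Table G.1.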
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/swift-response-headers_dev
Chapter 5. GDM
Chapter 5. GDM GDM is the GNOME Display Manager , which provides a graphical login environment. After the transition from GNOME 2 to GNOME 3, configuring GDM is only possible through systemd as it no longer supports other init systems. the gdm package The gdm package has replaced xorg-x11-xdm , which provided a legacy display login manager for the X Window System. As mentioned before, the gdm package provides the graphical login screen, shown shortly after boot up, log out, and when user-switching. GDM and logind GDM now uses logind for defining and tracking users. For more information, see Chapter 2, logind . System administrators can also set up automatic login manually in the GDM custom configuration file: /etc/gdm/custom.conf . custom.conf GDM configuration is now found in /etc/gdm/custom.conf . However for backwards compatibility, if /etc/gdm/gdm.conf is found it will be used instead of custom.conf . When upgrading, Red Hat recommends removing your old gdm.conf file and migrating any custom configuration to custom.conf . Getting More Information For more information on GDM , see Section 14.1, "What Is GDM?" . For information on configuring and managing user sessions, see Section 14.3, "User Sessions" . For information on customizing the login screen appearance, see Section 10.4, "Customizing the Login Screen" .
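As mentioned above, automatic login can be configured in /etc/gdm/custom.conf ; a minimal sketch follows, where the user name john is only an example and must be replaced with an existing account:

# /etc/gdm/custom.conf: the [daemon] section controls automatic login
[daemon]
AutomaticLoginEnable=True
AutomaticLogin=john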
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/gdm
Chapter 8. Application credentials
Chapter 8. Application credentials Use Application Credentials to avoid the practice of embedding user account credentials in configuration files. Instead, the user creates an Application Credential that receives delegated access to a single project and has its own distinct secret. The user can also limit the delegated privileges to a single role in that project. This allows you to adopt the principle of least privilege, where the authenticated service gains access only to the one project and role that it needs to function, rather than all projects and roles. You can use this methodology to consume an API without revealing your user credentials, and applications can authenticate to Keystone without requiring embedded user credentials. You can use Application Credentials to generate tokens and configure keystone_authtoken settings for applications. These use cases are described in the following sections. Note The Application Credential is dependent on the user account that created it, so it will terminate if that account is ever deleted, or loses access to the relevant role. 8.1. Using Application Credentials to generate tokens Application Credentials are available to users as a self-service function in the dashboard. This example demonstrates how a user can create an Application Credential and then use it to generate a token. Create a test project, and test user accounts: Create a project called AppCreds : Create a user called AppCredsUser : Grant AppCredsUser access to the member role for the AppCreds project: Log in to the dashboard as AppCredsUser and create an Application Credential: Overview Identity Application Credentials +Create Application Credential . Note Ensure that you download the clouds.yaml file contents, because you cannot access it again after you close the pop-up window titled Your Application Credential . Create a file named /home/stack/.config/openstack/clouds.yaml using the CLI and paste the contents of the clouds.yaml file. Note These values will be different for your deployment. Use the Application Credential to generate a token. You must not be sourced as any specific user when using the following command, and you must be in the same directory as your clouds.yaml file. Note If you receive an error similar to init () got an unexpected keyword argument 'application_credential_secret' , then you might still be sourced to the credentials. For a fresh environment, run sudo su - stack . 8.2. Integrating Application Credentials with applications Application Credentials can be used to authenticate applications to keystone. When you use Application Credentials, the keystone_authtoken settings use v3applicationcredential as the authentication type and contain the credentials that you receive during the credential creation process. Enter the following values: application_credential_secret : The Application Credential secret. application_credential_id : The Application Credential id. (Optional) application_credential_name : You might use this parameter if you use a named application credential, rather than an ID. For example: 8.3. Managing Application Credentials You can use the command line to create and delete Application Credentials. The create subcommand creates an application credential based on the currently sourced account. 
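In addition to creating and deleting credentials, you can review the credentials that already exist for the sourced account; a minimal sketch:

# List application credentials owned by the currently sourced user
$ openstack application credential list
# Show the details of a single credential by name or ID
$ openstack application credential show AppCredsUser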
For example, creating the credential when sourced as an admin user will grant the same roles to the Application Credential: Warning Using the --unrestricted parameter enables the application credential to create and delete other application credentials and trusts. This is potentially dangerous behavior and is disabled by default. You cannot use the --unrestricted parameter in combination with other access rules. By default, the resulting role membership includes all the roles assigned to the account that created the credentials. You can limit the role membership by delegating access only to a specific role: To delete an Application Credential: 8.4. Replacing Application Credentials Application credentials are bound to the user account that created them and become invalid if the user account is ever deleted, or if the user loses access to the delegated role. As a result, you should be prepared to generate a new application credential as needed. Replacing existing application credentials for configuration files Update the application credentials assigned to an application (using a configuration file): Create a new set of application credentials. Add the new credentials to the application configuration file, replacing the existing credentials. For more information, see Integrating Application Credentials with applications . Restart the application service to apply the change. Delete the old application credential, if appropriate. For more information about the command line options, see Managing Application Credentials . Replacing the existing application credentials in clouds.yaml When you replace an application credential used by clouds.yaml , you must create the replacement credentials using OpenStack user credentials. By default, you cannot use application credentials to create another set of application credentials. The openstack application credential create command creates an application credential based on the currently sourced account. Authenticate as the OpenStack user that originally created the authentication credentials that are about to expire. For example, if you used the procedure Using Application Credentials to generate tokens , you must log in again as AppCredsUser . Create an Application Credential called AppCred2 . This can be done using the OpenStack Dashboard, or the openstack CLI interface: Copy the id and secret parameters from the output of the command. The secret parameter value cannot be accessed again. Replace the application_credential_id and application_credential_secret parameter values in the USD{HOME}/.config/openstack/clouds.yaml file with the secret and id values that you copied. Verification Generate a token with clouds.yaml to confirm that the credentials are working as expected. You must not be sourced as any specific user when using the following command, and you must be in the same directory as your clouds.yaml file: Example output:
[ "openstack project create AppCreds", "openstack user create --project AppCreds --password-prompt AppCredsUser", "openstack role add --user AppCredsUser --project AppCreds member", "This is a clouds.yaml file, which can be used by OpenStack tools as a source of configuration on how to connect to a cloud. If this is your only cloud, just put this file in ~/.config/openstack/clouds.yaml and tools like python-openstackclient will just work with no further config. (You will need to add your password to the auth section) If you have more than one cloud account, add the cloud entry to the clouds section of your existing file and you can refer to them by name with OS_CLOUD=openstack or --os-cloud=openstack clouds: openstack: auth: auth_url: http://10.0.0.10:5000/v3 application_credential_id: \"6d141f23732b498e99db8186136c611b\" application_credential_secret: \"<example secret value>\" region_name: \"regionOne\" interface: \"public\" identity_api_version: 3 auth_type: \"v3applicationcredential\"", "[stack@undercloud-0 openstack]USD openstack --os-cloud=openstack token issue +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "[keystone_authtoken] auth_url = http://10.0.0.10:5000/v3 auth_type = v3applicationcredential application_credential_id = \"6cb5fa6a13184e6fab65ba2108adf50c\" application_credential_secret = \"<example password>\"", "openstack application credential create --description \"App Creds - All roles\" AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - All roles | | expires_at | None | | id | fc17651c2c114fd6813f86fdbb430053 | | name | AppCredsUser | | project_id | 507663d0cfe244f8bc0694e6ed54d886 | | roles | member reader admin | | secret | fVnqa6I_XeRDDkmQnB5lx361W1jHtOtw3ci_mf_tOID-09MrPAzkU7mv-by8ykEhEa1QLPFJLNV4cS2Roo9lOg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+", "openstack application credential create --description \"App Creds - Member\" --role member AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - Member | | expires_at | None | | id | e21e7f4b578240f79814085a169c9a44 | | name | 
AppCredsUser | | project_id | 507663d0cfe244f8bc0694e6ed54d886 | | roles | member | | secret | XCLVUTYIreFhpMqLVB5XXovs_z9JdoZWpdwrkaG1qi5GQcmBMUFG7cN2htzMlFe5T5mdPsnf5JMNbu0Ih-4aCg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+", "openstack application credential delete AppCredsUser", "openstack application credential create --description \"App Creds 2 - Member\" --role member AppCred2", "[stack@undercloud-0 openstack]USD openstack --os-cloud=openstack token issue", "+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_openstack_identity_resources/assembly_application-credentials
Chapter 18. Red Hat Software Collections
Chapter 18. Red Hat Software Collections Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set allows for optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can pick and choose which package version they want to run at any time. Red Hat Developer Toolset is now a part of Red Hat Software Collections. It is included as a separate Software Collection. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, Eclipse development platform, and other development, debugging, and performance monitoring tools. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.
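For example, a Software Collection is enabled for a single command or for a new shell session with the scl utility. The following is a minimal sketch; the collection name rh-python36 is only an illustration and must match a collection that is actually installed:

# Run one command inside the collection's environment
$ scl enable rh-python36 'python --version'
# Start a new shell with the collection enabled
$ scl enable rh-python36 bash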
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/ch-red_hat_software_collections
19.2. Mounting a File System
19.2. Mounting a File System To attach a certain file system, use the mount command in the following form: The device can be identified by: a full path to a block device : for example, /dev/sda3 a universally unique identifier ( UUID ): for example, UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb a volume label : for example, LABEL=home Note that while a file system is mounted, the original content of the directory is not accessible. Important Linux does not prevent a user from mounting a file system to a directory with a file system already attached to it. To determine whether a particular directory serves as a mount point, run the findmnt utility with the directory as its argument and verify the exit code: If no file system is attached to the directory, the given command returns 1 . When you run the mount command without all required information, that is without the device name, the target directory, or the file system type, the mount reads the contents of the /etc/fstab file to check if the given file system is listed. The /etc/fstab file contains a list of device names and the directories in which the selected file systems are set to be mounted as well as the file system type and mount options. Therefore, when mounting a file system that is specified in /etc/fstab , you can choose one of the following options: Note that permissions are required to mount the file systems unless the command is run as root (see Section 19.2.2, "Specifying the Mount Options" ). Note To determine the UUID and-if the device uses it-the label of a particular device, use the blkid command in the following form: For example, to display information about /dev/sda3 : 19.2.1. Specifying the File System Type In most cases, mount detects the file system automatically. However, there are certain file systems, such as NFS (Network File System) or CIFS (Common Internet File System), that are not recognized, and need to be specified manually. To specify the file system type, use the mount command in the following form: Table 19.1, "Common File System Types" provides a list of common file system types that can be used with the mount command. For a complete list of all available file system types, see the section called "Manual Page Documentation" . Table 19.1. Common File System Types Type Description ext2 The ext2 file system. ext3 The ext3 file system. ext4 The ext4 file system. btrfs The btrfs file system. xfs The xfs file system. iso9660 The ISO 9660 file system. It is commonly used by optical media, typically CDs. nfs The NFS file system. It is commonly used to access files over the network. nfs4 The NFSv4 file system. It is commonly used to access files over the network. udf The UDF file system. It is commonly used by optical media, typically DVDs. vfat The FAT file system. It is commonly used on machines that are running the Windows operating system, and on certain digital media such as USB flash drives or floppy disks. See Example 19.2, "Mounting a USB Flash Drive" for an example usage. Example 19.2. Mounting a USB Flash Drive Older USB flash drives often use the FAT file system. Assuming that such drive uses the /dev/sdc1 device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the following at a shell prompt as root : 19.2.2. Specifying the Mount Options To specify additional mount options, use the command in the following form: When supplying multiple options, do not insert a space after a comma, or mount interprets incorrectly the values following spaces as additional parameters. 
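The command forms referred to above are reproduced here as a minimal sketch; device, directory, type, and options are placeholders:

# Basic form; the file system type is detected automatically
$ mount device directory
# Mount using the information in /etc/fstab (either argument is sufficient)
$ mount directory
$ mount device
# Check whether a directory is a mount point (exit code 0 means it is)
$ findmnt directory ; echo $?
# Determine the UUID and, if set, the label of a device
$ blkid device
# Specify the file system type explicitly
$ mount -t type device directory
# Supply additional mount options as a comma-separated list
$ mount -o options device directory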
Table 19.2, "Common Mount Options" provides a list of common mount options. For a complete list of all available options, consult the relevant manual page as referred to in the section called "Manual Page Documentation" . Table 19.2. Common Mount Options Option Description async Allows the asynchronous input/output operations on the file system. auto Allows the file system to be mounted automatically using the mount -a command. defaults Provides an alias for async,auto,dev,exec,nouser,rw,suid . exec Allows the execution of binary files on the particular file system. loop Mounts an image as a loop device. noauto Default behavior disallows the automatic mount of the file system using the mount -a command. noexec Disallows the execution of binary files on the particular file system. nouser Disallows an ordinary user (that is, other than root ) to mount and unmount the file system. remount Remounts the file system in case it is already mounted. ro Mounts the file system for reading only. rw Mounts the file system for both reading and writing. user Allows an ordinary user (that is, other than root ) to mount and unmount the file system. See Example 19.3, "Mounting an ISO Image" for an example usage. Example 19.3. Mounting an ISO Image An ISO image (or a disk image in general) can be mounted by using the loop device. Assuming that the ISO image of the Fedora 14 installation disc is present in the current working directory and that the /media/cdrom/ directory exists, mount the image to this directory by running the following command: Note that ISO 9660 is by design a read-only file system. 19.2.3. Sharing Mounts Occasionally, certain system administration tasks require access to the same file system from more than one place in the directory tree (for example, when preparing a chroot environment). This is possible, and Linux allows you to mount the same file system to as many directories as necessary. Additionally, the mount command implements the --bind option that provides a means for duplicating certain mounts. Its usage is as follows: Although this command allows a user to access the file system from both places, it does not apply on the file systems that are mounted within the original directory. To include these mounts as well, use the following command: Additionally, to provide as much flexibility as possible, Red Hat Enterprise Linux 7 implements the functionality known as shared subtrees . This feature allows the use of the following four mount types: Shared Mount A shared mount allows the creation of an exact replica of a given mount point. When a mount point is marked as a shared mount, any mount within the original mount point is reflected in it, and vice versa. To change the type of a mount point to a shared mount, type the following at a shell prompt: Alternatively, to change the mount type for the selected mount point and all mount points under it: See Example 19.4, "Creating a Shared Mount Point" for an example usage. Example 19.4. Creating a Shared Mount Point There are two places where other file systems are commonly mounted: the /media/ directory for removable media, and the /mnt/ directory for temporarily mounted file systems. By using a shared mount, you can make these two directories share the same content. To do so, as root , mark the /media/ directory as shared: Create its duplicate in /mnt/ by using the following command: It is now possible to verify that a mount within /media/ also appears in /mnt/ . 
For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands: Similarly, it is possible to verify that any file system mounted in the /mnt/ directory is reflected in /media/ . For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type: Slave Mount A slave mount allows the creation of a limited duplicate of a given mount point. When a mount point is marked as a slave mount, any mount within the original mount point is reflected in it, but no mount within a slave mount is reflected in its original. To change the type of a mount point to a slave mount, type the following at a shell prompt: Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it by typing: See Example 19.5, "Creating a Slave Mount Point" for an example usage. Example 19.5. Creating a Slave Mount Point This example shows how to get the content of the /media/ directory to appear in /mnt/ as well, but without any mounts in the /mnt/ directory being reflected in /media/ . As root , first mark the /media/ directory as shared: Then create its duplicate in /mnt/ , but mark it as "slave": Now verify that a mount within /media/ also appears in /mnt/ . For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands: Also verify that file systems mounted in the /mnt/ directory are not reflected in /media/ . For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type: Private Mount A private mount is the default type of mount, and unlike a shared or slave mount, it does not receive or forward any propagation events. To explicitly mark a mount point as a private mount, type the following at a shell prompt: Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it: See Example 19.6, "Creating a Private Mount Point" for an example usage. Example 19.6. Creating a Private Mount Point Taking into account the scenario in Example 19.4, "Creating a Shared Mount Point" , assume that a shared mount point has been previously created by using the following commands as root : To mark the /mnt/ directory as private, type: It is now possible to verify that none of the mounts within /media/ appears in /mnt/ . For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands: It is also possible to verify that file systems mounted in the /mnt/ directory are not reflected in /media/ . For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type: Unbindable Mount To prevent a given mount point from being duplicated in any way, use an unbindable mount. To change the type of a mount point to an unbindable mount, type the following at a shell prompt: Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it: See Example 19.7, "Creating an Unbindable Mount Point" for an example usage. Example 19.7. Creating an Unbindable Mount Point To prevent the /media/ directory from being shared, as root : This way, any subsequent attempt to make a duplicate of this mount fails with an error: 19.2.4.
Moving a Mount Point To change the directory in which a file system is mounted, use the following command: See Example 19.8, "Moving an Existing NFS Mount Point" for an example usage. Example 19.8. Moving an Existing NFS Mount Point An NFS storage contains user directories and is already mounted in /mnt/userdirs/ . As root , move this mount point to /home by using the following command: To verify the mount point has been moved, list the content of both directories: 19.2.5. Setting Read-only Permissions for root Sometimes, you need to mount the root file system with read-only permissions. Example use cases include enhancing security or ensuring data integrity after an unexpected system power-off. 19.2.5.1. Configuring root to Mount with Read-only Permissions on Boot In the /etc/sysconfig/readonly-root file, change READONLY to yes : Change defaults to ro in the root entry ( / ) in the /etc/fstab file: Add ro to the GRUB_CMDLINE_LINUX directive in the /etc/default/grub file and ensure that it does not contain rw : Recreate the GRUB2 configuration file: If you need to add files and directories to be mounted with write permissions in the tmpfs file system, create a text file in the /etc/rwtab.d/ directory and put the configuration there. For example, to mount /etc/example/file with write permissions, add this line to the /etc/rwtab.d/example file: Important Changes made to files and directories in tmpfs do not persist across boots. See Section 19.2.5.3, "Files and Directories That Retain Write Permissions" for more information on this step. Reboot the system. 19.2.5.2. Remounting root Instantly If root ( / ) was mounted with read-only permissions on system boot, you can remount it with write permissions: This can be particularly useful when / is incorrectly mounted with read-only permissions. To remount / with read-only permissions again, run: Note This command mounts the whole / with read-only permissions. A better approach is to retain write permissions for certain files and directories by copying them into RAM, as described in Section 19.2.5.1, "Configuring root to Mount with Read-only Permissions on Boot" . 19.2.5.3. Files and Directories That Retain Write Permissions For the system to function properly, some files and directories need to retain write permissions. With root in read-only mode, they are mounted in RAM in the tmpfs temporary file system. The default set of such files and directories is read from the /etc/rwtab file, which contains: Entries in the /etc/rwtab file follow this format: A file or directory can be copied to tmpfs in the following three ways: empty path : An empty path is copied to tmpfs . Example: empty /tmp dirs path : A directory tree is copied to tmpfs , empty (the directory structure is recreated, but its contents are not copied). Example: dirs /var/run files path : A file or a directory tree is copied to tmpfs intact. Example: files /etc/resolv.conf The same format applies when adding custom paths to /etc/rwtab.d/ .
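As a minimal sketch of a custom entry, assume a hypothetical application that needs write access to /var/log/myapp and to /etc/myapp/state.conf while root is read-only; both paths and the file name myapp are placeholders, not values required by the system. A file such as /etc/rwtab.d/myapp could then contain:

~]# cat /etc/rwtab.d/myapp
dirs /var/log/myapp
files /etc/myapp/state.conf

The entries use the same empty, dirs, and files keywords that are described above for the default /etc/rwtab file.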
[ "mount [ option ... ] device directory", "findmnt directory ; echo USD?", "mount [ option ... ] directory mount [ option ... ] device", "blkid device", "blkid /dev/sda3 /dev/sda3: LABEL=\"home\" UUID=\"34795a28-ca6d-4fd8-a347-73671d0c19cb\" TYPE=\"ext3\"", "mount -t type device directory", "~]# mount -t vfat /dev/sdc1 /media/flashdisk", "mount -o options device directory", "mount -o ro,loop Fedora-14-x86_64-Live-Desktop.iso /media/cdrom", "mount --bind old_directory new_directory", "mount --rbind old_directory new_directory", "mount --make-shared mount_point", "mount --make-rshared mount_point", "mount --bind /media /media # mount --make-shared /media", "mount --bind /media /mnt", "mount /dev/cdrom /media/cdrom # ls /media/cdrom EFI GPL isolinux LiveOS # ls /mnt/cdrom EFI GPL isolinux LiveOS", "# mount /dev/sdc1 /mnt/flashdisk # ls /media/flashdisk en-US publican.cfg # ls /mnt/flashdisk en-US publican.cfg", "mount --make-slave mount_point", "mount --make-rslave mount_point", "~]# mount --bind /media /media ~]# mount --make-shared /media", "~]# mount --bind /media /mnt ~]# mount --make-slave /mnt", "~]# mount /dev/cdrom /media/cdrom ~]# ls /media/cdrom EFI GPL isolinux LiveOS ~]# ls /mnt/cdrom EFI GPL isolinux LiveOS", "~]# mount /dev/sdc1 /mnt/flashdisk ~]# ls /media/flashdisk ~]# ls /mnt/flashdisk en-US publican.cfg", "mount --make-private mount_point", "mount --make-rprivate mount_point", "~]# mount --bind /media /media ~]# mount --make-shared /media ~]# mount --bind /media /mnt", "~]# mount --make-private /mnt", "~]# mount /dev/cdrom /media/cdrom ~]# ls /media/cdrom EFI GPL isolinux LiveOS ~]# ls /mnt/cdrom ~]#", "~]# mount /dev/sdc1 /mnt/flashdisk ~]# ls /media/flashdisk ~]# ls /mnt/flashdisk en-US publican.cfg", "mount --make-unbindable mount_point", "mount --make-runbindable mount_point", "mount --bind /media /media # mount --make-unbindable /media", "mount --bind /media /mnt mount: wrong fs type, bad option, bad superblock on /media, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so", "mount --move old_directory new_directory", "mount --move /mnt/userdirs /home", "ls /mnt/userdirs # ls /home jill joe", "Set to 'yes' to mount the file systems as read-only. READONLY=yes [output truncated]", "/dev/mapper/luks-c376919e... / ext4 ro ,x-systemd.device-timeout=0 1 1", "GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet ro \"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "files /etc/example/file", "mount -o remount,rw /", "mount -o remount,ro /", "dirs /var/cache/man dirs /var/gdm [output truncated] empty /tmp empty /var/cache/foomatic [output truncated] files /etc/adjtime files /etc/ntp.conf [output truncated]", "how the file or directory is copied to tmpfs path to the file or directory" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/sect-using_the_mount_command-mounting
Chapter 9. Server Message Block (SMB)
Chapter 9. Server Message Block (SMB) The Server Message Block (SMB) protocol implements an application-layer network protocol used to access resources on a server, such as file shares and shared printers. On Microsoft Windows, SMB is implemented by default. If you run Red Hat Enterprise Linux, use Samba to provide SMB shares and the cifs-utils utility to mount SMB shares from a remote server. Note In the context of SMB, you sometimes read about the Common Internet File System (CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocol are supported and the kernel module and utilities involved in mounting SMB and CIFS shares both use the name cifs . 9.1. Providing SMB Shares See the Samba section in the Red Hat System Administrator's Guide .
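As an illustrative sketch only, a share exported by a remote SMB server can typically be mounted with the cifs file system type provided by the cifs-utils package; the server name, share name, user name, and mount point below are assumed placeholders:

~]# mount -t cifs -o username=user //server.example.com/share /mnt/smb

The options that are required depend on the server configuration; see the mount.cifs(8) manual page for the full list of supported options.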
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-server_message_block-smb
Chapter 11. AWS 2 Simple Queue Service FIFO sink
Chapter 11. AWS 2 Simple Queue Service FIFO sink Send message to an AWS SQS FIFO Queue 11.1. Configuration Options The following table summarizes the configuration options available for the aws-sqs-fifo-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string queueNameOrArn * Queue Name The SQS Queue name or ARN string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string autoCreateQueue Autocreate Queue Setting the autocreation of the SQS queue. boolean false contentBasedDeduplication Content-Based Deduplication Use content-based deduplication (should be enabled in the SQS FIFO queue first) boolean false Note Fields marked with an asterisk (*) are mandatory. 11.2. Dependencies At runtime, the aws-sqs-fifo-sink Kamelet relies upon the presence of the following dependencies: camel:aws2-sqs camel:core camel:kamelet 11.3. Usage This section describes how you can use the aws-sqs-fifo-sink . 11.3.1. Knative Sink You can use the aws-sqs-fifo-sink Kamelet as a Knative sink by binding it to a Knative object. aws-sqs-fifo-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 11.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 11.3.1.2. Procedure for using the cluster CLI Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-fifo-sink-binding.yaml 11.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 11.3.2. Kafka Sink You can use the aws-sqs-fifo-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-sqs-fifo-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: "The Access Key" queueNameOrArn: "The Queue Name" region: "eu-west-1" secretKey: "The Secret Key" 11.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 11.3.2.2. Procedure for using the cluster CLI Save the aws-sqs-fifo-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-sqs-fifo-sink-binding.yaml 11.3.2.3.
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 11.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-sqs-fifo-sink.kamelet.yaml
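As an optional extension of the binding commands shown above, and as a sketch only, the contentBasedDeduplication property from the configuration table can be passed in the same way as the mandatory properties; all other values remain the placeholders used earlier in this chapter:

kamel bind channel:mychannel aws-sqs-fifo-sink -p "sink.accessKey=The Access Key" -p "sink.queueNameOrArn=The Queue Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" -p "sink.contentBasedDeduplication=true"

As noted in the configuration table, content-based deduplication must also be enabled on the SQS FIFO queue itself for this setting to take effect.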
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-sqs-fifo-sink-binding.yaml", "kamel bind channel:mychannel aws-sqs-fifo-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-sqs-fifo-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-sqs-fifo-sink properties: accessKey: \"The Access Key\" queueNameOrArn: \"The Queue Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-sqs-fifo-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-sqs-fifo-sink -p \"sink.accessKey=The Access Key\" -p \"sink.queueNameOrArn=The Queue Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/aws-sqs-fifo-sink
Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform
Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform 1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis You can use the Red Hat OpenShift distributed tracing platform (Tempo) in combination with the Red Hat build of OpenTelemetry . Note Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat's support. 1.2. Release notes for Red Hat OpenShift distributed tracing platform 3.5 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.2.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is provided through the Tempo Operator 0.15.3 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is based on the open source Grafana Tempo 2.7.1. 1.2.1.1. New features and enhancements This update introduces the following enhancements: With this update, you can configure the Tempo backend services to report the internal tracing data by using the OpenTelemetry Protocol (OTLP). With this update, the traces.span.metrics namespace becomes the default metrics namespace on which the Jaeger query retrieves the Prometheus metrics. The purpose of this change is to provide compatibility with the OpenTelemetry Collector version 0.109.0 and later where this namespace is the default. Customers who are still using an earlier OpenTelemetry Collector version can configure this namespace by adding the following field and value: spec.template.queryFrontend.jaegerQuery.monitorTab.redMetricsNamespace: "" . 1.2.1.2. Bug fixes This update introduces the following bug fix: Before this update, the Tempo Operator failed when the TempoStack custom resource had the spec.storage.tls.enabled field set to true and used an Amazon S3 object store with the Security Token Service (STS) authentication. With this update, such a TempoStack custom resource configuration does not cause the Tempo Operator to fail. 1.2.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Warning Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. 
For more information, see the following resources: Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (distributed tracing platform (Tempo) documentation) Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase solution) Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is based on the open source Jaeger release 1.65.0. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.65.0 . Important Jaeger does not use FIPS validated cryptographic modules. 1.2.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.2.2.2. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.3. Release notes for Red Hat OpenShift distributed tracing platform 3.4 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.3.1. CVEs This release fixes the following CVEs: CVE-2024-21536 CVE-2024-43796 CVE-2024-43799 CVE-2024-43800 CVE-2024-45296 CVE-2024-45590 CVE-2024-45811 CVE-2024-45812 CVE-2024-47068 Cross-site Scripting (XSS) in serialize-javascript 1.3.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is provided through the Tempo Operator 0.14.1 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is based on the open source Grafana Tempo 2.6.1. 1.3.2.1. New features and enhancements This update introduces the following enhancements: The monitor tab in the Jaeger UI for TempoStack instances uses a new default metrics namespace: traces.span.metrics . Before this update, the Jaeger UI used an empty namespace. The new traces.span.metrics namespace default is also used by the OpenTelemetry Collector 0.113.0. You can set the empty value for the metrics namespace by using the following field in the TempoStack custom resource: spec.template.queryFrontend.monitorTab.redMetricsNamespace: "" . Warning This is a breaking change. If you are using both the Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry, you must upgrade to the Red Hat build of OpenTelemetry 3.4 before upgrading to the Red Hat OpenShift distributed tracing platform (Tempo) 3.4. New and optional spec.timeout field in the TempoStack and TempoMonolithic custom resource definitions for configuring one timeout value for all components. The timeout value is set to 30 seconds, 30s , by default. Warning This is a breaking change. 1.3.2.2. Bug fixes This update introduces the following bug fixes: Before this update, the distributed tracing platform (Tempo) failed on the IBM Z ( s390x ) architecture. With this update, the distributed tracing platform (Tempo) is available for the IBM Z ( s390x ) architecture.
( TRACING-3545 ) Before this update, the distributed tracing platform (Tempo) failed on clusters with non-private networks. With this update, you can deploy the distributed tracing platform (Tempo) on clusters with non-private networks. ( TRACING-4507 ) Before this update, the Jaeger UI might fail due to reaching a trace quantity limit, resulting in the 504 Gateway Timeout error in the tempo-query logs. After this update, the issue is resolved by introducing two optional fields in the tempostack or tempomonolithic custom resource: New spec.timeout field for configuring the timeout. New spec.template.queryFrontend.jaegerQuery.findTracesConcurrentRequests field for improving the query performance of the Jaeger UI. Tip One querier can handle up to 20 concurrent queries by default. Increasing the number of concurrent queries further is achieved by scaling up the querier instances. 1.3.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.62.0 . Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is based on the open source Jaeger release 1.62.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.3.3.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.3.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.4, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see Migrating in the Red Hat build of OpenTelemetry documentation, Installing in the Red Hat build of OpenTelemetry documentation, and Installing in the distributed tracing platform (Tempo) documentation. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.3.3.3. Bug fixes This update introduces the following bug fix: Before this update, the Jaeger UI could fail with the 502 - Bad Gateway Timeout error. After this update, you can configure timeout in ingress annotations. ( TRACING-4238 ) 1.3.3.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.4. 
Release notes for Red Hat OpenShift distributed tracing platform 3.3.1 The Red Hat OpenShift distributed tracing platform 3.3.1 is a maintenance release with no changes because the Red Hat OpenShift distributed tracing platform is bundled with the Red Hat build of OpenTelemetry that is released with a bug fix. This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.4.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3.1 is based on the open source Grafana Tempo 2.5.0. 1.4.1.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.4.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.4.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.4.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.4.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.5. Release notes for Red Hat OpenShift distributed tracing platform 3.3 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.5.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3 is based on the open source Grafana Tempo 2.5.0. 1.5.1.1. New features and enhancements This update introduces the following enhancements: Support for securing the Jaeger UI and Jaeger APIs with the OpenShift OAuth Proxy. ( TRACING-4108 ) Support for using the service serving certificates, which are generated by OpenShift Container Platform, on ingestion APIs when multitenancy is disabled. ( TRACING-3954 ) Support for ingesting by using the OTLP/HTTP protocol when multitenancy is enabled. ( TRACING-4171 ) Support for the AWS S3 Secure Token authentication. 
( TRACING-4176 ) Support for automatically reloading certificates. ( TRACING-4185 ) Support for configuring the duration for which service names are available for querying. ( TRACING-4214 ) 1.5.1.2. Bug fixes This update introduces the following bug fixes: Before this update, storage certificate names did not support dots. With this update, storage certificate name can contain dots. ( TRACING-4348 ) Before this update, some users had to select a certificate when accessing the gateway route. With this update, there is no prompt to select a certificate. ( TRACING-4431 ) Before this update, the gateway component was not scalable. With this update, the gateway component is scalable. ( TRACING-4497 ) Before this update the Jaeger UI might fail with the 504 Gateway Time-out error when accessed via a route. With this update, users can specify route annotations for increasing timeout, such as haproxy.router.openshift.io/timeout: 3m , when querying large data sets. ( TRACING-4511 ) 1.5.1.3. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.5.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.5.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.5.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.5.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.6. Release notes for Red Hat OpenShift distributed tracing platform 3.2.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.6.1. CVEs This release fixes the following CVEs: CVE-2023-2953 CVE-2024-28182 1.6.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.6.2.1. Bug fixes This update introduces the following bug fix: Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. 
( TRACING-4434 ) 1.6.2.2. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.6.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.6.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.7. Release notes for Red Hat OpenShift distributed tracing platform 3.2.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.7.1. CVEs This release fixes CVE-2024-25062 . 1.7.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.7.2.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.7.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.7.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.8. Release notes for Red Hat OpenShift distributed tracing platform 3.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.8.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.8.1.1. Technology Preview features This update introduces the following Technology Preview feature: Support for the Tempo monolithic deployment. Important The Tempo monolithic deployment is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.8.1.2. New features and enhancements This update introduces the following enhancements: Red Hat OpenShift distributed tracing platform (Tempo) 3.2 is based on the open source Grafana Tempo 2.4.1. Allowing the overriding of resources per component. 1.8.1.3. Bug fixes This update introduces the following bug fixes: Before this update, the Jaeger UI only displayed services that sent traces in the last 15 minutes.
With this update, the availability of the service and operation names can be configured by using the following field: spec.template.queryFrontend.jaegerQuery.servicesQueryDuration . ( TRACING-3139 ) Before this update, the query-frontend pod might be stopped due to an out-of-memory (OOM) condition when searching a large trace. With this update, resource limits can be set to prevent this issue. ( TRACING-4009 ) 1.8.1.4. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.8.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.8.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.8.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.2, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Tempo Operator and the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. Users must adopt the OpenTelemetry and Tempo distributed tracing stack because it is the stack to be enhanced going forward. In the Red Hat OpenShift distributed tracing platform 3.2, the Jaeger agent is deprecated and planned to be removed in the following release. Red Hat will provide bug fixes and support for the Jaeger agent during the current release lifecycle, but the Jaeger agent will no longer receive enhancements and will be removed. The OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry is the preferred Operator for injecting the trace collector agent. 1.8.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is based on the open source Jaeger release 1.57.0. 1.8.2.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.9. Release notes for Red Hat OpenShift distributed tracing platform 3.1.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.9.1. CVEs This release fixes CVE-2023-39326 . 1.9.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.9.2.1. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI.
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.9.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.9.3.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.9.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.9.3.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.10. Release notes for Red Hat OpenShift distributed tracing platform 3.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.10.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.10.1.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Red Hat OpenShift distributed tracing platform (Tempo) 3.1 is based on the open source Grafana Tempo 2.3.1. Support for cluster-wide proxy environments. Support for TraceQL to Gateway component. 1.10.1.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, when a TempoStack instance was created with the monitorTab enabled in OpenShift Container Platform 4.15, the required tempo-redmetrics-cluster-monitoring-view ClusterRoleBinding was not created. This update resolves the issue by fixing the Operator RBAC for the monitor tab when the Operator is deployed in an arbitrary namespace. ( TRACING-3786 ) Before this update, when a TempoStack instance was created on an OpenShift Container Platform cluster with only an IPv6 networking stack, the compactor and ingestor pods ran in the CrashLoopBackOff state, resulting in multiple errors. This update provides support for IPv6 clusters.( TRACING-3226 ) 1.10.1.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. 
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.10.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.10.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.10.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.10.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is based on the open source Jaeger release 1.53.0. 1.10.2.4. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the connection target URL for the jaeger-agent container in the jaeger-query pod was overwritten with another namespace URL in OpenShift Container Platform 4.13. This was caused by a bug in the sidecar injection code in the jaeger-operator , causing nondeterministic jaeger-agent injection. With this update, the Operator prioritizes the Jaeger instance from the same namespace as the target deployment. ( TRACING-3722 ) 1.10.2.5. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11. Release notes for Red Hat OpenShift distributed tracing platform 3.0 1.11.1. Component versions in the Red Hat OpenShift distributed tracing platform 3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.51.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.3.0 1.11.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.11.2.1. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.0, Jaeger and support for Elasticsearch are deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.0, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage.
The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.11.2.2. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Support for the ARM architecture. Support for cluster-wide proxy environments. 1.11.2.3. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator used images other than those listed in relatedImages . This caused the ImagePullBackOff error in disconnected network environments when launching the jaeger pod because the oc adm catalog mirror command mirrors images specified in relatedImages . This update provides support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3546 ) 1.11.2.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11.3. Red Hat OpenShift distributed tracing platform (Tempo) 1.11.3.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Support for the ARM architecture. Support for span request count, duration, and error count (RED) metrics. The metrics can be visualized in the Jaeger console deployed as part of Tempo or in the web console in the Observe menu. 1.11.3.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, the TempoStack CRD was not accepting a custom CA certificate despite the option to choose CA certificates. This update fixes support for the custom TLS CA option for connecting to object storage. ( TRACING-3462 ) Before this update, when mirroring the Red Hat OpenShift distributed tracing platform Operator images to a mirror registry for use in a disconnected cluster, the related Operator images for tempo , tempo-gateway , opa-openshift , and tempo-query were not mirrored. This update fixes support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3523 ) Before this update, the query frontend service of the Red Hat OpenShift distributed tracing platform was using internal mTLS when gateway was not deployed. This caused endpoint failure errors. This update fixes mTLS when Gateway is not deployed. ( TRACING-3510 ) 1.11.3.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.12. Release notes for Red Hat OpenShift distributed tracing platform 2.9.2 1.12.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.2 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.12.2. CVEs This release fixes CVE-2023-46234 . 1.12.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.12.3.1. Known issues There are currently known issues: Apache Spark is not supported.
The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.12.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.12.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.13. Release notes for Red Hat OpenShift distributed tracing platform 2.9.1 1.13.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.13.2. 
CVEs This release fixes CVE-2023-44487 . 1.13.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.13.3.1. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.13.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.13.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.14. Release notes for Red Hat OpenShift distributed tracing platform 2.9 1.14.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.9 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.14.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.14.2.1. Bug fixes Before this update, connection was refused due to a missing gRPC port on the jaeger-query deployment. This issue resulted in transport: Error while dialing: dial tcp :16685: connect: connection refused error message. With this update, the Jaeger Query gRPC port (16685) is successfully exposed on the Jaeger Query service. ( TRACING-3322 ) Before this update, the wrong port was exposed for jaeger-production-query , resulting in refused connection. With this update, the issue is fixed by exposing the Jaeger Query gRPC port (16685) on the Jaeger Query deployment. ( TRACING-2968 ) Before this update, when deploying Service Mesh on single-node OpenShift clusters in disconnected environments, the Jaeger pod frequently went into the Pending state. With this update, the issue is fixed. ( TRACING-3312 ) Before this update, the Jaeger Operator pod restarted with the default memory value due to the reason: OOMKilled error message. With this update, this issue is fixed by removing the resource limits. ( TRACING-3173 ) 1.14.2.2. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.14.3. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.14.3.1. New features and enhancements This release introduces the following enhancements for the distributed tracing platform (Tempo): Support the operator maturity Level IV, Deep Insights, which enables upgrading, monitoring, and alerting of the TempoStack instances and the Tempo Operator. Add Ingress and Route configuration for the Gateway. Support the managed and unmanaged states in the TempoStack custom resource. Expose the following additional ingestion protocols in the Distributor service: Jaeger Thrift binary, Jaeger Thrift compact, Jaeger gRPC, and Zipkin. When the Gateway is enabled, only the OpenTelemetry protocol (OTLP) gRPC is enabled. Expose the Jaeger Query gRPC endpoint on the Query Frontend service. Support multitenancy without Gateway authentication and authorization. 1.14.3.2. Bug fixes Before this update, the Tempo Operator was not compatible with disconnected environments. With this update, the Tempo Operator supports disconnected environments. ( TRACING-3145 ) Before this update, the Tempo Operator with TLS failed to start on OpenShift Container Platform. With this update, the mTLS communication is enabled between Tempo components, the Operand starts successfully, and the Jaeger UI is accessible. 
( TRACING-3091 ) Before this update, the resource limits from the Tempo Operator caused error messages such as reason: OOMKilled . With this update, the resource limits for the Tempo Operator are removed to avoid such errors. ( TRACING-3204 ) 1.14.3.3. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.15. Release notes for Red Hat OpenShift distributed tracing platform 2.8 1.15.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.8 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.42 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 0.1.0 1.15.2. Technology Preview features This release introduces support for the Red Hat OpenShift distributed tracing platform (Tempo) as a Technology Preview feature for Red Hat OpenShift distributed tracing platform. Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The feature uses version 0.1.0 of the Red Hat OpenShift distributed tracing platform (Tempo) and version 2.0.1 of the upstream distributed tracing platform (Tempo) components. You can use the distributed tracing platform (Tempo) to replace Jaeger so that you can use S3-compatible storage instead of ElasticSearch. Most users who use the distributed tracing platform (Tempo) instead of Jaeger will not notice any difference in functionality because the distributed tracing platform (Tempo) supports the same ingestion and query protocols as Jaeger and uses the same user interface. If you enable this Technology Preview feature, note the following limitations of the current implementation: The distributed tracing platform (Tempo) currently does not support disconnected installations. ( TRACING-3145 ) When you use the Jaeger user interface (UI) with the distributed tracing platform (Tempo), the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. ( TRACING-3139 ) Expanded support for the Tempo Operator is planned for future releases of the Red Hat OpenShift distributed tracing platform. Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters. For more information about the Tempo Operator, see the Tempo community documentation . 1.15.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.16. Release notes for Red Hat OpenShift distributed tracing platform 2.7 1.16.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.7 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.39 1.16.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.17. Release notes for Red Hat OpenShift distributed tracing platform 2.6 1.17.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.6 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.38 1.17.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.18. Release notes for Red Hat OpenShift distributed tracing platform 2.5 1.18.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.36 1.18.2. New features and enhancements This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. The Operator now automatically enables the OTLP ports: Port 4317 for the OTLP gRPC protocol. Port 4318 for the OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.18.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.19. Release notes for Red Hat OpenShift distributed tracing platform 2.4 1.19.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.34.1 1.19.2. New features and enhancements This release adds support for auto-provisioning certificates using the OpenShift Elasticsearch Operator. Self-provisioning by using the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to call the OpenShift Elasticsearch Operator during installation. + Important When upgrading to the Red Hat OpenShift distributed tracing platform 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.19.3. Technology Preview features Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform (Jaeger) to use the certificate is a Technology Preview for this release. 1.19.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.20. Release notes for Red Hat OpenShift distributed tracing platform 2.3 1.20.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.2 1.20.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.1 1.20.3. New features and enhancements With this release, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is now installed to the openshift-distributed-tracing namespace by default. Before this update, the default installation had been in the openshift-operators namespace. 1.20.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.21. Release notes for Red Hat OpenShift distributed tracing platform 2.2 1.21.1. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.21.2. Bug fixes This release of the Red Hat OpenShift distributed tracing platform addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.22. Release notes for Red Hat OpenShift distributed tracing platform 2.1 1.22.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.29.1 1.22.2. Technology Preview features This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.22.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.23. Release notes for Red Hat OpenShift distributed tracing platform 2.0 1.23.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.28.0 1.23.2. New features and enhancements This release introduces the following new features and enhancements: Rebrands Red Hat OpenShift Jaeger as the Red Hat OpenShift distributed tracing platform. Updates Red Hat OpenShift distributed tracing platform (Jaeger) Operator to Jaeger 1.28. Going forward, the Red Hat OpenShift distributed tracing platform will only support the stable Operator channel. Channels for individual releases are no longer supported. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OperatorHub. Includes rolling updates to the documentation to support the name change and new features. 1.23.3. Technology Preview features This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat OpenShift distributed tracing platform. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.23.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.24. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.25. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
[ "oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1", "data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false", "oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e", "oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1", "data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false", "oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e", "oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1", "data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false", "oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "spec: mode: deployment config: | 
exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"" ]
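The ImageSetConfiguration shown in the workarounds above is the kind of input consumed by the oc-mirror OpenShift CLI plugin. As a minimal, hedged sketch of how such a configuration is typically applied against a disconnected mirror registry (the registry host name and port are placeholders, and the exact invocation can vary by OpenShift CLI version):

$ oc mirror --config=./imageset-config.yaml docker://mirror.registry.example.com:5000

After mirroring completes, the manifests that the plugin generates, typically an ImageContentSourcePolicy and a CatalogSource, can then be applied to the disconnected cluster so that the Tempo Operator and its operand images resolve from the mirror registry.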
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/distributed_tracing/distr-tracing-rn
Chapter 2. Configuring an IBM Cloud account
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud account. Important IBM Power Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. Prerequisites You have an IBM Cloud account with a subscription. You cannot install OpenShift Container Platform on a free or on a trial IBM Cloud account. 2.2. Quotas and limits on IBM Power Virtual Server The OpenShift Container Platform cluster uses several IBM Cloud and IBM Power Virtual Server components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud VPC account. For a comprehensive list of the default IBM Cloud VPC quotas and service limits, see the IBM Cloud documentation for Quotas and service limits . Virtual Private Cloud Each OpenShift Container Platform cluster creates its own Virtual Private Cloud (VPC). The default quota of VPCs per region is 10. If you have 10 VPCs created, you will need to increase your quota before attempting an installation. Application load balancer By default, each cluster creates two application load balancers (ALBs): Internal load balancer for the control plane API server External load balancer for the control plane API server You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Power Virtual Server. Cloud connections There is a limit of two cloud connections per IBM Power Virtual Server instance. It is recommended that you have only one cloud connection in your IBM Power Virtual Server instance to serve your cluster. Dynamic Host Configuration Protocol Service There is a limit of one Dynamic Host Configuration Protocol (DHCP) service per IBM Power Virtual Server instance. Networking Due to networking limitations, there is a restriction of one OpenShift cluster installed through IPI per zone per account. This is not configurable. Virtual Server Instances By default, a cluster creates server instances with the following resources : 0.5 CPUs 32 GB RAM System Type: s922 Processor Type: uncapped , shared Storage Tier: Tier-3 The following nodes are created: One bootstrap machine, which is removed after the installation is complete Three control plane nodes Three compute nodes For more information, see Creating a Power Systems Virtual Server in the IBM Cloud documentation. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud DNS Services (DNS Services). 
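Before configuring DNS resolution, it can help to confirm how close your account already is to the VPC and application load balancer quotas described above. The following is a minimal sketch using the IBM Cloud CLI; it assumes the vpc-infrastructure (is) plugin is installed, and us-south is only an example region:

$ ibmcloud login
$ ibmcloud target -r us-south
$ ibmcloud is vpcs
$ ibmcloud is load-balancers

Comparing the number of VPCs returned against the per-region quota of 10, and the number of application load balancers against the quota of 50, indicates whether you need to request a quota increase before attempting an installation.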
2.4. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud CLI . You have an existing domain and registrar. For more information, see the IBM documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Log in to IBM Cloud by using the CLI: USD ibmcloud login Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard 1 1 At a minimum, a Standard plan is required for CIS to manage the cluster subdomain and its DNS records. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_CRN> 1 1 The instance CRN (Cloud Resource Name). For example: ibmcloud cis instance-set crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460d8c2e4bbfbd893c2c:c09233ac-48a5-4ccb-a051-d1cfb3fc7eb5:: Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud documentation . 2.5. IBM Cloud VPC IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud IAM overview, see the IBM Cloud documentation . 2.5.1. Pre-requisite permissions Table 2.1. Pre-requisite permissions Role Access Viewer, Operator, Editor, Administrator, Reader, Writer, Manager Internet Services service in <resource_group> resource group Viewer, Operator, Editor, Administrator, User API key creator, Service ID creator IAM Identity Service service Viewer, Operator, Administrator, Editor, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service in <resource_group> resource group Viewer Resource Group: Access to view the resource group itself. The resource type should equal Resource group , with a value of <your_resource_group_name>. 2.5.2. Cluster-creation permissions Table 2.2. Cluster-creation permissions Role Access Viewer <resource_group> (Resource Group Created for Your Team) Viewer, Operator, Editor, Reader, Writer, Manager All service in Default resource group Viewer, Reader Internet Services service Viewer, Operator, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Editor Cloud Object Storage service Viewer Default resource group: The resource type should equal Resource group , with a value of Default . 
If your account administrator changed your account's default resource group to something other than Default, use that value instead. Viewer, Operator, Editor, Reader, Manager Power Systems Virtual Server service in <resource_group> resource group Viewer, Operator, Editor, Reader, Writer, Manager, Administrator Internet Services service in <resource_group> resource group: CIS functional scope string equals reliability Viewer, Operator, Editor Direct Link service Viewer, Operator, Editor, Administrator, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service <resource_group> resource group 2.5.3. Access policy assignment In IBM Cloud IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.5.4. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud account. Prerequisites You have assigned the required access policies to your IBM Cloud account. You have attached you IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud VPC API keys, see Understanding API keys . 2.6. Supported IBM Power Virtual Server regions and zones You can deploy an OpenShift Container Platform cluster to the following regions: dal (Dallas, USA) dal12 us-east (Washington DC, USA) us-east eu-de (Frankfurt, Germany) eu-de-1 eu-de-2 lon (London, UK) lon04 lon06 osa (Osaka, Japan) osa21 sao (Sao Paulo, Brazil) sao01 syd (Sydney, Australia) syd04 tok (Tokyo, Japan) tok04 tor (Toronto, Canada) tor01 You might optionally specify the IBM Cloud VPC region in which the installer will create any VPC components. Supported regions in IBM Cloud are: us-south eu-de eu-gb jp-osa au-syd br-sao ca-tor jp-tok 2.7. steps Creating an IBM Power Virtual Server workspace
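As a hedged illustration of the API key step described in section 2.5.4, the following sketch creates a user API key from the CLI; the key name, description, and output file are placeholders rather than required values:

$ ibmcloud iam api-key-create ocp-installer-key -d "API key for OpenShift Container Platform installation" --file ocp-installer-key.json

If you assigned your access policies to a service ID instead, the ibmcloud iam service-api-key-create command can be used in a similar way. Store the resulting key securely, because its value is shown only at creation time.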
[ "ibmcloud plugin install cis", "ibmcloud login", "ibmcloud cis instance-create <instance_name> standard 1", "ibmcloud cis instance-set <instance_CRN> 1", "ibmcloud cis domain-add <domain_name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_power_virtual_server/installing-ibm-cloud-account-power-vs
8.216. sgml-common
8.216. sgml-common 8.216.1. RHBA-2014:0540 - sgml-common bug fix update Updated sgml-common packages that fix one bug are now available for Red Hat Enterprise Linux 6. The sgml-common packages contain a collection of entities and document type definitions (DTDs) that are useful for processing Standard Generalized Markup Language (SGML), but do not need to be included in multiple packages. The sgml-common packages also include an up-to-date Open Catalog file. Bug Fix BZ# 613637 Prior to this update, the COPYING license file and documentation were missing from the xml-common subpackage. This update adds the missing basic documentation and COPYING file to xml-common. Users of sgml-common are advised to upgrade to these updated packages, which fix this bug.
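As a hedged sketch of how the fix can be verified on an updated Red Hat Enterprise Linux 6 system (assuming the COPYING file is packaged as documentation in xml-common):

# yum update sgml-common xml-common
# rpm -qd xml-common | grep -i copying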
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/sgml-common
2.2.4.2. Securing NFS Mount Options
2.2.4.2. Securing NFS Mount Options The use of the mount command in the /etc/fstab file is explained in the Storage Administration Guide. From a security administration point of view it is worthwhile to note that the NFS mount options can also be specified in /etc/nfsmount.conf , which can be used to set custom default options. 2.2.4.2.1. Review the NFS Server Warning Only export entire file systems. Exporting a subdirectory of a file system can be a security issue. It is possible in some cases for a client to "break out" of the exported part of the file system and get to unexported parts (see the section on subtree checking in the exports(5) man page. Use the ro option to export the file system as read-only whenever possible to reduce the number of users able to write to the mounted file system. Only use the rw option when specifically required. Refer to the man exports(5) page for more information. Allowing write access increases the risk from symlink attacks for example. This includes temporary directories such as /tmp and /usr/tmp . Where directories must be mounted with the rw option avoid making them world-writable whenever possible to reduce risk. Exporting home directories is also viewed as a risk as some applications store passwords in clear text or weakly encrypted. This risk is being reduced as application code is reviewed and improved. Some users do not set passwords on their SSH keys so this too means home directories present a risk. Enforcing the use of passwords or using Kerberos would mitigate that risk. Restrict exports only to clients that need access. Use the showmount -e command on an NFS server to review what the server is exporting. Do not export anything that is not specifically required. Do not use the no_root_squash option and review existing installations to make sure it is not used. Refer to Section 2.2.4.4, "Do Not Use the no_root_squash Option" for more information. The secure option is the server-side export option used to restrict exports to " reserved " ports. By default, the server allows client communication only from " reserved " ports (ports numbered less than 1024), because traditionally clients have only allowed " trusted " code (such as in-kernel NFS clients) to use those ports. However, on many networks it is not difficult for anyone to become root on some client, so it is rarely safe for the server to assume that communication from a reserved port is privileged. Therefore the restriction to reserved ports is of limited value; it is better to rely on Kerberos, firewalls, and restriction of exports to particular clients. Most clients still do use reserved ports when possible. However, reserved ports are a limited resource, so clients (especially those with a large number of NFS mounts) may choose to use higher-numbered ports as well. Linux clients may do this using the " noresvport " mount option. If you want to allow this on an export, you may do so with the " insecure " export option. It is good practice not to allow users to login to a server. While reviewing the above settings on an NFS server conduct a review of who and what can access the server.
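The following is a minimal /etc/exports sketch that reflects the recommendations above; the paths, host name, and network are placeholders. Each entry exports an entire file system, keeps root_squash in effect, and states the secure option explicitly, with write access granted only where it is actually required:

# export an entire file system read-only to a single trusted client
/srv/projects    client1.example.com(ro,sync,root_squash,secure)
# write access only where required, restricted to one subnet
/srv/shared      192.168.1.0/24(rw,sync,root_squash,secure)

After editing the file, re-export and review what the server actually offers:

# exportfs -ra
# showmount -e localhost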
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_nfs-mount_options
B.39. krb5
B.39. krb5 B.39.1. RHSA-2010:0863 - Important: krb5 security update Updated krb5 packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. Kerberos is a network authentication system which allows clients and servers to authenticate to each other using symmetric encryption and a trusted third party, the Key Distribution Center (KDC). CVE-2010-1322 An uninitialized pointer use flaw was found in the way the MIT Kerberos KDC handled TGS (Ticket-granting Server) request messages. A remote, authenticated attacker could use this flaw to crash the KDC or, possibly, disclose KDC memory or execute arbitrary code with the privileges of the KDC (krb5kdc). Red Hat would like to thank the MIT Kerberos Team for reporting this issue. Upstream acknowledges Mike Roszkowski as the original reporter. All krb5 users should upgrade to these updated packages, which contain a backported patch to correct this issue. After installing the updated packages, the krb5kdc daemon will be restarted automatically.
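A hedged sketch of applying the update and confirming that the krb5kdc daemon was restarted on a Red Hat Enterprise Linux 6 KDC; the package names shown are the usual ones, and the service name assumes the default init script:

# yum update krb5-server krb5-libs krb5-workstation
# service krb5kdc status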
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/krb5
Machine management
Machine management OpenShift Container Platform 4.11 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/machine_management/index
Using AMQ Core Protocol JMS
Using AMQ Core Protocol JMS Red Hat AMQ Core Protocol JMS 7.11 Developing an AMQ messaging client using Java
null
https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html/using_amq_core_protocol_jms/index
2.2. Volume Groups
2.2. Volume Groups Physical volumes are combined into volume groups (VGs). This creates a pool of disk space out of which logical volumes can be allocated. Within a volume group, the disk space available for allocation is divided into fixed-size units called extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents are referred to as physical extents. A logical volume is allocated into logical extents of the same size as the physical extents. The extent size is thus the same for all logical volumes in the volume group. The volume group maps the logical extents to physical extents.
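A minimal sketch of these concepts with the LVM command-line tools; the device names are placeholders, and 4 MB is simply the default extent size made explicit:

# pvcreate /dev/sdb1 /dev/sdc1
# vgcreate -s 4M vg_data /dev/sdb1 /dev/sdc1
# vgdisplay vg_data | grep "PE Size"
# lvcreate -l 100 -n lv_app vg_data

Here the logical volume is requested in extents (-l 100), so it occupies exactly 100 extents, 400 MB with the 4 MB extent size, drawn from the physical volumes in the group.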
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/volume_group_overview
Chapter 6. Operational Attributes and Object Classes
Chapter 6. Operational Attributes and Object Classes Operational attributes are attributes used to perform directory operations and are available for every entry in the directory, regardless of whether they are defined for the object class of the entry. Operational attributes are only returned in an ldapsearch operation if specifically requested. To return all operational attributes of an object, specify + . Operational attributes are created and managed by Directory Server on entries, such as the time the entry is created or modified and the creator's name. These attributes can be set on any entry, regardless of other attributes or object classes on the entry. 6.1. accountUnlockTime The accountUnlockTime attribute contains the date and time in GMT-format at which the account will become unlocked. A value of 0 means that the account must be unlocked by an administrator. OID 2.16.840.1.113730.3.1.95 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 6.2. aci This attribute is used by the Directory Server to evaluate what rights are granted or denied when it receives an LDAP request from a client. OID 2.16.840.1.113730.3.1.55 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server 6.3. altServer The values of this attribute are URLs of other servers which may be contacted when this server becomes unavailable. If the server does not know of any other servers which could be used, this attribute is absent. This information can be cached in case the preferred LDAP server later becomes unavailable. OID 1.3.6.1.4.1.1466.101.120.6 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2252 6.4. createTimestamp This attribute contains the date and time that the entry was initially created. OID 2.5.18.1 Syntax GeneralizedTime Multi- or Single-Valued Single-valued Defined in RFC 1274 6.5. creatorsName This attribute contains the name of the user which created the entry. OID 2.5.18.3 Syntax DN Multi- or Single-Valued Single-valued Defined in RFC 1274 6.6. dITContentRules This attribute defines the DIT content rules which are in force within a subschema. Each value defines one DIT content rule. Each value is tagged by the object identifier of the structural object class to which it pertains. OID 2.5.21.2 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 6.7. dITStructureRules This attribute defines the DIT structure rules which are in force within a subschema. Each value defines one DIT structure rule. OID 2.5.21.1 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 6.8. entryusn When the USN Plug-in is enabled, the server automatically assigns an update sequence number to entries every time a write operation (add, modify, modrdn, or delete) is performed. The USN is stored in the entryUSN operational attribute on the entry; the entryUSN , then, shows the number for the most recent change on any entry. Note The entryUSN attribute increments only with operations performed by LDAP clients. It does not count internal operations. By default, the entryUSN is unique per back end database instance, so entries in other databases may have the same USN. The nsslapd-entryusn-global parameter changes the assignment of USNs from local to global, that is, from being counted on a single database to being counted for all databases in the topology. The parameter is turned off by default. 
A corresponding entry, lastusn , is kept in the root DSE entry, which shows the most recently- assigned USN. In local mode, lastusn shows the most recently- assigned USN per back end database. In global mode, lastusn shows the most recently assigned USN for the entire topology. OID 2.16.840.1.113730.3.1.606 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.9. internalCreatorsName For entries which were created by a plug-in or by the server, rather than a Directory Server user, this attribute records what internal user (by plug-in DN) created the entry. The internalCreatorsname attributes always show a plug-in as the identity. This plug-in could be an additional plug-in, such as the MemberOf Plug-in. If the change is made by the core Directory Server, then the plug-in is the database plug-in, cn=ldbm database,cn=plugins,cn=config . OID 2.16.840.1.113730.3.1.2114 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 6.10. internalModifiersName If an entry is edited by a plug-in or by the server, rather than a Directory Server user, this attribute records what internal user (by plug-in DN) modified the entry. The internalModifiersname attributes always show a plug-in as the identity. This plug-in could be an additional plug-in, such as the MemberOf Plug-in. If the change is made by the core Directory Server, then the plug-in is the database plug-in, cn=ldbm database,cn=plugins,cn=config . OID 2.16.840.1.113730.3.1.2113 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 6.11. hasSubordinates This attribute indicates whether the entry has subordinate entries. OID 1.3.6.1.4.1.1466.115.121.1.7 Syntax Boolean Multi- or Single-Valued Single-valued Defined in numSubordinates Internet Draft 6.12. lastLoginTime The lastLoginTime attribute contains a timestamp of the last time that the given account authenticated to the directory, in the format YYYMMDDHHMMSS Z . For example: This is used to evaluate account lockout policies based on account inactivity. OID 2.16.840.1.113719.1.1.4.1.35 Syntax GeneralizedTime Multi- or Single-Valued Single-valued Defined in Directory Server 6.13. lastModifiedBy The lastModifiedBy attribute contains the distinguished name (DN) of the user who last edited the entry. For example: OID 0.9.2342.19200300.100.1.24 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 6.14. lastModifiedTime The lastModifiedTime attribute contains the time, in UTC format, an entry was last modified. For example: OID 0.9.2342.19200300.100.1.23 Syntax DirectyString Multi- or Single-Valued Multi-valued Defined in RFC 1274 6.15. ldapSubEntry These entries hold operational data. This object class is defined in the LDAP Subentry Internet Draft. Superior Class top OID 2.16.840.1.113719.2.142.6.1.1 Table 6.1. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 6.2. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Specifies the common name of the entry. 6.16. ldapSyntaxes This attribute identifies the syntaxes implemented, with each value corresponding to one syntax. OID 1.3.6.1.4.1.1466.101.120.16 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 6.17. matchingRules This attribute defines the matching rules used within a subschema. Each value defines one matching rule. OID 2.5.21.4 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 6.18. 
matchingRuleUse This attribute indicates the attribute types to which a matching rule applies in a subschema. OID 2.5.21.8 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 6.19. modifyTimestamp This attribute contains the date and time that the entry was most recently modified. OID 2.5.18.2 Syntax GeneralizedTime Multi- or Single-Valued Single-valued Defined in RFC 1274 6.20. modifiersName This attribute contains the name of the user which last modified the entry. OID 2.5.18.4 Syntax DN Multi- or Single-Valued Single-valued Defined in RFC 1274 6.21. nameForms This attribute defines the name forms used in a subschema. Each value defines one name form. OID 2.5.21.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 6.22. nsAccountLock This attribute shows whether the account is active or inactive. OID 2.16.840.1.113730.3.1.610 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 6.23. nsAIMStatusGraphic This attribute contains a path pointing to the graphic which illustrates the AIM user status. OID 2.16.840.1.113730.3.1.2018 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.24. nsAIMStatusText This attribute contains the text which indicates the current AIM user status. OID 2.16.840.1.113730.3.1.2017 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.25. nsBackendSuffix This contains the suffix used by the back end. OID 2.16.840.1.113730.3.1.803 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 6.26. nscpEntryDN This attribute contains the (former) entry DN for a tombstone entry. OID 2.16.840.1.113730.3.1.545 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 6.27. nsDS5ReplConflict This attribute is included on entries that have a change conflict that cannot be resolved automatically by the synchronization or replication process. The value of the nsDS5ReplConflict contains information about which entries are in conflict, usually by referring to them by their nsUniqueID for both current entries and tombstone entries. OID 2.16.840.1.113730.3.1.973 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 6.28. nsICQStatusGraphic This attribute contains a path pointing to the graphic which illustrates the ICQ user status. OID 2.16.840.1.113730.3.1.2022 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.29. nsICQStatusText This attribute contains the text for the current ICQ user status. OID 2.16.840.1.113730.3.1.2021 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.30. nsIdleTimeout This attribute identifies the user-based connection idle timeout period, in seconds. OID 2.16.840.1.113730.3.1.573 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.31. nsIDListScanLimit This attribute specifies the number of entry IDs that are searched during a search operation. Keep the default value to improve search performance. For a more detailed explanation of the effect of ID lists on search performance, see the "Overview of the Searching Algorithm" section of the "Managing Indexes" chapter in the Red Hat Directory Server Administration Guide . OID 2.16.840.1.113730.3.1.2106 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.32. 
nsLookThroughLimit This attribute sets the maximum number of entries for that user through which the server is allowed to look during a search operation. This attribute is configured in the server itself and applied to a user when he initiates a search. OID 2.16.840.1.113730.3.1.570 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.33. nsPagedIDListScanLimit This attribute specifies the number of entry IDs that are searched, specifically, for a search operation using the simple paged results control. This attribute works the same as the nsIDListScanLimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsIDListScanLimit is used to paged searches as well as non-paged searches. OID 2.16.840.1.113730.3.1.2109 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.34. nsPagedLookThroughLimit This attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries for a search which uses the simple paged results control. This attribute works the same as the nsLookThroughLimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsLookThroughLimit is used to paged searches as well as non-paged searches. OID 2.16.840.1.113730.3.1.2108 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.35. nsPagedSizeLimit This attribute sets the maximum number of entries to return from a search operation specifically which uses the simple paged results control . This overrides the nsSizeLimit attribute for paged searches. If this value is set to zero, then the nsSizeLimit attribute is used for paged searches as well as non-paged searches for the user, or the global configuration settings are used. OID 2.16.840.1.113730.3.1.2107 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.36. nsParentUniqueId For tombstone (deleted) entries stored in replication, the nsParentUniqueId attribute contains the DN or entry ID for the parent of the original entry. OID 2.16.840.1.113730.3.1.544 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.37. nsRole This attribute is a computed attribute that is not stored with the entry itself. It identifies to which roles an entry belongs. OID 2.16.840.1.113730.3.1.574 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server 6.38. nsRoleDn This attribute contains the distinguished name of all roles that apply to an entry. Membership of a managed role is granted upon an entry by adding the role's DN to the entry's nsRoleDN attribute. For example: A nested role specifies containment of one or more roles of any type. In that case, nsRoleDN defines the DN of the contained roles. For example: OID 2.16.840.1.113730.3.1.575 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server 6.39. nsRoleFilter This attribute sets the filter identifies entries which belong to the role. OID 2.16.840.1.113730.3.1.576 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2252 6.40. nsSchemaCSN This attribute is one of the subschema DSE attribute types. OID 2.5.21.82.16.840.1.113730.3.1.804 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.41. 
nsSizeLimit This attribute shows the default size limit for a database or database link in bytes. OID 2.16.840.1.113730.3.1.571 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.42. nsTimeLimit This attribute shows the default search time limit for a database or database link. OID 2.16.840.1.113730.3.1.572 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server 6.43. nsTombstone (Object Class) Tombstone entries are entries which have been deleted from Directory Server. For replication and restore operations, these deleted entries are saved so that they can be resurrected and replaced if necessary. Each tombstone entry has the nsTombstone object class, automatically. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.113 Table 6.3. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 6.4. Allowed Attributes Attribute Definition Section 6.36, "nsParentUniqueId" Identifies the unique ID of the parent entry of the original entry. Section 6.26, "nscpEntryDN" Identifies the orignal entry DN in a tombstone entry. 6.44. nsUniqueId This attribute identifies or assigns a unique ID to a server entry. OID 2.16.840.1.113730.3.1.542 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.45. nsYIMStatusGraphic This attribute contains a path pointing to the graphic which illustrates the Yahoo IM user status. OID 2.16.840.1.113730.3.1.2020 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.46. nsYIMStatusText This attribute contains the text for the current Yahoo IM user status. OID 2.16.840.1.113730.3.1.2019 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.47. numSubordinates This attribute indicates now many immediate subordinates an entry has. For example, numSubordinates=0 in a leaf entry. OID 1.3.1.1.4.1.453.16.2.103 Syntax Integer Multi- or Single-Valued Single-valued Defined in numSubordinates Internet Draft 6.48. passwordGraceUserTime The passwordGraceUserTime attribute counts the number of login attempts the user has made with the expired password. OID 2.16.840.1.113730.3.1.998 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.49. passwordRetryCount This attribute counts the number of consecutive failed attempts at entering the correct password. OID 2.16.840.1.113730.3.1.93 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.50. pwdpolicysubentry This attribute value points to the entry DN of the new password policy. OID 2.16.840.1.113730.3.1.997 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.51. pwdUpdateTime This attribute value stores the time of the most recent password change for the account. OID 2.16.840.1.113730.3.1.2133 Syntax GeneralizedTime Multi- or Single-Valued Single-valued Defined in Directory Server 6.52. subschemaSubentry This attribute contains the DN of an entry that contains schema information. For example: OID 2.5.18.10 Syntax DN Multi- or Single-Valued Single-valued Defined in RFC 2252 6.53. glue (Object Class) The glue object class defines an entry in a special state: resurrected due to a replication conflict. This object class is defined by Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.30 Table 6.5. 
Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. 6.54. passwordObject (Object Class) This object class is used for entries which store password information for a user in the directory. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.12 Table 6.6. Required Attributes Section 5.2.284, "objectClass" Defines the object classes for the entry. Table 6.7. Allowed Attributes Section 6.1, "accountUnlockTime" Refers to the amount of time that must pass after an account lockout before the user can bind to the directory again. Section 3.1.1.178, "passwordAllowChangeTime" Specifies the length of time that must pass before users are allowed to change their passwords. Section 3.1.1.183, "passwordExpirationTime" Specifies the length of time that passes before the user's password expires. Section 3.1.1.184, "passwordExpWarned" Indicates that a password expiration warning has been sent to the user. Section 6.48, "passwordGraceUserTime" Counts the number of login attempts the user has made with the expired password. Section 3.1.1.186, "passwordHistory (Password History)" Contains the history of the user's passwords. Section 6.49, "passwordRetryCount" Counts the number of consecutive failed attempts at entering the correct password. Section 6.50, "pwdpolicysubentry" Points to the entry DN of the new password policy. Section 3.1.1.222, "retryCountResetTime" Specifies the length of time that passes before the passwordRetryCount attribute is reset. 6.55. subschema (Object Class) This identifies an auxiliary object class subentry which administers the subschema for the subschema administrative area. It holds the operational attributes representing the policy parameters which express the subschema. This object class is defined in RFC 2252. Superior Class top OID 2.5.20.1 Table 6.8. Required Attributes Section 5.2.284, "objectClass" Defines the object classes for the entry. Table 6.9. Allowed Attributes Section 5.2.11, "attributeTypes" Attribute types used within a subschema. Section 6.6, "dITContentRules" Defines the DIT content rules which are in force within a subschema. Section 6.7, "dITStructureRules" Defines the DIT structure rules which are in force within a subschema. Section 6.18, "matchingRuleUse" Indicates the attribute types to which a matching rule applies in a subschema. Section 6.17, "matchingRules" Defines the matching rules used within a subschema. Section 6.21, "nameForms" Defines the name forms used in a subschema. Section 5.2.285, "objectClasses" Defines the object classes used in a subschema.
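The paged-results limits described earlier in this chapter (nsPagedSizeLimit, nsPagedLookThroughLimit, and nsPagedIDListScanLimit) only take effect for searches that include the simple paged results control. As an illustration only, the following sketch performs such a search using the JDK's JNDI LDAP support; the host, bind DN, password, search base, and page size are placeholder values and are not settings taken from this reference.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.Control;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.PagedResultsControl;
import javax.naming.ldap.PagedResultsResponseControl;

public class PagedSearchSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");                 // placeholder host
        env.put(Context.SECURITY_PRINCIPAL, "uid=jdoe,ou=People,dc=example,dc=com");  // placeholder bind DN
        env.put(Context.SECURITY_CREDENTIALS, "password");                            // placeholder

        LdapContext ctx = new InitialLdapContext(env, null);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        byte[] cookie = null;
        int pageSize = 100; // the bound user's nsPagedSizeLimit still caps the overall result size
        do {
            // Attach the simple paged results control; this is the kind of search that
            // nsPagedSizeLimit, nsPagedLookThroughLimit, and nsPagedIDListScanLimit govern.
            ctx.setRequestControls(new Control[] {
                new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
            NamingEnumeration<SearchResult> results =
                ctx.search("ou=People,dc=example,dc=com", "(objectClass=person)", controls);
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
            // Read the cookie from the response control to request the next page.
            cookie = null;
            Control[] responseControls = ctx.getResponseControls();
            if (responseControls != null) {
                for (Control c : responseControls) {
                    if (c instanceof PagedResultsResponseControl) {
                        cookie = ((PagedResultsResponseControl) c).getCookie();
                    }
                }
            }
        } while (cookie != null && cookie.length > 0);
        ctx.close();
    }
}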
[ "lastLoginTime: 20200527001051Z", "lastModifiedBy: cn=Barbara Jensen,ou=Engineering,dc=example,dc=com", "lastModifiedTime: Thursday, 22-Sep-93 14:15:00 GMT", "dn: cn=staff,ou=employees,dc=example,dc=com objectclass: LDAPsubentry objectclass: nsRoleDefinition objectclass: nsSimpleRoleDefinition objectclass: nsManagedRoleDefinition dn: cn=userA,ou=users,ou=employees,dc=example,dc=com objectclass: top objectclass: person sn: uA userpassword: secret nsroledn: cn=staff,ou=employees,dc=example,dc=com", "dn: cn=everybody,ou=employees,dc=example,dc=com objectclass: LDAPsubentry objectclass: nsRoleDefinition objectclass: nsComplexRoleDefinition objectclass: nsNestedRoleDefinition nsroledn: cn=manager,ou=employees,dc=example,dc=com nsroledn: cn=staff,ou=employees,dc=example,dc=com", "subschemaSubentry: cn=schema" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/Operational_Attributes_Special_Attributes_and_Special_Object_Classes
Chapter 120. AclRuleTopicResource schema reference
Chapter 120. AclRuleTopicResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTopicResource type from AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value topic for the type AclRuleTopicResource . Property Property type Description type string Must be topic . name string Name of the resource to which the given ACL rule applies. Can be combined with the patternType field to use a prefix pattern. patternType string (one of [prefix, literal]) Describes the pattern used in the resource field. The supported types are literal and prefix . With the literal pattern type, the resource field will be used as a definition of a full topic name. With the prefix pattern type, the resource name will be used only as a prefix. Default value is literal .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-aclruletopicresource-reference
Chapter 22. Upgrading Certificate System 10 to the latest minor version
Chapter 22. Upgrading Certificate System 10 to the latest minor version To upgrade Red Hat Certificate System 10.0 to the latest minor version, for example 10.1, you need to update the packages and the configuration files. Upgrade all packages on the server: During the update, Certificate System configuration files, such as /etc/pki/ <instance_name> / subsystem /CS.cfg and /etc/pki/ <instance_name> /server.xml , are automatically modified to the new version. Log files generated during the Certificate System upgrade are: /var/log/pki/pki-server-upgrade- version .log /var/log/pki/pki-upgrade- version .log Note that the version number in the log file's name represents the pki* package version and not the Certificate System version. Note If Directory Server is installed on a different host, update all packages on that host as well. For details about updating Directory Server, see: the corresponding chapter in the Red Hat Directory Server Installation Guide . Red Hat Directory Server Release Notes for notable changes, bug fixes, and known issues in Directory Server. Restart the Certificate System instance: Note To migrate from a lower major version, such as 9.7, to Certificate System 10, see Chapter 21, Upgrading from Certificate System 9 to Certificate System 10 .
[ "yum update", "systemctl restart pki-tomcatd@ <instance_name> .service" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/upgrading_certificate_system_from_10_x_to_the_latest_version
Chapter 5. Device Drivers
Chapter 5. Device Drivers This chapter provides a comprehensive listing of all device drivers that are new or have been updated in Red Hat Enterprise Linux 7.9. 5.1. New Drivers Graphics Drivers and Miscellaneous Drivers MC Driver for Intel 10nm server processors (i10nm_edac.ko.xz) 5.2. Updated Drivers Network Driver Updates The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 3.10.0-1150.el7.x86_64. VMware vmxnet3 virtual NIC driver (vmxnet3.ko.xz) has been updated to version 1.4.17.0-k. Storage Driver Updates QLogic FCoE Driver (bnx2fc.ko.xz) has been updated to version 2.12.13. Driver for HP Smart Array Controller (hpsa.ko.xz) has been updated to version 3.4.20-170-RH5. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.714.04.00-rh1. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.22.07.9-k. Driver for Microsemi Smart Family Controller (smartpqi.ko.xz) has been updated to version 1.2.10-099.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/device_drivers
Chapter 377. XQuery Component
Chapter 377. XQuery Component Available as of Camel version 1.0 Camel supports XQuery to allow an Expression or Predicate to be used in the DSL or Xml Configuration . For example you could use XQuery to create a Predicate in a Message Filter or as an Expression for a Recipient List. 377.1. Options The XQuery component supports 4 options, which are listed below. Name Description Default Type moduleURIResolver (advanced) To use the custom ModuleURIResolver ModuleURIResolver configuration (advanced) To use a custom Saxon configuration Configuration configurationProperties (advanced) To set custom Saxon configuration properties Map resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The XQuery endpoint is configured using URI syntax: with the following path and query parameters: 377.1.1. Path Parameters (1 parameter): Name Description Default Type resourceUri Required The name of the template to load from classpath or file system String 377.1.2. Query Parameters (31 parameters): Name Description Default Type allowStAX (common) Whether to allow using StAX mode false boolean headerName (common) To use a Camel Message header as the input source instead of Message body. String namespacePrefixes (common) Allows to control which namespace prefixes to use for a set of namespace mappings Map resultsFormat (common) What output result to use DOM ResultFormat resultType (common) What output result to use defined as a class Class stripsAllWhiteSpace (common) Whether to strip all whitespaces true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.
PollingConsumerPollStrategy configuration (advanced) To use a custom Saxon configuration Configuration configurationProperties (advanced) To set custom Saxon configuration properties Map moduleURIResolver (advanced) To use the custom ModuleURIResolver ModuleURIResolver parameters (advanced) Additional parameters Map properties (advanced) Properties to configure the serialization parameters Properties staticQueryContext (advanced) To use a custom Saxon StaticQueryContext StaticQueryContext synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the next poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumerScheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 377.2. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.component.xquery.configuration To use a custom Saxon configuration. The option is a net.sf.saxon.Configuration type. String camel.component.xquery.configuration-properties To set custom Saxon configuration properties Map camel.component.xquery.enabled Enable xquery component true Boolean camel.component.xquery.module-u-r-i-resolver To use the custom ModuleURIResolver. The option is a net.sf.saxon.lib.ModuleURIResolver type. String camel.component.xquery.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting.
Only properties which are of String type can use property placeholders. true Boolean camel.language.xquery.enabled Enable xquery language true Boolean camel.language.xquery.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks true Boolean camel.language.xquery.type Sets the class name of the result type (type from output) The default result type is NodeSet String 377.3. Examples from("queue:foo").filter(). xquery("//foo"). to("queue:bar") You can also use functions inside your query, in which case you need an explicit type conversion (or you will get an org.w3c.dom.DOMException: HIERARCHY_REQUEST_ERR) by passing the Class as a second argument to the xquery() method. from("direct:start"). recipientList().xquery("concat('mock:foo.', /person/@city)", String.class); 377.4. Variables The IN message body will be set as the contextItem . Besides this, these variables are also added as parameters: Variable Type Description exchange Exchange The current Exchange in.body Object The In message's body out.body Object The OUT message's body (if any) in.headers.* Object You can access the value of exchange.in.headers with key foo by using the variable whose name is in.headers.foo out.headers.* Object You can access the value of exchange.out.headers with key foo by using the variable whose name is out.headers.foo variable key name Object Any exchange.properties and exchange.in.headers and any additional parameters set using setParameters(Map) . These parameters are added with their own key name; for instance, if there is an IN header with the key name foo then it is added as foo . 377.5. Using XML configuration If you prefer to configure your routes in your Spring XML file then you can use XQuery expressions as follows <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:foo="http://example.com/person" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="activemq:MyQueue"/> <filter> <xquery>/foo:person[@name='James']</xquery> <to uri="mqseries:SomeOtherQueue"/> </filter> </route> </camelContext> </beans> Notice how we can reuse the namespace prefixes, foo in this case, in the XQuery expression for easier namespace-based XQuery expressions! When you use functions in your XQuery expression you need an explicit type conversion which is done in the XML configuration via the @type attribute: <xquery type="java.lang.String">concat('mock:foo.', /person/@city)</xquery> 377.6. Using XQuery as transformation We can do a message translation using transform or setBody in the route, as shown below: from("direct:start"). transform().xquery("/people/person"); Notice that xquery will use DOMResult by default, so if we want to grab the value of the person node using text(), we need to tell xquery to use String as the result type, as shown: from("direct:start"). transform().xquery("/people/person/text()", String.class); 377.7. Using XQuery as an endpoint Sometimes an XQuery expression can be quite large; it can essentially be used for templating. So you may want to use an XQuery Endpoint so you can route using XQuery templates. The following example shows how to take a message from an ActiveMQ queue (MyQueue) and transform it using XQuery and send it to MQSeries.
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="activemq:MyQueue"/> <to uri="xquery:com/acme/someTransform.xquery"/> <to uri="mqseries:SomeOtherQueue"/> </route> </camelContext> Currently custom functions in XQuery might result in a NullPointerException (Camel 2.18, 2.19 and 2.20). This is expected to be fixed in Camel 2.21 377.8. Examples Here is a simple example using an XQuery expression as a predicate in a Message Filter This example uses XQuery with namespaces as a predicate in a Message Filter 377.9. Learning XQuery XQuery is a very powerful language for querying, searching, sorting and returning XML. For help learning XQuery try these tutorials Mike Kay's XQuery Primer the W3Schools XQuery Tutorial You might also find the XQuery function reference useful 377.10. Loading script from external resource Available as of Camel 2.11 You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , eg to refer to a file on the classpath you can do: .setHeader("myHeader").xquery("resource:classpath:myxquery.txt", String.class) 377.11. Dependencies To use XQuery in your camel routes you need to add the a dependency on camel-saxon which implements the XQuery language. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-saxon</artifactId> <version>x.x.x</version> </dependency>
[ "xquery:resourceUri", "from(\"queue:foo\").filter(). xquery(\"//foo\"). to(\"queue:bar\")", "from(\"direct:start\"). recipientList().xquery(\"concat('mock:foo.', /person/@city)\", String.class);", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:foo=\"http://example.com/person\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"activemq:MyQueue\"/> <filter> <xquery>/foo:person[@name='James']</xquery> <to uri=\"mqseries:SomeOtherQueue\"/> </filter> </route> </camelContext> </beans>", "<xquery type=\"java.lang.String\">concat('mock:foo.', /person/@city)</xquery>", "from(\"direct:start\"). transform().xquery(\"/people/person\");", "from(\"direct:start\"). transform().xquery(\"/people/person/text()\", String.class);", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"activemq:MyQueue\"/> <to uri=\"xquery:com/acme/someTransform.xquery\"/> <to uri=\"mqseries:SomeOtherQueue\"/> </route> </camelContext>", ".setHeader(\"myHeader\").xquery(\"resource:classpath:myxquery.txt\", String.class)", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-saxon</artifactId> <version>x.x.x</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/xquery-component
Chapter 30. ImageService
Chapter 30. ImageService 30.1. ExportImages GET /v1/export/images 30.1.1. Description 30.1.2. Parameters 30.1.2.1. Query Parameters Name Description Required Default Pattern timeout - null query - null 30.1.3. Return Type Stream_result_of_v1ExportImageResponse 30.1.4. Content Type application/json 30.1.5. Responses Table 30.1. HTTP Response Codes Code Message Datatype 200 A successful response.(streaming responses) Stream_result_of_v1ExportImageResponse 0 An unexpected error response. GooglerpcStatus 30.1.6. Samples 30.1.7. Common object reference 30.1.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 30.1.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 30.1.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 30.1.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 30.1.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 30.1.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 30.1.7.7. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.1.7.8. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.1.7.8.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.1.7.9. StorageCVSSScore Field Name Required Nullable Type Description Format source StorageSource SOURCE_UNKNOWN, SOURCE_RED_HAT, SOURCE_OSV, SOURCE_NVD, url String cvssv2 StorageCVSSV2 cvssv3 StorageCVSSV3 30.1.7.10. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 30.1.7.11. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 30.1.7.12. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 30.1.7.13. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 30.1.7.14. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 30.1.7.15. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 30.1.7.16. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 30.1.7.17. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 30.1.7.18. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 30.1.7.19. StorageCosignSignature Field Name Required Nullable Type Description Format rawSignature byte[] byte signaturePayload byte[] byte certPem byte[] byte certChainPem byte[] byte 30.1.7.20. StorageDataSource Field Name Required Nullable Type Description Format id String name String mirror String 30.1.7.21. StorageEmbeddedImageScanComponent Field Name Required Nullable Type Description Format name String version String license StorageLicense vulns List of StorageEmbeddedVulnerability layerIndex Integer int32 priority String int64 source StorageSourceType OS, PYTHON, JAVA, RUBY, NODEJS, GO, DOTNETCORERUNTIME, INFRASTRUCTURE, location String topCvss Float float riskScore Float float fixedBy String Component version that fixes all the fixable vulnerabilities in this component. executables List of StorageEmbeddedImageScanComponentExecutable 30.1.7.22. 
StorageEmbeddedImageScanComponentExecutable Field Name Required Nullable Type Description Format path String dependencies List of string 30.1.7.23. StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, cvssMetrics List of StorageCVSSScore nvdCvss Float float 30.1.7.24. StorageEmbeddedVulnerabilityScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 30.1.7.25. StorageImage Field Name Required Nullable Type Description Format id String name StorageImageName names List of StorageImageName This should deprecate the ImageName field long-term, allowing images with the same digest to be associated with different locations. TODO(dhaus): For now, this message will be without search tags due to duplicated search tags otherwise. metadata StorageImageMetadata scan StorageImageScan signatureVerificationData StorageImageSignatureVerificationData signature StorageImageSignature components Integer int32 cves Integer int32 fixableCves Integer int32 lastUpdated Date date-time notPullable Boolean isClusterLocal Boolean priority String int64 riskScore Float float topCvss Float float notes List of StorageImageNote 30.1.7.26. StorageImageLayer Field Name Required Nullable Type Description Format instruction String value String created Date date-time author String empty Boolean 30.1.7.27. StorageImageMetadata Field Name Required Nullable Type Description Format v1 StorageV1Metadata v2 StorageV2Metadata layerShas List of string dataSource StorageDataSource version String uint64 30.1.7.28. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 30.1.7.29. StorageImageNote Enum Values MISSING_METADATA MISSING_SCAN_DATA MISSING_SIGNATURE MISSING_SIGNATURE_VERIFICATION_DATA 30.1.7.30. StorageImageScan Field Name Required Nullable Type Description Format scannerVersion String scanTime Date date-time components List of StorageEmbeddedImageScanComponent operatingSystem String dataSource StorageDataSource notes List of StorageImageScanNote hash String uint64 30.1.7.31. StorageImageScanNote Enum Values UNSET OS_UNAVAILABLE PARTIAL_SCAN_DATA OS_CVES_UNAVAILABLE OS_CVES_STALE LANGUAGE_CVES_UNAVAILABLE CERTIFIED_RHEL_SCAN_UNAVAILABLE 30.1.7.32. StorageImageSignature Field Name Required Nullable Type Description Format signatures List of StorageSignature fetched Date date-time 30.1.7.33. 
StorageImageSignatureVerificationData Field Name Required Nullable Type Description Format results List of StorageImageSignatureVerificationResult 30.1.7.34. StorageImageSignatureVerificationResult Field Name Required Nullable Type Description Format verificationTime Date date-time verifierId String verifier_id correlates to the ID of the signature integration used to verify the signature. status StorageImageSignatureVerificationResultStatus UNSET, VERIFIED, FAILED_VERIFICATION, INVALID_SIGNATURE_ALGO, CORRUPTED_SIGNATURE, GENERIC_ERROR, description String description is set in the case of an error with the specific error's message. Otherwise, this will not be set. verifiedImageReferences List of string The full image names that are verified by this specific signature integration ID. 30.1.7.35. StorageImageSignatureVerificationResultStatus Status represents the status of the result. VERIFIED: VERIFIED is set when the signature's verification was successful. FAILED_VERIFICATION: FAILED_VERIFICATION is set when the signature's verification failed. INVALID_SIGNATURE_ALGO: INVALID_SIGNATURE_ALGO is set when the signature's algorithm is invalid and unsupported. CORRUPTED_SIGNATURE: CORRUPTED_SIGNATURE is set when the raw signature is corrupted, i.e. wrong base64 encoding. GENERIC_ERROR: GENERIC_ERROR is set when an error occurred during verification that cannot be associated with a specific status. Enum Values UNSET VERIFIED FAILED_VERIFICATION INVALID_SIGNATURE_ALGO CORRUPTED_SIGNATURE GENERIC_ERROR 30.1.7.36. StorageLicense Field Name Required Nullable Type Description Format name String type String url String 30.1.7.37. StorageSignature Field Name Required Nullable Type Description Format cosign StorageCosignSignature 30.1.7.38. StorageSource Enum Values SOURCE_UNKNOWN SOURCE_RED_HAT SOURCE_OSV SOURCE_NVD 30.1.7.39. StorageSourceType Enum Values OS PYTHON JAVA RUBY NODEJS GO DOTNETCORERUNTIME INFRASTRUCTURE 30.1.7.40. StorageV1Metadata Field Name Required Nullable Type Description Format digest String created Date date-time author String layers List of StorageImageLayer user String command List of string entrypoint List of string volumes List of string labels Map of string 30.1.7.41. StorageV2Metadata Field Name Required Nullable Type Description Format digest String 30.1.7.42. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 30.1.7.43. StorageVulnerabilityState VulnerabilityState indicates if a vulnerability is being observed or deferred (suppressed). By default, vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 30.1.7.44. StreamResultOfV1ExportImageResponse Field Name Required Nullable Type Description Format result V1ExportImageResponse error GooglerpcStatus 30.1.7.45. V1ExportImageResponse Field Name Required Nullable Type Description Format image StorageImage 30.2. InvalidateScanAndRegistryCaches GET /v1/images/cache/invalidate InvalidateScanAndRegistryCaches removes the image metadata cache. 30.2.1. Description 30.2.2. Parameters 30.2.3. Return Type Object 30.2.4. Content Type application/json 30.2.5. Responses Table 30.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 30.2.6. Samples 30.2.7. Common object reference 30.2.7.1.
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.3. CountImages GET /v1/imagescount CountImages returns a count of images that match the input query. 30.3.1. Description 30.3.2. Parameters 30.3.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 30.3.3. Return Type V1CountImagesResponse 30.3.4. Content Type application/json 30.3.5. Responses Table 30.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountImagesResponse 0 An unexpected error response. GooglerpcStatus 30.3.6. Samples 30.3.7. Common object reference 30.3.7.1. 
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.3.7.3. V1CountImagesResponse Field Name Required Nullable Type Description Format count Integer int32 30.4. DeleteImages DELETE /v1/images DeleteImage removes the images based on a query 30.4.1. Description 30.4.2. Parameters 30.4.2.1. Query Parameters Name Description Required Default Pattern query.query - null query.pagination.limit - null query.pagination.offset - null query.pagination.sortOption.field - null query.pagination.sortOption.reversed - null query.pagination.sortOption.aggregateBy.aggrFunc - UNSET query.pagination.sortOption.aggregateBy.distinct - null confirm - null 30.4.3. Return Type V1DeleteImagesResponse 30.4.4. Content Type application/json 30.4.5. Responses Table 30.4. HTTP Response Codes Code Message Datatype 200 A successful response. 
V1DeleteImagesResponse 0 An unexpected error response. GooglerpcStatus 30.4.6. Samples 30.4.7. Common object reference 30.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.4.7.3. V1DeleteImagesResponse Field Name Required Nullable Type Description Format numDeleted Long int64 dryRun Boolean 30.5. ListImages GET /v1/images ListImages returns all the images that match the input query. 30.5.1. Description 30.5.2. Parameters 30.5.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 30.5.3. Return Type V1ListImagesResponse 30.5.4. Content Type application/json 30.5.5. Responses Table 30.5. 
HTTP Response Codes Code Message Datatype 200 A successful response. V1ListImagesResponse 0 An unexpected error response. GooglerpcStatus 30.5.6. Samples 30.5.7. Common object reference 30.5.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.5.7.3. StorageListImage Field Name Required Nullable Type Description Format id String name String components Integer int32 cves Integer int32 fixableCves Integer int32 created Date date-time lastUpdated Date date-time priority String int64 30.5.7.4. V1ListImagesResponse Field Name Required Nullable Type Description Format images List of StorageListImage 30.6. GetImage GET /v1/images/{id} GetImage returns the image given its ID. 30.6.1. Description 30.6.2. Parameters 30.6.2.1. Path Parameters Name Description Required Default Pattern id X null 30.6.2.2. 
Query Parameters Name Description Required Default Pattern includeSnoozed - null stripDescription - null 30.6.3. Return Type StorageImage 30.6.4. Content Type application/json 30.6.5. Responses Table 30.6. HTTP Response Codes Code Message Datatype 200 A successful response. StorageImage 0 An unexpected error response. GooglerpcStatus 30.6.6. Samples 30.6.7. Common object reference 30.6.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 30.6.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 30.6.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 30.6.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 30.6.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 30.6.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 30.6.7.7. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.6.7.8. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.6.7.8.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.6.7.9. StorageCVSSScore Field Name Required Nullable Type Description Format source StorageSource SOURCE_UNKNOWN, SOURCE_RED_HAT, SOURCE_OSV, SOURCE_NVD, url String cvssv2 StorageCVSSV2 cvssv3 StorageCVSSV3 30.6.7.10. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 30.6.7.11. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 30.6.7.12. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 30.6.7.13. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 30.6.7.14. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 30.6.7.15. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 30.6.7.16. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 30.6.7.17. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 30.6.7.18. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 30.6.7.19. StorageCosignSignature Field Name Required Nullable Type Description Format rawSignature byte[] byte signaturePayload byte[] byte certPem byte[] byte certChainPem byte[] byte 30.6.7.20. StorageDataSource Field Name Required Nullable Type Description Format id String name String mirror String 30.6.7.21. StorageEmbeddedImageScanComponent Field Name Required Nullable Type Description Format name String version String license StorageLicense vulns List of StorageEmbeddedVulnerability layerIndex Integer int32 priority String int64 source StorageSourceType OS, PYTHON, JAVA, RUBY, NODEJS, GO, DOTNETCORERUNTIME, INFRASTRUCTURE, location String topCvss Float float riskScore Float float fixedBy String Component version that fixes all the fixable vulnerabilities in this component. executables List of StorageEmbeddedImageScanComponentExecutable 30.6.7.22. StorageEmbeddedImageScanComponentExecutable Field Name Required Nullable Type Description Format path String dependencies List of string 30.6.7.23. 
StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, cvssMetrics List of StorageCVSSScore nvdCvss Float float 30.6.7.24. StorageEmbeddedVulnerabilityScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 30.6.7.25. StorageImage Field Name Required Nullable Type Description Format id String name StorageImageName names List of StorageImageName This should deprecate the ImageName field long-term, allowing images with the same digest to be associated with different locations. TODO(dhaus): For now, this message will be without search tags due to duplicated search tags otherwise. metadata StorageImageMetadata scan StorageImageScan signatureVerificationData StorageImageSignatureVerificationData signature StorageImageSignature components Integer int32 cves Integer int32 fixableCves Integer int32 lastUpdated Date date-time notPullable Boolean isClusterLocal Boolean priority String int64 riskScore Float float topCvss Float float notes List of StorageImageNote 30.6.7.26. StorageImageLayer Field Name Required Nullable Type Description Format instruction String value String created Date date-time author String empty Boolean 30.6.7.27. StorageImageMetadata Field Name Required Nullable Type Description Format v1 StorageV1Metadata v2 StorageV2Metadata layerShas List of string dataSource StorageDataSource version String uint64 30.6.7.28. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 30.6.7.29. StorageImageNote Enum Values MISSING_METADATA MISSING_SCAN_DATA MISSING_SIGNATURE MISSING_SIGNATURE_VERIFICATION_DATA 30.6.7.30. StorageImageScan Field Name Required Nullable Type Description Format scannerVersion String scanTime Date date-time components List of StorageEmbeddedImageScanComponent operatingSystem String dataSource StorageDataSource notes List of StorageImageScanNote hash String uint64 30.6.7.31. StorageImageScanNote Enum Values UNSET OS_UNAVAILABLE PARTIAL_SCAN_DATA OS_CVES_UNAVAILABLE OS_CVES_STALE LANGUAGE_CVES_UNAVAILABLE CERTIFIED_RHEL_SCAN_UNAVAILABLE 30.6.7.32. StorageImageSignature Field Name Required Nullable Type Description Format signatures List of StorageSignature fetched Date date-time 30.6.7.33. StorageImageSignatureVerificationData Field Name Required Nullable Type Description Format results List of StorageImageSignatureVerificationResult 30.6.7.34. 
StorageImageSignatureVerificationResult Field Name Required Nullable Type Description Format verificationTime Date date-time verifierId String verifier_id correlates to the ID of the signature integration used to verify the signature. status StorageImageSignatureVerificationResultStatus UNSET, VERIFIED, FAILED_VERIFICATION, INVALID_SIGNATURE_ALGO, CORRUPTED_SIGNATURE, GENERIC_ERROR, description String description is set in the case of an error with the specific error's message. Otherwise, this will not be set. verifiedImageReferences List of string The full image names that are verified by this specific signature integration ID. 30.6.7.35. StorageImageSignatureVerificationResultStatus Status represents the status of the result. VERIFIED: VERIFIED is set when the signature's verification was successful. FAILED_VERIFICATION: FAILED_VERIFICATION is set when the signature's verification failed. INVALID_SIGNATURE_ALGO: INVALID_SIGNATURE_ALGO is set when the signature's algorithm is invalid and unsupported. CORRUPTED_SIGNATURE: CORRUPTED_SIGNATURE is set when the raw signature is corrupted, i.e. wrong base64 encoding. GENERIC_ERROR: GENERIC_ERROR is set when an error occurred during verification that cannot be associated with a specific status. Enum Values UNSET VERIFIED FAILED_VERIFICATION INVALID_SIGNATURE_ALGO CORRUPTED_SIGNATURE GENERIC_ERROR 30.6.7.36. StorageLicense Field Name Required Nullable Type Description Format name String type String url String 30.6.7.37. StorageSignature Field Name Required Nullable Type Description Format cosign StorageCosignSignature 30.6.7.38. StorageSource Enum Values SOURCE_UNKNOWN SOURCE_RED_HAT SOURCE_OSV SOURCE_NVD 30.6.7.39. StorageSourceType Enum Values OS PYTHON JAVA RUBY NODEJS GO DOTNETCORERUNTIME INFRASTRUCTURE 30.6.7.40. StorageV1Metadata Field Name Required Nullable Type Description Format digest String created Date date-time author String layers List of StorageImageLayer user String command List of string entrypoint List of string volumes List of string labels Map of string 30.6.7.41. StorageV2Metadata Field Name Required Nullable Type Description Format digest String 30.6.7.42. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 30.6.7.43. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 30.7. ScanImage POST /v1/images/scan ScanImage scans a single image and returns the result 30.7.1. Description 30.7.2. Parameters 30.7.2.1. Body Parameter Name Description Required Default Pattern body V1ScanImageRequest X 30.7.3. Return Type StorageImage 30.7.4. Content Type application/json 30.7.5. Responses Table 30.7. HTTP Response Codes Code Message Datatype 200 A successful response. StorageImage 0 An unexpected error response. GooglerpcStatus 30.7.6. Samples 30.7.7. Common object reference 30.7.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 30.7.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 30.7.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 30.7.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 30.7.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 30.7.7.6. 
EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 30.7.7.7. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.7.7.8. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.7.7.8.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.7.7.9. StorageCVSSScore Field Name Required Nullable Type Description Format source StorageSource SOURCE_UNKNOWN, SOURCE_RED_HAT, SOURCE_OSV, SOURCE_NVD, url String cvssv2 StorageCVSSV2 cvssv3 StorageCVSSV3 30.7.7.10. 
StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 30.7.7.11. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 30.7.7.12. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 30.7.7.13. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 30.7.7.14. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 30.7.7.15. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 30.7.7.16. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 30.7.7.17. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 30.7.7.18. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 30.7.7.19. StorageCosignSignature Field Name Required Nullable Type Description Format rawSignature byte[] byte signaturePayload byte[] byte certPem byte[] byte certChainPem byte[] byte 30.7.7.20. StorageDataSource Field Name Required Nullable Type Description Format id String name String mirror String 30.7.7.21. StorageEmbeddedImageScanComponent Field Name Required Nullable Type Description Format name String version String license StorageLicense vulns List of StorageEmbeddedVulnerability layerIndex Integer int32 priority String int64 source StorageSourceType OS, PYTHON, JAVA, RUBY, NODEJS, GO, DOTNETCORERUNTIME, INFRASTRUCTURE, location String topCvss Float float riskScore Float float fixedBy String Component version that fixes all the fixable vulnerabilities in this component. executables List of StorageEmbeddedImageScanComponentExecutable 30.7.7.22. StorageEmbeddedImageScanComponentExecutable Field Name Required Nullable Type Description Format path String dependencies List of string 30.7.7.23. 
StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, cvssMetrics List of StorageCVSSScore nvdCvss Float float 30.7.7.24. StorageEmbeddedVulnerabilityScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 30.7.7.25. StorageImage Field Name Required Nullable Type Description Format id String name StorageImageName names List of StorageImageName This should deprecate the ImageName field long-term, allowing images with the same digest to be associated with different locations. TODO(dhaus): For now, this message will be without search tags due to duplicated search tags otherwise. metadata StorageImageMetadata scan StorageImageScan signatureVerificationData StorageImageSignatureVerificationData signature StorageImageSignature components Integer int32 cves Integer int32 fixableCves Integer int32 lastUpdated Date date-time notPullable Boolean isClusterLocal Boolean priority String int64 riskScore Float float topCvss Float float notes List of StorageImageNote 30.7.7.26. StorageImageLayer Field Name Required Nullable Type Description Format instruction String value String created Date date-time author String empty Boolean 30.7.7.27. StorageImageMetadata Field Name Required Nullable Type Description Format v1 StorageV1Metadata v2 StorageV2Metadata layerShas List of string dataSource StorageDataSource version String uint64 30.7.7.28. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 30.7.7.29. StorageImageNote Enum Values MISSING_METADATA MISSING_SCAN_DATA MISSING_SIGNATURE MISSING_SIGNATURE_VERIFICATION_DATA 30.7.7.30. StorageImageScan Field Name Required Nullable Type Description Format scannerVersion String scanTime Date date-time components List of StorageEmbeddedImageScanComponent operatingSystem String dataSource StorageDataSource notes List of StorageImageScanNote hash String uint64 30.7.7.31. StorageImageScanNote Enum Values UNSET OS_UNAVAILABLE PARTIAL_SCAN_DATA OS_CVES_UNAVAILABLE OS_CVES_STALE LANGUAGE_CVES_UNAVAILABLE CERTIFIED_RHEL_SCAN_UNAVAILABLE 30.7.7.32. StorageImageSignature Field Name Required Nullable Type Description Format signatures List of StorageSignature fetched Date date-time 30.7.7.33. StorageImageSignatureVerificationData Field Name Required Nullable Type Description Format results List of StorageImageSignatureVerificationResult 30.7.7.34. 
StorageImageSignatureVerificationResult Field Name Required Nullable Type Description Format verificationTime Date date-time verifierId String verifier_id correlates to the ID of the signature integration used to verify the signature. status StorageImageSignatureVerificationResultStatus UNSET, VERIFIED, FAILED_VERIFICATION, INVALID_SIGNATURE_ALGO, CORRUPTED_SIGNATURE, GENERIC_ERROR, description String description is set in the case of an error with the specific error's message. Otherwise, this will not be set. verifiedImageReferences List of string The full image names that are verified by this specific signature integration ID. 30.7.7.35. StorageImageSignatureVerificationResultStatus Status represents the status of the result. VERIFIED: VERIFIED is set when the signature's verification was successful. FAILED_VERIFICATION: FAILED_VERIFICATION is set when the signature's verification failed. INVALID_SIGNATURE_ALGO: INVALID_SIGNATURE_ALGO is set when the signature's algorithm is invalid and unsupported. CORRUPTED_SIGNATURE: CORRUPTED_SIGNATURE is set when the raw signature is corrupted, i.e. wrong base64 encoding. GENERIC_ERROR: GENERIC_ERROR is set when an error occurred during verification that cannot be associated with a specific status. Enum Values UNSET VERIFIED FAILED_VERIFICATION INVALID_SIGNATURE_ALGO CORRUPTED_SIGNATURE GENERIC_ERROR 30.7.7.36. StorageLicense Field Name Required Nullable Type Description Format name String type String url String 30.7.7.37. StorageSignature Field Name Required Nullable Type Description Format cosign StorageCosignSignature 30.7.7.38. StorageSource Enum Values SOURCE_UNKNOWN SOURCE_RED_HAT SOURCE_OSV SOURCE_NVD 30.7.7.39. StorageSourceType Enum Values OS PYTHON JAVA RUBY NODEJS GO DOTNETCORERUNTIME INFRASTRUCTURE 30.7.7.40. StorageV1Metadata Field Name Required Nullable Type Description Format digest String created Date date-time author String layers List of StorageImageLayer user String command List of string entrypoint List of string volumes List of string labels Map of string 30.7.7.41. StorageV2Metadata Field Name Required Nullable Type Description Format digest String 30.7.7.42. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 30.7.7.43. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 30.7.7.44. V1ScanImageRequest Field Name Required Nullable Type Description Format imageName String force Boolean includeSnoozed Boolean cluster String Cluster to delegate scan to, may be the cluster's name or ID. 30.8. UnwatchImage DELETE /v1/watchedimages UnwatchImage marks an image name to no longer be watched. It returns successfully if the image is no longer being watched after the call, irrespective of whether the image was already being watched. 30.8.1. Description 30.8.2. Parameters 30.8.2.1. Query Parameters Name Description Required Default Pattern name The name of the image to unwatch. Should match the name of a previously watched image. - null 30.8.3. Return Type Object 30.8.4. Content Type application/json 30.8.5. Responses Table 30.8. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 30.8.6. Samples 30.8.7. Common object reference 30.8.7.1. 
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.8.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.8.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.9. GetWatchedImages GET /v1/watchedimages GetWatchedImages returns the list of image names that are currently being watched. 30.9.1. Description 30.9.2. Parameters 30.9.3. Return Type V1GetWatchedImagesResponse 30.9.4. Content Type application/json 30.9.5. Responses Table 30.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetWatchedImagesResponse 0 An unexpected error response. GooglerpcStatus 30.9.6. Samples 30.9.7. Common object reference 30.9.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.9.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.9.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.9.7.3. StorageWatchedImage Field Name Required Nullable Type Description Format name String 30.9.7.4. V1GetWatchedImagesResponse Field Name Required Nullable Type Description Format watchedImages List of StorageWatchedImage 30.10. WatchImage POST /v1/watchedimages WatchImage marks an image name as to be watched. 30.10.1. Description 30.10.2. Parameters 30.10.2.1. Body Parameter Name Description Required Default Pattern body V1WatchImageRequest X 30.10.3. Return Type V1WatchImageResponse 30.10.4. Content Type application/json 30.10.5. Responses Table 30.10. HTTP Response Codes Code Message Datatype 200 A successful response. V1WatchImageResponse 0 An unexpected error response. GooglerpcStatus 30.10.6. Samples 30.10.7. Common object reference 30.10.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 30.10.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.10.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 30.10.7.3. V1WatchImageRequest Field Name Required Nullable Type Description Format name String The name of the image. This must be fully qualified, including a tag, but must NOT include a SHA. 30.10.7.4. V1WatchImageResponse Field Name Required Nullable Type Description Format normalizedName String errorType WatchImageResponseErrorType NO_ERROR, INVALID_IMAGE_NAME, NO_VALID_INTEGRATION, SCAN_FAILED, errorMessage String Only set if error_type is NOT equal to \"NO_ERROR\". 30.10.7.5. WatchImageResponseErrorType Enum Values NO_ERROR INVALID_IMAGE_NAME NO_VALID_INTEGRATION SCAN_FAILED
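The Samples subsections above are empty, so the following is a minimal sketch of how the image watching and scanning endpoints could be called from a shell. Only the paths, HTTP methods, and request fields are taken from the reference above; the Central endpoint address, the example image name, and the bearer-token authentication header are assumptions that you would adapt to your environment.

# Hypothetical Central endpoint and API token; both are placeholders, not part of the reference.
ROX_CENTRAL="central.example.com:443"
AUTH="Authorization: Bearer ${ROX_API_TOKEN}"

# WatchImage: the name must be fully qualified and include a tag, but must not include a SHA.
curl -sk -X POST "https://${ROX_CENTRAL}/v1/watchedimages" \
  -H "${AUTH}" -H "Content-Type: application/json" \
  -d '{"name": "quay.io/example/app:1.0"}'

# GetWatchedImages: list the image names that are currently being watched.
curl -sk -H "${AUTH}" "https://${ROX_CENTRAL}/v1/watchedimages"

# ScanImage: scan a single image; the optional cluster field delegates the scan to a named cluster or cluster ID.
curl -sk -X POST "https://${ROX_CENTRAL}/v1/images/scan" \
  -H "${AUTH}" -H "Content-Type: application/json" \
  -d '{"imageName": "quay.io/example/app:1.0", "force": true}'

# UnwatchImage: succeeds whether or not the image was previously being watched.
curl -sk -X DELETE -H "${AUTH}" \
  "https://${ROX_CENTRAL}/v1/watchedimages?name=quay.io/example/app:1.0"

The WatchImage response returns normalizedName, and errorMessage is set only when errorType is not NO_ERROR, so checking errorType in the response body is a reasonable success test.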
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next Tag: 13", "Next Tag: 22", "ScoreVersion can be deprecated ROX-26066", "Next Tag: 19", "If any fields of ImageMetadata are modified including subfields, please check pkg/images/enricher/metadata.go to ensure that those changes will be automatically picked up Next Tag: 6", "Next tag: 8", "Next Tag: 6", "Stream result of v1ExportImageResponse", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next Tag: 13", "Next Tag: 22", "ScoreVersion can be deprecated ROX-26066", "Next Tag: 19", "If any fields of ImageMetadata are modified including subfields, please check pkg/images/enricher/metadata.go to ensure that those changes will be automatically picked up Next Tag: 6", "Next tag: 8", "Next Tag: 6", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next Tag: 13", "Next Tag: 22", "ScoreVersion can be deprecated ROX-26066", "Next Tag: 19", "If any fields of ImageMetadata are modified including subfields, please check pkg/images/enricher/metadata.go to ensure that those changes will be automatically picked up Next Tag: 6", "Next tag: 8", "Next Tag: 6", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/imageservice
Chapter 4. Resolved issues
Chapter 4. Resolved issues The following issue is resolved for this release: Issue: JBCS-1640. Summary: .postinstall duplicates modifications to apachectl for each run. For details of any security fixes in this release, see the errata links in Advisories related to this release.
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_2_release_notes/resolved_issues
Chapter 7. Deploying storage at the edge
Chapter 7. Deploying storage at the edge You can leverage Red Hat OpenStack Platform director to extend distributed compute node deployments to include distributed image management and persistent storage at the edge with the benefits of using Red Hat OpenStack Platform and Ceph Storage. 7.1. Roles for edge deployments with storage The following roles are available for edge deployments with storage. Select the appropriate roles for your environment based on your chosen configuration. 7.1.1. Storage without hyperconverged nodes When you deploy edge with storage, and are not deploying hyperconverged nodes, use one of the following four roles. DistributedCompute The DistributedCompute role is used for the first three compute nodes in storage deployments. The DistributedCompute role includes the GlanceApiEdge service, which ensures that Image services are consumed at the local edge site rather than at the central hub location. For any additional nodes use the DistributedComputeScaleOut role. DistributedComputeScaleOut The DistributedComputeScaleOut role includes the HAproxyEdge service, which enables instances created on the DistributedComputeScaleOut role to proxy requests for Image services to nodes that provide that service at the edge site. After you deploy three nodes with a role of DistributedCompute, you can use the DistributedComputeScaleOut role to scale compute resources. There is no minimum number of hosts required to deploy with the DistributedComputeScaleOut role. CephAll The CephAll role includes the Ceph OSD, Ceph mon, and Ceph Mgr services. You can deploy up to three nodes using the CephAll role. For any additional storage capacity use the CephStorage role. CephStorage The CephStorage role includes the Ceph OSD service. If three CephAll nodes do not provide enough storage capacity, then add as many CephStorage nodes as needed. 7.1.2. Storage with hyperconverged nodes When you are deploying edge with storage, and you plan to have hyperconverged nodes that combine compute and storage, use one of the following two roles. DistributedComputeHCI The DistributedComputeHCI role enables a hyperconverged deployment at the edge by including Ceph Management and OSD services. You must use exactly three nodes when using the DistributedComputeHCI role. DistributedComputeHCIScaleOut The DistributedComputeHCIScaleOut role includes the Ceph OSD service, which allows storage capacity to be scaled with compute when more nodes are added to the edge. This role also includes the HAproxyEdge service to redirect image download requests to the GlanceAPIEdge nodes at the edge site. This role enables a hyperconverged deployment at the edge. You must use exactly three nodes when using the DistributedComputeHCI role. 7.2. Architecture of a DCN edge site with storage To deploy DCN with storage you must also deploy Red Hat Ceph Storage at the central location. You must use the dcn-storage.yaml and cephadm.yaml environment files. For edge sites that include non-hyperconverged Red Hat Ceph Storage nodes, use the DistributedCompute, DistributedComputeScaleOut, CephAll, and CephStorage roles. With block storage at the edge: Red Hat Ceph Block Devices (RBD) is used as an Image (glance) service backend. Multi-backend Image service (glance) is available so that images may be copied between the central and DCN sites. The Block Storage (cinder) service is available at all sites and is accessed by using the Red Hat Ceph Block Devices (RBD) driver.
The Block Storage (cinder) service runs on the Compute nodes, and Red Hat Ceph Storage runs separately on dedicated storage nodes. Nova ephemeral storage is backed by Ceph (RBD). For more information, see Section 5.2, "Deploying the central site with storage" . 7.3. Architecture of a DCN edge site with hyperconverged storage To deploy this configuration you must also deploy Red Hat Ceph Storage at the central location. You need to configure the dcn-storage.yaml and cephadm.yaml environment files. Use the DistributedComputeHCI , and DistributedComputeHCIScaleOut roles. You can also use the DistributedComputeScaleOut role to add Compute nodes that do not participate in providing Red Hat Ceph Storage services. With hyperconverged storage at the edge Red Hat Ceph Block Devices (RBD) is used as an Image (glance) service backend. Multi-backend Image service (glance) is available so that images may be copied between the central and DCN sites. The Block Storage (cinder) service is available at all sites and is accessed by using the Red Hat Ceph Block Devices (RBD) driver. Both the Block Storage service and Red Hat Ceph Storage run on the Compute nodes. For more information, see Section 7.4, "Deploying edge sites with hyperconverged storage" . When you deploy Red Hat OpenStack Platform in a distributed compute architecture, you have the option of deploying multiple storage topologies, with a unique configuration at each site. You must deploy the central location with Red Hat Ceph storage to deploy any of the edge sites with storage. 7.4. Deploying edge sites with hyperconverged storage After you deploy the central site, build out the edge sites and ensure that each edge location connects primarily to its own storage back end, as well as to the storage back end at the central location. A spine and leaf networking configuration should be included with this configuration, with the addition of the storage and storage_mgmt networks that ceph needs. For more information, see Spine Leaf Networking. You must have connectivity between the storage network at the central location and the storage network at each edge site so that you can move Image service (glance) images between sites. Ensure that the central location can communicate with the mons and OSDs at each of the edge sites. However, you should terminate the storage management network at site location boundaries because the storage management network is used for OSD rebalancing. Prerequisites You must create the network_data.yaml file specific to your environment. You can find sample files in /usr/share/openstack-tripleo-heat-templates/network-data-samples . You must create an overcloud-baremetal-deploy.yaml file specific to your environment. For more information see Provisioning bare metal nodes for the overcloud . You have hardware for three Image Service (glance) servers at a central location and in each availability zone, or in each geographic location where storage services are required. At edge locations, the Image service is deployed to the DistributedComputeHCI nodes. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate an environment file ~/dcn0/dcn0-images-env.yaml: Generate the appropriate roles for the dcn0 edge location: Provision networks for the overcloud. This command takes a definition file for overcloud networks as input. You must use the output file in your command to deploy the overcloud: Provision bare metal instances. This command takes a definition file for bare metal nodes as input. 
You must use the output file in your command to deploy the overcloud: If you are deploying the edge site with hyperconverged storage, you must create an initial-ceph.conf configuration file with the following parameters. For more information, see Configuring the Red Hat Ceph Storage cluster for HCI: Use the deployed_metal.yaml file as input to the openstack overcloud ceph deploy command. The openstack overcloud ceph deploy command outputs a yaml file that describes the deployed Ceph cluster: 1 Include initial-ceph.conf only when deploying hyperconverged infrastructure. Configure the naming conventions for your site in the site-name.yaml environment file. The Nova availability zone and the Cinder storage availability zone must match: Configure a glance.yaml template with contents similar to the following: Deploy the stack for the dcn0 location: 7.5. Using a pre-installed Red Hat Ceph Storage cluster at the edge You can configure Red Hat OpenStack Platform to use a pre-existing Ceph cluster. This is called an external Ceph deployment. Prerequisites You must have a preinstalled Ceph cluster that is local to your DCN site so that latency requirements are not exceeded. Procedure Create the following pools in your Ceph cluster. If you are deploying at the central location, include the backups and metrics pools: Replace <_PGnum_> with the number of placement groups. You can use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value. Create the OpenStack client user in Ceph to provide the Red Hat OpenStack Platform environment access to the appropriate pools: Save the provided Ceph client key that is returned. Use this key as the value for the CephClientKey parameter when you configure the undercloud. Note If you run this command at the central location and plan to use Cinder backup or telemetry services, add allow rwx pool=backups, allow pool=metrics to the command. Save the file system ID of your Ceph Storage cluster. The value of the fsid parameter in the [global] section of your Ceph configuration file is the file system ID: Use this value as the value for the CephClusterFSID parameter when you configure the undercloud. On the undercloud, create an environment file to configure your nodes to connect to the unmanaged Ceph cluster. Use a recognizable naming convention, such as ceph-external-<SITE>.yaml where SITE is the location for your deployment, such as ceph-external-central.yaml, ceph-external-dcn1.yaml, and so on. Use the previously saved values for the CephClusterFSID and CephClientKey parameters. Use a comma-delimited list of IP addresses from the Ceph monitors as the value for the CephExternalMonHost parameter. You must select a unique value for the CephClusterName parameter amongst edge sites. Reusing a name will result in the configuration file being overwritten. If you deployed Red Hat Ceph Storage using Red Hat OpenStack Platform director at the central location, then you can export the ceph configuration to an environment file central_ceph_external.yaml. This environment file connects DCN sites to the central hub Ceph cluster, so the information is specific to the Ceph cluster deployed in the previous steps: If the central location has Red Hat Ceph Storage deployed externally, then you cannot use the openstack overcloud export ceph command to generate the central_ceph_external.yaml file.
You must create the central_ceph_external.yaml file manually instead: Create an environment file with similar details about each site with an unmanaged Red Hat Ceph Storage cluster for the central location. The openstack overcloud export ceph command does not work for sites with unmanaged Red Hat Ceph Storage clusters. When you update the central location, this file allows the central location to use the storage clusters at your edge sites as secondary locations. Use the external-ceph.yaml, ceph-external-<SITE>.yaml, and the central_ceph_external.yaml environment files when deploying the overcloud: Redeploy the central location after all edge locations have been deployed. 7.6. Updating the central location After you configure and deploy all of the edge sites using the sample procedure, update the configuration at the central location so that the central Image service can push images to the edge sites. Warning This procedure restarts the Image service (glance) and interrupts any long-running Image service process. For example, if an image is being copied from the central Image service server to a DCN Image service server, that image copy is interrupted and you must restart it. For more information, see Clearing residual data after interrupted Image service processes. Procedure Create a ~/central/glance_update.yaml file similar to the following. This example includes a configuration for two edge sites, dcn0 and dcn1: Create the dcn_ceph.yaml file. In the following example, this file configures the glance service at the central site as a client of the Ceph clusters of the edge sites, dcn0 and dcn1. Redeploy the central site using the original templates and include the newly created dcn_ceph.yaml and glance_update.yaml files. On a controller at the central location, restart the cinder-volume service. If you deployed the central location with the cinder-backup service, then restart the cinder-backup service too: 7.6.1. Clearing residual data after interrupted Image service processes When you restart the central location, any long-running Image service (glance) processes are interrupted. Before you can restart these processes, you must first clean up residual data on the Controller node that you rebooted, and in the Ceph and Image service databases. Procedure Check and clear residual data on the Controller node that was rebooted. Compare the files in the staging store that is configured in the glance-api.conf file with the corresponding images in the Image service database, for example <image_ID>.raw. If these corresponding images show importing status, you must recreate the image. If the images show active status, you must delete the data from staging and restart the copy import. Check and clear residual data in Ceph stores. The images that you cleaned from the staging area must have matching records in their stores property in the Ceph stores that contain the image. The image name in Ceph is the image id in the Image service database. Clear the Image service database. Clear any images that are in importing status from the import jobs that were interrupted: 7.7. Deploying Red Hat Ceph Storage Dashboard on DCN Procedure To deploy the Red Hat Ceph Storage Dashboard to the central location, see Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment. These steps should be completed prior to deploying the central location.
To deploy Red Hat Ceph Storage Dashboard to edge locations, complete the same steps that you completed for the central location, but note the following differences: Ensure that the ManageNetworks parameter has a value of false in your templates for deploying the edge site. When you set ManageNetworks to false, edge sites use the existing networks that were already created in the central stack: You must deploy your own solution for load balancing in order to create a high-availability virtual IP. Edge sites do not deploy haproxy or pacemaker. When you deploy Red Hat Ceph Storage Dashboard to edge locations, the deployment is exposed on the storage network. The dashboard is installed on each of the three DistributedComputeHCI nodes with distinct IP addresses, without a load balancing solution. You can create an additional network to host a virtual IP where the Ceph dashboard can be exposed. You must not reuse network resources for multiple stacks. For more information on reusing network resources, see Reusing network resources in multiple stacks. To create this additional network resource, use the provided network_data_dashboard.yaml heat template. The name of the created network is StorageDashboard. Procedure Log in to Red Hat OpenStack Platform director as the stack user. Generate the DistributedComputeHCIDashboard role and any other roles appropriate for your environment: Include the roles.yaml and the network_data_dashboard.yaml in the overcloud deploy command: Note The deployment provides the three IP addresses where the dashboard is enabled on the storage network. Verification To confirm the dashboard is operational at the central location and that the data it displays from the Ceph cluster is correct, see Accessing Ceph Dashboard. You can confirm that the dashboard is operating at an edge location through similar steps; however, because there is no load balancer at edge locations, the verification differs slightly. Retrieve the dashboard admin login credentials specific to the selected stack: Within the inventory specific to the selected stack, /home/stack/config-download/<stack>/cephadm/inventory.yml, locate the DistributedComputeHCI role hosts list and save all three of the storage_ip values. In the example below, the first two dashboard IPs are 172.16.11.84 and 172.16.11.87: You can check that the Ceph Dashboard is active at one of these IP addresses if they are accessible to you, as sketched below. These IP addresses are on the storage network and are not routed. If these IP addresses are not available, you must configure a load balancer for the three IP addresses that you get from the inventory to obtain a virtual IP address for verification.
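Because there is no load balancer at the edge, you can probe each dashboard address from the inventory directly. The following is a minimal verification sketch: the storage IPs are the example values shown above, and the dashboard port (8443, the cephadm default) is an assumption that may differ in your deployment.

# Example storage IPs from the inventory above; adjust the list and the assumed port for your stack.
for ip in 172.16.11.84 172.16.11.87; do
  curl -sk -o /dev/null -w "Ceph dashboard at ${ip}: HTTP %{http_code}\n" "https://${ip}:8443/"
done

An HTTP 200 (or a redirect to the login page) from any of the addresses indicates that the dashboard is reachable on the storage network from the host where you run the check.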
[ "[stack@director ~]USD source ~/stackrc", "sudo openstack tripleo container image prepare -e containers.yaml --output-env-file /home/stack/dcn0/dcn0-images-env.yaml", "openstack overcloud roles generate DistributedComputeHCI DistributedComputeHCIScaleOut -o ~/dcn0/dcn0_roles.yaml", "(undercloud)USD openstack overcloud network provision --output /home/stack/dcn0/overcloud-networks-deployed.yaml /home/stack/network_data.yaml", "(undercloud)USD openstack overcloud node provision --stack dcn0 --network-config -o /home/stack/dcn0/deployed_metal.yaml /home/stack/overcloud-baremetal-deploy.yaml", "[osd] osd_memory_target_autotune = true osd_numa_auto_affinity = true [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2", "openstack overcloud ceph deploy /home/stack/dcn0/deployed_metal.yaml --stack dcn0 --config ~/dcn0/initial-ceph.conf \\ 1 --output ~/dcn0/deployed_ceph.yaml --container-image-prepare ~/containers.yaml --network-data ~/network-data.yaml --cluster dcn0 --roles-data dcn_roles.yaml", "parameter_defaults: NovaComputeAvailabilityZone: dcn0 ControllerExtraConfig: nova::availability_zone::default_schedule_zone: dcn0 NovaCrossAZAttach: false CinderStorageAvailabilityZone: dcn0 CinderVolumeCluster: dcn0 GlanceBackendID: dcn0", "parameter_defaults: GlanceEnabledImportMethods: web-download,copy-image GlanceBackend: rbd GlanceStoreDescription: 'dcn0 rbd glance store' GlanceBackendID: dcn0 GlanceMultistoreConfig: central: GlanceBackend: rbd GlanceStoreDescription: 'central rbd glance store' CephClusterName: central", "openstack overcloud deploy --deployed-server --stack dcn0 --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/dcn0/dcn0_roles.yaml -n ~/dcn0/network-data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e /home/stack/overcloud-deploy/central/central-export.yaml -e /home/stack/dcn0/deployed_ceph.yaml -e /home/stack/dcn-common/central_ceph_external.yaml -e /home/stack/dcn0/overcloud-vip-deployed.yaml -e /home/stack/dcn0/deployed_metal.yaml -e /home/stack/dcn0/overcloud-networks-deployed.yaml -e ~/control-plane/glance.yaml", "ceph osd pool create volumes <_PGnum_> ceph osd pool create images <_PGnum_> ceph osd pool create vms <_PGnum_> ceph osd pool create backups <_PGnum_> ceph osd pool create metrics <_PGnum_>", "ceph auth add client.openstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'", "[global] fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19", "parameter_defaults: # The cluster FSID CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19' # The CephX user auth key CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==' # The list of IPs or hostnames of the Ceph monitors CephExternalMonHost: '172.16.1.7, 172.16.1.8, 172.16.1.9' # The desired name of the generated key and conf files CephClusterName: dcn1", "sudo -E openstack overcloud export ceph --stack central --output-file /home/stack/dcn-common/central_ceph_external.yaml", "parameter_defaults: CephExternalMultiConfig: - cluster: \"central\" fsid: \"3161a3b4-e5ff-42a0-9f53-860403b29a33\" external_cluster_mon_ips: \"172.16.11.84, 172.16.11.87, 172.16.11.92\" keys: - name: 
\"client.openstack\" caps: mgr: \"allow *\" mon: \"profile rbd\" osd: \"profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images\" key: \"AQD29WteAAAAABAAphgOjFD7nyjdYe8Lz0mQ5Q==\" mode: \"0600\" dashboard_enabled: false ceph_conf_overrides: client: keyring: /etc/ceph/central.client.openstack.keyring", "parameter_defaults: CephExternalMultiConfig: cluster: dcn1 ... cluster: dcn2 ...", "openstack overcloud deploy --stack dcn1 --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/dcn1/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e /home/stack/dnc1/ceph-external-dcn1.yaml . -e /home/stack/overcloud-deploy/central/central-export.yaml -e /home/stack/dcn-common/central_ceph_external.yaml -e /home/stack/dcn1/dcn_ceph_keys.yaml -e /home/stack/dcn1/role-counts.yaml -e /home/stack/dcn1/ceph.yaml -e /home/stack/dcn1/site-name.yaml -e /home/stack/dcn1/tuning.yaml -e /home/stack/dcn1/glance.yaml", "parameter_defaults: GlanceEnabledImportMethods: web-download,copy-image GlanceBackend: rbd GlanceStoreDescription: 'central rbd glance store' CephClusterName: central GlanceBackendID: central GlanceMultistoreConfig: dcn0: GlanceBackend: rbd GlanceStoreDescription: 'dcn0 rbd glance store' CephClientUserName: 'openstack' CephClusterName: dcn0 GlanceBackendID: dcn0 dcn1: GlanceBackend: rbd GlanceStoreDescription: 'dcn1 rbd glance store' CephClientUserName: 'openstack' CephClusterName: dcn1 GlanceBackendID: dcn1", "openstack overcloud export ceph --stack dcn0,dcn1 --output-file ~/central/dcn_ceph.yaml", "openstack overcloud deploy --deployed-server --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/control-plane/central_roles.yaml -n ~/network-data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e /home/stack/central/overcloud-networks-deployed.yaml -e /home/stack/central/overcloud-vip-deployed.yaml -e /home/stack/central/deployed_metal.yaml -e /home/stack/central/deployed_ceph.yaml -e /home/stack/central/dcn_ceph.yaml -e /home/stack/central/glance_update.yaml", "openstack overcloud deploy --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/central/central_roles.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/central/central-images-env.yaml -e ~/central/role-counts.yaml -e ~/central/site-name.yaml -e ~/central/ceph.yaml -e ~/central/ceph_keys.yaml -e ~/central/glance.yaml -e ~/central/dcn_ceph_external.yaml", "ssh tripleo-admin@controller-0 sudo pcs resource restart openstack-cinder-volume ssh tripleo-admin@controller-0 sudo pcs resource restart openstack-cinder-backup", "glance image-delete <image_id>", "parameter_defaults: ManageNetworks: false", "openstack overcloud roles generate DistributedComputeHCIDashboard -o ~/dnc0/roles.yaml", "openstack overcloud deploy --templates -r ~/<dcn>/<dcn_site_roles>.yaml -n 
/usr/share/openstack-tripleo-heat-templates/network_data_dashboard.yaml -e <overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-dashboard.yaml \\", "grep grafana_admin /home/stack/config-download/<stack>/cephadm/cephadm-extra-vars-heat.yml", "DistributedComputeHCI: hosts: dcn1-distributed-compute-hci-0: ansible_host: 192.168.24.16 storage_hostname: dcn1-distributed-compute-hci-0.storage.localdomain storage_ip: 172.16.11.84 dcn1-distributed-compute-hci-1: ansible_host: 192.168.24.22 storage_hostname: dcn1-distributed-compute-hci-1.storage.localdomain storage_ip: 172.16.11.87" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/assembly_deploying-storage-at-the-edge
function::print_regs
function::print_regs Name function::print_regs - Print a register dump Synopsis Arguments None Description This function prints a register dump. It does nothing if no registers are available for the probe point.
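As a usage illustration, print_regs can be called from any probe handler. The probe point below is only an example and assumes a kernel that exposes vfs_read:

# Hypothetical one-liner: dump the register state on entry to vfs_read, then exit
stap -e 'probe kernel.function("vfs_read") { print_regs(); exit() }'

Run this as root or as a user in the stapdev/stapusr groups; the output depends on whether register context is available at the chosen probe point.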
[ "print_regs()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-print-regs
Managing content
Managing content Red Hat Satellite 6.16 Import content from Red Hat and custom sources, manage the application life cycle across lifecycle environments, filter content by using Content Views, synchronize content between Satellite Servers, and more Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/index
Chapter 11. Deploying installer-provisioned clusters on IBM Cloud
Chapter 11. Deploying installer-provisioned clusters on IBM Cloud 11.1. Prerequisites You can use installer-provisioned installation to install OpenShift Container Platform on IBM Cloud(R) nodes. This document describes the prerequisites and procedures when installing OpenShift Container Platform on IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Red Fish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. The provisioning network is required. Installer-provisioned installation of OpenShift Container Platform requires: One provisioner node with Red Hat Enterprise Linux CoreOS (RHCOS) 8.x installed Three control plane nodes One routable network One network for provisioning nodes Before starting an installer-provisioned installation of OpenShift Container Platform on IBM Cloud, address the following prerequisites and requirements. 11.1.1. Setting up IBM Cloud infrastructure To deploy an OpenShift Container Platform cluster on IBM Cloud(R), you must first provision the IBM Cloud nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Red Fish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud deployments. The provisioning network is required. You can customize IBM Cloud nodes using the IBM Cloud API. When creating IBM Cloud nodes, you must consider the following requirements. Use one data center per cluster All nodes in the OpenShift Container Platform cluster must run in the same IBM Cloud data center. Create public and private VLANs Create all nodes with a single public VLAN and a single private VLAN. Ensure subnets have sufficient IP addresses IBM Cloud public VLAN subnets use a /28 prefix by default, which provides 16 IP addresses. That is sufficient for a cluster consisting of three control plane nodes, four worker nodes, and two IP addresses for the API VIP and Ingress VIP on the baremetal network. For larger clusters, you might need a smaller prefix. IBM Cloud private VLAN subnets use a /26 prefix by default, which provides 64 IP addresses. IBM Cloud will use private network IP addresses to access the Baseboard Management Controller (BMC) of each node. OpenShift Container Platform creates an additional subnet for the provisioning network. Network traffic for the provisioning network subnet routes through the private VLAN. For larger clusters, you might need a smaller prefix. Table 11.1. IP addresses per prefix IP addresses Prefix 32 /27 64 /26 128 /25 256 /24 Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. baremetal : The baremetal network is a routable network. You can use any NIC order to interface with the baremetal network, provided it is not the NIC specified in the provisioningNetworkInterface configuration setting or the NIC associated to a node's bootMACAddress configuration setting for the provisioning network. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. 
For example: NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> In the example, NIC1 on all control plane and worker nodes connects to the non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC2 on all control plane and worker nodes connects to the routable baremetal network. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. 2 Note Ensure PXE is enabled on the NIC used for the provisioning network and is disabled on all other NICs. Configuring canonical names Clients access the OpenShift Container Platform cluster nodes over the baremetal network. Configure IBM Cloud subdomains or subzones where the canonical name extension is the cluster name. For example: Creating DNS entries You must create DNS A record entries resolving to unused IP addresses on the public subnet for the following: Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Control plane and worker nodes already have DNS entries after provisioning. The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Provisioner node provisioner.<cluster_name>.<domain> <ip> Master-0 openshift-master-0.<cluster_name>.<domain> <ip> Master-1 openshift-master-1.<cluster_name>.<domain> <ip> Master-2 openshift-master-2.<cluster_name>.<domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<domain> <ip> OpenShift Container Platform includes functionality that uses cluster membership information to generate A records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. Important After provisioning the IBM Cloud nodes, you must create a DNS entry for the api.<cluster_name>.<domain> domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the api.<cluster_name>.<domain> domain name in the external DNS server prevents worker nodes from joining the cluster. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. Configure a DHCP server IBM Cloud does not run DHCP on the public or private VLANs. After provisioning IBM Cloud nodes, you must set up a DHCP server for the public VLAN, which corresponds to OpenShift Container Platform's baremetal network. Note The IP addresses allocated to each node do not need to match the IP addresses allocated by the IBM Cloud provisioning system. See the "Configuring the public subnet" section for details. 
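Before moving on to the BMC configuration, it can help to confirm that the DNS records described above resolve from the provisioner node. The host names below use the example cluster domain from this chapter; substitute your own cluster name and domain:

# Quick sanity check of the required A records (example domain; replace with yours)
dig +short api.test-cluster.example.com
dig +short test.apps.test-cluster.example.com

Both queries should return the public IP addresses that you reserved for the API and Ingress virtual IPs.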
Ensure BMC access privileges The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to OPERATOR so that Ironic can make those changes. In the install-config.yaml file, add the privilegelevel parameter to the URLs used to configure each BMC. See the "Configuring the install-config.yaml file" section for additional details. For example: ipmi://<IP>:<port>?privilegelevel=OPERATOR Alternatively, contact IBM Cloud support and request that they increase the IPMI privileges to ADMINISTRATOR for each node. Create bare metal servers Create bare metal servers in the IBM Cloud dashboard by navigating to Create resource Bare Metal Server . Alternatively, you can create bare metal servers with the ibmcloud CLI utility. For example: USD ibmcloud sl hardware create --hostname <SERVERNAME> \ --domain <DOMAIN> \ --size <SIZE> \ --os <OS-TYPE> \ --datacenter <DC-NAME> \ --port-speed <SPEED> \ --billing <BILLING> See Installing the stand-alone IBM Cloud CLI for details on installing the IBM Cloud CLI. Note IBM Cloud servers might take 3-5 hours to become available. 11.2. Setting up the environment for an OpenShift Container Platform installation 11.2.1. Preparing the provisioner node for OpenShift Container Platform installation on IBM Cloud Perform the following steps to prepare the provisioner node. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms \ --enable=rhel-8-for-x86_64-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . 
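Before installing packages, you can optionally confirm that the two repositories enabled above are active; this is only a verification step:

# Verify that the BaseOS and AppStream repositories are enabled
sudo subscription-manager repos --list-enabled

The output should include rhel-8-for-x86_64-baseos-rpms and rhel-8-for-x86_64-appstream-rpms.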
Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt kni Start firewalld : USD sudo systemctl start firewalld Enable firewalld : USD sudo systemctl enable firewalld Start the http service: USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Set the ID of the provisioner node: USD PRVN_HOST_ID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl hardware list Set the ID of the public subnet: USD PUBLICSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the ID of the private subnet: USD PRIVSUBNETID=<ID> You can view the ID with the following ibmcloud command: USD ibmcloud sl subnet list Set the provisioner node public IP address: USD PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r) Set the CIDR for the public network: USD PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the public network: USD PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR Set the gateway for the public network: USD PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r) Set the private IP address of the provisioner node: USD PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | \ jq .primaryBackendIpAddress -r) Set the CIDR for the private network: USD PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr) Set the IP address and CIDR for the private network: USD PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR Set the gateway for the private network: USD PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r) Set up the bridges for the baremetal and provisioning networks: USD sudo nohup bash -c " nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \"10.0.0.0/8 USDPRIV_GATEWAY\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 " Note For eth1 and eth2 , substitute the appropriate interface name, as needed. 
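Because the bridge setup script above deletes the existing NetworkManager connections and reboots the node ( init 6 ), it is worth double-checking the derived variables before you run it. A small sanity check, assuming the variables were set in the current shell:

# Print the values that the nmcli commands will consume before committing to them
echo "baremetal:    ${PUB_IP_CIDR} via ${PUB_GATEWAY}"
echo "provisioning: 172.22.0.1/24,${PRIV_IP_CIDR} route 10.0.0.0/8 via ${PRIV_GATEWAY}"

If any value is empty or malformed, re-run the corresponding ibmcloud lookup before proceeding.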
If required, SSH back into the provisioner node: # ssh kni@provisioner.<cluster-name>.<domain> Verify the connection bridges have been properly created: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2 Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install on Bare Metal with user-provisioned infrastructure . In step 1, click Download pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 11.2.2. Configuring the public subnet All of the OpenShift Container Platform cluster nodes must be on the public subnet. IBM Cloud(R) does not provide a DHCP server on the subnet. Set it up separately on the provisioner node. You must reset the BASH variables defined when preparing the provisioner node. Rebooting the provisioner node after preparing it will delete the BASH variables previously set. Procedure Install dnsmasq : USD sudo dnf install dnsmasq Open the dnsmasq configuration file: USD sudo vi /etc/dnsmasq.conf Add the following configuration to the dnsmasq configuration file: interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile 1 Set the DHCP range. Replace both instances of <ip_addr> with one unused IP address from the public subnet so that the dhcp-range for the baremetal network begins and ends with the same the IP address. Replace <pub_cidr> with the CIDR of the public subnet. 2 Set the DHCP option. Replace <pub_gateway> with the IP address of the gateway for the baremetal network. Replace <prvn_priv_ip> with the IP address of the provisioner node's private IP address on the provisioning network. Replace <prvn_pub_ip> with the IP address of the provisioner node's public IP address on the baremetal network. To retrieve the value for <pub_cidr> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <pub_gateway> , execute: USD ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r Replace <publicsubnetid> with the ID of the public subnet. To retrieve the value for <prvn_priv_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | \ jq .primaryBackendIpAddress -r Replace <id> with the ID of the provisioner node. To retrieve the value for <prvn_pub_ip> , execute: USD ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r Replace <id> with the ID of the provisioner node. Obtain the list of hardware for the cluster: USD ibmcloud sl hardware list Obtain the MAC addresses and IP addresses for each node: USD ibmcloud sl hardware detail <id> --output JSON | \ jq '.networkComponents[] | \ "\(.primaryIpAddress) \(.macAddress)"' | grep -v null Replace <id> with the ID of the node. Example output "10.196.130.144 00:e0:ed:6a:ca:b4" "141.125.65.215 00:e0:ed:6a:ca:b5" Make a note of the MAC address and IP address of the public network. 
Make a separate note of the MAC address of the private network, which you will use later in the install-config.yaml file. Repeat this procedure for each node until you have all the public MAC and IP addresses for the public baremetal network, and the MAC addresses of the private provisioning network. Add the MAC and IP address pair of the public baremetal network for each node into the dnsmasq.hostsfile file: USD sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile Example input 00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1 ... Replace <mac>,<ip> with the public MAC address and public IP address of the corresponding node name. Start dnsmasq : USD sudo systemctl start dnsmasq Enable dnsmasq so that it starts when booting the node: USD sudo systemctl enable dnsmasq Verify dnsmasq is running: USD sudo systemctl status dnsmasq Example output ● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k Open ports 53 and 67 with UDP protocol: USD sudo firewall-cmd --add-port 53/udp --permanent USD sudo firewall-cmd --add-port 67/udp --permanent Add provisioning to the external zone with masquerade: USD sudo firewall-cmd --change-zone=provisioning --zone=external --permanent This step ensures network address translation for IPMI calls to the management subnet. Reload the firewalld configuration: USD sudo firewall-cmd --reload 11.2.3. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installer to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.9 export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 11.2.4. Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 11.2.5. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information is teaching the installer and the resulting cluster enough about the available IBM Cloud(R) hardware so that it is able to fully manage it. The material difference between installing on bare metal and installing on IBM Cloud is that you must explicitly set the privilege level for IPMI in the BMC section of the install-config.yaml file. Procedure Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey . 
apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: "/dev/sda" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: "/dev/sda" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 3 The bmc.address provides a privilegelevel configuration setting with the value set to OPERATOR . This is required for IBM Cloud. 2 4 Add the MAC address of the private provisioning network NIC for the corresponding node. Note You can use the ibmcloud command-line utility to retrieve the password. USD ibmcloud sl hardware detail <id> --output JSON | \ jq '"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)"' Replace <id> with the ID of the node. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file into the directory: USD cp install-config.yaml ~/clusterconfig Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 11.2.6. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 11.2. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. 
provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIP (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIP (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or pre-configured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIP configuration setting in the install-config.yaml file. The IP address must be from the primary IPv4 network when using dual stack networking. If not set, the installer uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Table 11.3. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the baremetal bridge of the hypervisor attached to the baremetal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . clusterOSImage A URL to override the default operating system for cluster nodes. The URL must include a SHA-256 hash of the image. 
For example, https://mirror.openshift.com/images/rhcos-<version>-openstack.qcow2.gz?sha256=<compressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the baremetal network. If Disabled , you must provide two IP addresses on the baremetal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 11.4. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. 11.2.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 11.5. Subfields Subfield Description deviceName A string containing a Linux device name like /dev/vda . The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 
wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 11.2.8. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 11.2.9. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 11.2.10. Following the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder. USD tail -f /path/to/install-dir/.openshift_install.log
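Once the installer reports completion, you can verify the cluster with the kubeconfig that the installer writes under the install directory. The paths below assume the ~/clusterconfigs directory used throughout this section:

# Minimal post-install check (assumes the install directory used above)
export KUBECONFIG=~/clusterconfigs/auth/kubeconfig
oc get nodes
oc get clusteroperators

All nodes should report Ready, and all cluster Operators should eventually report Available=True.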
[ "<cluster_name>.<domain>", "test-cluster.example.com", "ipmi://<IP>:<port>?privilegelevel=OPERATOR", "ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>", "useradd kni", "passwd kni", "echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni", "chmod 0440 /etc/sudoers.d/kni", "su - kni -c \"ssh-keygen -f /home/kni/.ssh/id_rsa -N ''\"", "su - kni", "sudo subscription-manager register --username=<user> --password=<pass> --auto-attach", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms", "sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool", "sudo usermod --append --groups libvirt kni", "sudo systemctl start firewalld", "sudo systemctl enable firewalld", "sudo firewall-cmd --zone=public --add-service=http --permanent", "sudo firewall-cmd --reload", "sudo systemctl enable libvirtd --now", "PRVN_HOST_ID=<ID>", "ibmcloud sl hardware list", "PUBLICSUBNETID=<ID>", "ibmcloud sl subnet list", "PRIVSUBNETID=<ID>", "ibmcloud sl subnet list", "PRVN_PUB_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryIpAddress -r)", "PUBLICCIDR=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .cidr)", "PUB_IP_CIDR=USDPRVN_PUB_IP/USDPUBLICCIDR", "PUB_GATEWAY=USD(ibmcloud sl subnet detail USDPUBLICSUBNETID --output JSON | jq .gateway -r)", "PRVN_PRIV_IP=USD(ibmcloud sl hardware detail USDPRVN_HOST_ID --output JSON | jq .primaryBackendIpAddress -r)", "PRIVCIDR=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .cidr)", "PRIV_IP_CIDR=USDPRVN_PRIV_IP/USDPRIVCIDR", "PRIV_GATEWAY=USD(ibmcloud sl subnet detail USDPRIVSUBNETID --output JSON | jq .gateway -r)", "sudo nohup bash -c \" nmcli --get-values UUID con show | xargs -n 1 nmcli con delete nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname eth1 master provisioning nmcli connection add ifname baremetal type bridge con-name baremetal nmcli con add type bridge-slave ifname eth2 master baremetal nmcli connection modify baremetal ipv4.addresses USDPUB_IP_CIDR ipv4.method manual ipv4.gateway USDPUB_GATEWAY nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24,USDPRIV_IP_CIDR ipv4.method manual nmcli connection modify provisioning +ipv4.routes \\\"10.0.0.0/8 USDPRIV_GATEWAY\\\" nmcli con down baremetal nmcli con up baremetal nmcli con down provisioning nmcli con up provisioning init 6 \"", "ssh kni@provisioner.<cluster-name>.<domain>", "sudo nmcli con show", "NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eth1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eth1 bridge-slave-eth2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eth2", "vim pull-secret.txt", "sudo dnf install dnsmasq", "sudo vi /etc/dnsmasq.conf", "interface=baremetal except-interface=lo bind-dynamic log-dhcp dhcp-range=<ip_addr>,<ip_addr>,<pub_cidr> 1 dhcp-option=baremetal,121,0.0.0.0/0,<pub_gateway>,<prvn_priv_ip>,<prvn_pub_ip> 2 dhcp-hostsfile=/var/lib/dnsmasq/dnsmasq.hostsfile", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .cidr", "ibmcloud sl subnet detail <publicsubnetid> --output JSON | jq .gateway -r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryBackendIpAddress 
-r", "ibmcloud sl hardware detail <id> --output JSON | jq .primaryIpAddress -r", "ibmcloud sl hardware list", "ibmcloud sl hardware detail <id> --output JSON | jq '.networkComponents[] | \"\\(.primaryIpAddress) \\(.macAddress)\"' | grep -v null", "\"10.196.130.144 00:e0:ed:6a:ca:b4\" \"141.125.65.215 00:e0:ed:6a:ca:b5\"", "sudo vim /var/lib/dnsmasq/dnsmasq.hostsfile", "00:e0:ed:6a:ca:b5,141.125.65.215,master-0 <mac>,<ip>,master-1 <mac>,<ip>,master-2 <mac>,<ip>,worker-0 <mac>,<ip>,worker-1", "sudo systemctl start dnsmasq", "sudo systemctl enable dnsmasq", "sudo systemctl status dnsmasq", "● dnsmasq.service - DNS caching server. Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-10-05 05:04:14 CDT; 49s ago Main PID: 3101 (dnsmasq) Tasks: 1 (limit: 204038) Memory: 732.0K CGroup: /system.slice/dnsmasq.service └─3101 /usr/sbin/dnsmasq -k", "sudo firewall-cmd --add-port 53/udp --permanent", "sudo firewall-cmd --add-port 67/udp --permanent", "sudo firewall-cmd --change-zone=provisioning --zone=external --permanent", "sudo firewall-cmd --reload", "export VERSION=stable-4.9 export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')", "export cmd=openshift-baremetal-install export pullsecret_file=~/pull-secret.txt export extract_dir=USD(pwd)", "curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc", "sudo cp oc /usr/local/bin oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE} sudo cp openshift-baremetal-install /usr/local/bin", "apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public-cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIP: <api_ip> ingressVIP: <wildcard_ip> provisioningNetworkInterface: <NIC1> provisioningNetworkCIDR: <CIDR> hosts: - name: openshift-master-0 role: master bmc: address: ipmi://10.196.130.145?privilegelevel=OPERATOR 1 username: root password: <password> bootMACAddress: 00:e0:ed:6a:ca:b4 2 rootDeviceHints: deviceName: \"/dev/sda\" - name: openshift-worker-0 role: worker bmc: address: ipmi://<out-of-band-ip>?privilegelevel=OPERATOR 3 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> 4 rootDeviceHints: deviceName: \"/dev/sda\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'", "ibmcloud sl hardware detail <id> --output JSON | jq '\"(.networkManagementIpAddress) (.remoteManagementAccounts[0].password)\"'", "mkdir ~/clusterconfigs", "cp install-config.yaml ~/clusterconfig", "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "metadata: name:", "networking: machineNetwork: - cidr:", "compute: - name: worker", "compute: replicas: 2", "controlPlane: name: master", "controlPlane: replicas: 3", "- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: 
deviceName: \"/dev/sda\"", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/deploying-installer-provisioned-clusters-on-ibm-cloud
Chapter 1. Red Hat Certified, validated, and Ansible Galaxy content in automation hub
Chapter 1. Red Hat Certified, validated, and Ansible Galaxy content in automation hub Ansible Certified Content Collections are included in your subscription to Red Hat Ansible Automation Platform. Red Hat Ansible content includes two types of content: Ansible Certified Content Collections and Ansible validated content. Using Ansible automation hub, you can access and curate a unique set of collections from all forms of Ansible content. Red Hat Ansible content contains two types of content: Ansible Certified Content Collections Ansible validated content collections Ansible validated collections are available in your private automation hub through the Platform Installer. When you download Red Hat Ansible Automation Platform with the bundled installer, validated content is pre-populated into the private automation hub by default, but only if you enable the private automation hub as part of the inventory. If you are not using the bundle installer, you can use a Red Hat supplied Ansible playbook to install validated content. For further information, see Ansible validated content . You can update these collections manually by downloading their packages. Why certify Ansible collections? The Ansible certification program enables a shared statement of support for Red Hat Ansible Certified Content between Red Hat and the ecosystem partner. An end customer, experiencing trouble with Ansible and certified partner content, can open a support ticket, for example, a request for information, or a problem with Red Hat, and expect the ticket to be resolved by Red Hat and the ecosystem partner. Red Hat offers go-to-market benefits for Certified Partners to grow market awareness, generate demand, and sell collaboratively. Red Hat Ansible Certified Content Collections are distributed through Ansible automation hub (subscription required), a centralized repository for jointly supported Ansible Content. As a certified partner, publishing collections to Ansible automation hub provides end customers the power to manage how trusted automation content is used in their production environment with a well-known support life cycle. For more information about getting started with certifying a solution, see Red Hat Partner Connect . How do I get a collection certified? For instructions on certifying your collection, see the Ansible certification policy guide on Red Hat Partner Connect . How does the joint support agreement on Certified Collections work? If a customer raises an issue with the Red Hat support team about a certified collection, Red Hat support assesses the issue and checks whether the problem exists within Ansible or Ansible usage. They also check whether the issue is with a certified collection. If there is a problem with the certified collection, support teams transfer the issue to the vendor owner of the certified collection through an agreed upon tool such as TSANet. Can I create and certify a collection containing only Ansible Roles? You can create and certify collections that contain only roles. Current testing requirements are focused on collections containing modules, and additional resources are currently in progress for testing collections only containing roles. Contact [email protected] for more information. 1.1. Synchronizing Ansible Content Collections in automation hub Important As of the 2.4 release you can still synchronize content, but synclists are deprecated, and will be removed in a future version. 
To synchronize content, you can now upload a manually-created requirements file from the rh-certified remote. Remotes are configurations that enable you to synchronize content to your custom repositories from an external collection source. You can use Ansible automation hub to distribute the relevant Red Hat Ansible Certified Content Collections to your users by creating synclists or a requirements file. For more information about using requirements files, see Install multiple collections with a requirements file in the Using Ansible collections guide. 1.1.1. Explanation of Red Hat Ansible Certified Content Collections synclists A synclist is a curated group of Red Hat Certified Collections that is assembled by your organization administrator. It synchronizes with your local Ansible automation hub. Use synclists to manage only the content that you want and exclude unnecessary collections. Design and manage your synclist from the content available as part of Red Hat content on console.redhat.com Each synclist has its own unique repository URL that you can use to designate as a remote source for content in automation hub. You securely access each synclist by using an API token. 1.1.2. Creating a synclist of Red Hat Ansible Certified Content Collections You can create a synclist of curated Red Hat Ansible Certified Content in Ansible automation hub on console.redhat.com. Your synclist repository is located on the automation hub navigation panel under Collection Repositories , which is updated whenever you manage content within Ansible Certified Content Collections. All Ansible Certified Content Collections are included by default in your initial organization synclist. Prerequisites You have a valid Ansible Automation Platform subscription. You have Organization Administrator permissions for console.redhat.com. The following domain names are part of either the firewall or the proxy's allowlist. They are required for successful connection and download of collections from automation hub or Galaxy server: galaxy.ansible.com cloud.redhat.com console.redhat.com sso.redhat.com Ansible automation hub resources are stored in Amazon Simple Storage. The following domain names must be in the allow list: automation-hub-prd.s3.us-east-2.amazonaws.com ansible-galaxy.s3.amazonaws.com SSL inspection is disabled either when using self signed certificates or for the Red Hat domains. Procedure Log in to console.redhat.com . Navigate to Automation Hub Collections . Set the toggle switch on each collection to exclude or include it on your synclist. To initiate the remote repository synchronization, navigate to automation hub and select Collection Repositories . Click the More Actions icon ... and select Sync to initiate the remote repository synchronization to your private automation hub. Optional: If your remote repository is already configured, update the collections content that you made available to local users by manually synchronizing Red Hat Ansible Certified Content Collections to your private automation hub. 1.2. Configuring Ansible automation hub remote repositories to synchronize content Use remote configurations to configure your private automation hub to synchronize with Ansible Certified Content Collections hosted on console.redhat.com or with your collections in Ansible Galaxy. Important As of the 2.4 release you can still synchronize content, but synclists are deprecated, and will be removed in a future version. 
To synchronize content, you can now upload a manually-created requirements file from the rh-certified remote. Remotes are configurations that allow you to synchronize content to your custom repositories from an external collection source. What's the difference between Ansible Galaxy and Ansible automation hub? Collections published to Ansible Galaxy are the latest content published by the Ansible community and have no joint support claims associated with them. Ansible Galaxy is the recommended frontend directory for the Ansible community accessing content. Collections published to Ansible automation hub are targeted for joint customers of Red Hat and selected partners. Customers need an Ansible subscription to access and download collections on Ansible automation hub. A certified collection means that Red Hat and partners have a strategic relationship in place and are ready to support joint customers, and may have had additional testing and validation done against them. How do I request a namespace on Ansible Galaxy? To request a namespace through an Ansible Galaxy GitHub issue, follow these steps: Send an email to [email protected] Include the GitHub username used to sign up on Ansible Galaxy. You must have logged in at least once for the system to validate. After users are added as administrators of the namespace, you can use the self-serve process to add more administrators. Are there any restrictions for Ansible Galaxy namespace naming? Collection namespaces must follow python module name convention. This means collections should have short, all lowercase names. You can use underscores in the collection name if it improves readability. 1.2.1. Reasons to create remote configurations Each remote configuration located in Collections Remotes provides information for both the community and rh-certified repository about when the repository was last updated . You can add new content to Ansible automation hub at any time using the Edit and Sync features included on the Collection Repositories page. 1.2.2. Retrieving the Sync URL and API token for your Red Hat Certified Collection You can synchronize Ansible Certified Content Collections curated by your organization from console.redhat.com to your private automation hub. The API token is a secret token used to protect your content. Prerequisites You have organization administrator permissions to create the synclist on console.redhat.com. Procedure Log in to console.redhat.com as an organization administrator. Navigate to Automation Hub Connect to Hub . Under Offline token , click Load token . Click Copy to clipboard to copy the API token. Paste the API token into a file and store in a secure location. 1.2.3. Configuring the rh-certified remote repository and synchronizing Red Hat Ansible Certified Content Collection You can edit the rh-certified remote repository to synchronize collections from automation hub hosted on console.redhat.com to your private automation hub. By default, your private automation hub rh-certified repository includes the URL for the entire group of Ansible Certified Content Collections. To use only those collections specified by your organization, a private automation hub administrator can upload manually-created requirements files from the rh-certified remote. For more information about using requirements files, see Install multiple collections with a requirements file in the Using Ansible collections guide. 
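For reference, the requirements file used in this workflow follows the standard ansible-galaxy format. A minimal sketch is shown below; the collection names are purely illustrative, so list the collections your organization actually needs:

# Hypothetical requirements.yml listing the collections to synchronize
cat > requirements.yml <<'EOF'
collections:
  - name: ansible.posix
  - name: community.general
EOF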
If you have collections A , B , and C in your requirements file, and a new collection X is added to console.redhat.com that you want to use, you must add X to your requirements file for private automation hub to synchronize it. Prerequisites You have valid Modify Ansible repo content permissions. For more information on permissions, see Configuring user access for your private automation hub . You have retrieved the Sync URL and API Token from the automation hub hosted service on console.redhat.com. You have configured access to port 443. This is required for synchronizing certified collections. For more information, see the automation hub table in the Network ports and protocols chapter of the Red Hat Ansible Automation Platform Planning Guide. Procedure Log in to your private automation hub. From the navigation panel, select Collections Remotes . In the rh-certified remote repository, click the More Actions icon ... and click Edit . In the URL field, paste the Sync URL . In the Token field, paste the token you acquired from console.redhat.com. Click Save . You can now synchronize collections between your organization synclist on console.redhat.com and your private automation hub. Click the More Actions icon ... and select Sync . Verification The Sync status notification updates to notify you that the Red Hat Certified Content Collections synchronization is complete. Select Red Hat Certified from the collections content drop-down list to confirm that your collections content has synchronized successfully. 1.2.4. Configuring the community remote repository and syncing Ansible Galaxy collections You can edit the community remote repository to synchronize chosen collections from Ansible Galaxy to your private automation hub. By default, your private automation hub community repository directs to galaxy.ansible.com/api/ . Prerequisites You have Modify Ansible repo content permissions. For more information on permissions, see Configuring user access for your private automation hub . You have a requirements.yml file that identifies those collections to synchronize from Ansible Galaxy as in the following example: Requirements.yml example Procedure Log in to automation hub. From the navigation panel, select Collections Remotes . In the Community remote, click the More Actions icon ... and select Edit . In the YAML requirements field, click Browse and locate the requirements.yml file on your local machine. Click Save . You can now synchronize collections identified in your requirements.yml file from Ansible Galaxy to your private automation hub. Click the More Actions icon ... and select Sync to sync collections from Ansible Galaxy and Ansible automation hub. Verification The Sync status notification updates to notify you of completion or failure of Ansible Galaxy collections synchronization to your Ansible automation hub. Select Community from the collections content drop-down list to confirm successful synchronization. 1.2.5. Configuring proxy settings If your private automation hub is behind a network proxy, you can configure proxy settings on the remote to sync content located outside of your local network. Prerequisites You have valid Modify Ansible repo content permissions. For more information on permissions, see Configuring user access for your private automation hub . You have a proxy URL and credentials from your local network administrator. Procedure Log in to private automation hub. From the navigation panel, select Collections Remotes . 
In either the rh-certified or Community remote, click the More Actions icon ... and select Edit . Expand the Show advanced options drop-down menu. Enter your proxy URL, proxy username, and proxy password in the appropriate fields. Click Save . 1.3. Collections and content signing in private automation hub As an automation administrator for your organization, you can configure private automation hub for signing and publishing Ansible content collections from different groups within your organization. For additional security, automation creators can configure Ansible-Galaxy CLI to verify these collections to ensure that they have not been changed after they were uploaded to automation hub. 1.3.1. Configuring content signing on private automation hub To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing. Prerequisites Your GnuPG key pairs have been securely set up and managed by your organization. Your public-private key pair has proper access for configuring content signing on private automation hub. Procedure Create a signing script that accepts only a filename. Note This script acts as the signing service and must generate an ascii-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable. The script prints out a JSON structure with the following format. {"file": "filename", "signature": "filename.asc"} All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature. Example: The following script produces signatures for content: #!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH="USD1.asc" ADMIN_ID="USDPULP_SIGNING_KEY_FINGERPRINT" PASSWORD="password" # Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \ USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID \ --armor --output USDSIGNATURE_PATH USDFILE_PATH # Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\"file\": \"USDFILE_PATH\", \"signature\": \"USDSIGNATURE_PATH\"} else exit USDSTATUS fi After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections. Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_* . [all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh The two new keys ( automationhub_auto_sign_collections and automationhub_require_content_approval ) indicate that the collections must be signed and approved after they are uploaded to private automation hub. 1.3.2. Using content signing services in private automation hub After you have configured content signing on your private automation hub, you can manually sign a new collection or replace an existing signature with a new one. When users download a specific collection, this signature indicates that the collection is for them and has not been modified after certification. 
You can use content signing on private automation hub in the following scenarios: Your system does not have automatic signing configured and you must use a manual signing process to sign collections. The current signatures on the automatically configured collections are corrupted and need new signatures. You need additional signatures for previously signed content. You want to rotate signatures on your collections. Procedure Log in to your Ansible Automation Platform. From the navigation panel, select Collections Approval . The Approval dashboard opens and displays a list of collections. Click Sign and approve for each collection that you want to sign. Verification Verify that the collections you signed and manually approved are displayed in the Collections tab. 1.3.3. Downloading signature public keys After you sign and approve collections, download the signature public keys from the automation hub UI. You must download the public key before you add it to the local system keyring. Procedure Log in to your automation hub. From the navigation panel, select Signature Keys . The Signature Keys dashboard displays a list of multiple keys: collections and container images. To verify collections, download the key prefixed with collections- . To verify container images, download the key prefixed with container- . Choose one of the following methods to download your public key: Select the menu icon and click Download Key to download the public key. Select the public key from the list and click the Copy to clipboard icon. Click the drop-down menu under the Public Key tab and copy the entire public key block. Use the public key that you copied to verify the content collection that you are installing. 1.3.4. Configuring Ansible-Galaxy CLI to verify collections You can configure Ansible-Galaxy CLI to verify collections. This ensures that downloaded collections are approved by your organization and have not been changed after they were uploaded to automation hub. If a collection has been signed by automation hub, the server provides ASCII armored, GPG-detached signatures to verify the authenticity of MANIFEST.json before using it to verify the collection's contents. You must opt into signature verification by configuring a keyring for ansible-galaxy or providing the path with the --keyring option. Prerequisites Signed collections are available in automation hub to verify signature. Certified collections can be signed by approved roles within your organization. Public key for verification has been added to the local system keyring. Procedure To import a public key into a non-default keyring for use with ansible-galaxy , run the following command. gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc Note In addition to any signatures provided by the automation hub, signature sources can also be provided in the requirements file and on the command line. Signature sources should be URIs. To verify the collection name provided on the CLI with an additional signature, run the following command: ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx You can use this option multiple times to provide multiple signatures. Confirm that the collections in a requirements file list any additional signature sources following the collection's signatures key, as in the following example. 
# requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx When you install a collection from automation hub, the signatures provided by the server are saved along with the installed collections to verify the collection's authenticity. (Optional) If you need to verify the internal consistency of your collection again without querying the Ansible Galaxy server, run the same command you used previously using the --offline option. Are there any recommendations for collection naming? Create a collection with company_name.product format. This format means that multiple products can have different collections under the company namespace. How do I get a namespace on Ansible automation hub? By default namespaces used on Ansible Galaxy are also used on Ansible automation hub by the Ansible partner team. For any queries and clarifications contact [email protected] . 1.4. Ansible validated content Red Hat Ansible Automation Platform includes Ansible validated content, which complements existing Red Hat Ansible Certified Content. Ansible validated content provides an expert-led path for performing operational tasks on a variety of platforms from both Red Hat and our trusted partners. 1.4.1. Configuring validated collections with the installer When you download and run the bundle installer, certified and validated collections are automatically uploaded. Certified collections are uploaded into the rh-certified repository. Validated collections are uploaded into the validated repository. You can change to default configuration by using two variables: automationhub_seed_collections is a boolean that defines whether or not preloading is enabled. automationhub_collection_seed_repository . A variable that enables you to specify the type of content to upload when it is set to true . Possible values are certified or validated . If missing both content sets will be uploaded. 1.4.2. Installing validated content using the tarball If you are not using the bundle installer, you can use a standalone tarball, ansible-validated-content-bundle-1.tar.gz . You can also use this standalone tarball later to update validated contents in any environment, when a newer tarball becomes available, without having to re-run the bundle installer. Prerequisites You require the following variables to run the playbook. Name Description automationhub_admin_password Your administration password. automationhub_api_token The API token generated for your automation hub. automationhub_main_url For example, https://automationhub.example.com automationhub_require_content_approval Boolean ( true or false ) This must match the value used during automation hub deployment. This variable is set to true by the installer. Procedure To obtain the tarball, navigate to the Red Hat Ansible Automation Platform download page and select Ansible Validated Content . Upload the content and define the variables (this example uses automationhub_api_token ): ansible-playbook collection_seed.yml -e automationhub_api_token=<api_token> -e automationhub_main_url=https://automationhub.example.com -e automationhub_require_content_approval=true Note Use either automationhub_admin_password or automationhub_api_token , not both. When complete, the collections are visible in the validated collection section of private automation hub. 
Users can now view and download collections from your private automation hub. Additional resources For more information about running Ansible playbooks, see ansible-playbook.
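The note above states that you should pass either automationhub_admin_password or automationhub_api_token, but not both. For completeness, a sketch of the same seeding command using the administration password instead of the API token would look roughly like this (the password value is a placeholder):

  ansible-playbook collection_seed.yml -e automationhub_admin_password=<admin_password> -e automationhub_main_url=https://automationhub.example.com -e automationhub_require_content_approval=true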
[ "collections: # Install a collection from Ansible Galaxy. - name: community.aws version: 5.2.0 source: https://galaxy.ansible.com", "{\"file\": \"filename\", \"signature\": \"filename.asc\"}", "#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi", "[all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh", "gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc", "ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx", "requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx", "ansible-playbook collection_seed.yml -e automationhub_api_token=<api_token> -e automationhub_main_url=https://automationhub.example.com -e automationhub_require_content_approval=true" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/managing_content_in_automation_hub/managing-cert-valid-content
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights_overview/1-latest/html/release_notes/proc-providing-feedback-on-redhat-documentation
6.3. Performing a Driver Update During Installation
6.3. Performing a Driver Update During Installation At the very beginning of the installation process, you can perform a driver update in the following ways: let the installation program automatically find and offer a driver update for installation, let the installation program prompt you to locate a driver update, manually specify a path to a driver update image or an RPM package. Important Always make sure to put your driver update discs on a standard disk partition. Advanced storage, such as RAID or LVM volumes, might not be accessible during the early stage of the installation when you perform driver updates. 6.3.1. Automatic Driver Update To have the installation program automatically recognize a driver update disc, connect a block device with the OEMDRV volume label to your computer before starting the installation process. Note Starting with Red Hat Enterprise Linux 7.2, you can also use the OEMDRV block device to automatically load a Kickstart file. This file must be named ks.cfg and placed in the root of the device to be loaded. See Chapter 27, Kickstart Installations for more information about Kickstart installations. When the installation begins, the installation program detects all available storage connected to the system. If it finds a storage device labeled OEMDRV , it will treat it as a driver update disc and attempt to load driver updates from this device. You will be prompted to select which drivers to load: Figure 6.1. Selecting a Driver Use number keys to toggle selection on individual drivers. When ready, press c to install the selected drivers and proceed to the Anaconda graphical user interface. 6.3.2. Assisted Driver Update It is always recommended to have a block device with the OEMDRV volume label available to install a driver during installation. However, if no such device is detected and the inst.dd option was specified at the boot command line, the installation program lets you find the driver disk in interactive mode. In the first step, select a local disk partition from the list for Anaconda to scan for ISO files. Then, select one of the detected ISO files. Finally, select one or more available drivers. The image below demonstrates the process in the text user interface with individual steps highlighted. Figure 6.2. Selecting a Driver Interactively Note If you extracted your ISO image file and burned it on a CD or DVD but the media does not have the OEMDRV volume label, either use the inst.dd option with no arguments and use the menu to select the device, or use the following boot option for the installation program to scan the media for drivers: Hit number keys to toggle selection on individual drivers. When ready, press c to install the selected drivers and proceed to the Anaconda graphical user interface. 6.3.3. Manual Driver Update For manual driver installation, prepare an ISO image file containing your drivers to an accessible location, such a USB flash drive or a web server, and connect it to your computer. At the welcome screen, hit Tab to display the boot command line and append the inst.dd= location to it, where location is a path to the driver update disc: Figure 6.3. Specifying a Path to a Driver Update Typically, the image file is located on a web server (for example, http://server.example.com/dd.iso ) or on a USB flash drive (for example, /dev/sdb1 ). It is also possible to specify an RPM package containing the driver update (for example http://server.example.com/dd.rpm ). When ready, hit Enter to execute the boot command. 
Then, your selected drivers will be loaded and the installation process will proceed normally. 6.3.4. Blacklisting a Driver A malfunctioning driver can prevent a system from booting normally during installation. When this happens, you can disable (or blacklist) the driver by customizing the boot command line. At the boot menu, display the boot command line by hitting the Tab key. Then, append the modprobe.blacklist= driver_name option to it. Replace driver_name with the names of the driver or drivers that you want to disable, for example: Note that the drivers blacklisted during installation using the modprobe.blacklist= boot option will remain disabled on the installed system and appear in the /etc/modprobe.d/anaconda-blacklist.conf file. See Chapter 23, Boot Options for more information about blacklisting drivers and other boot options.
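Putting these options together, a boot command line edited for a driver update can carry both the driver update location and a blacklist entry. The following fragment is only an illustration that reuses the example values from this chapter (an ISO on a web server and the ahci driver); append it to the existing boot command line rather than replacing it:

  inst.dd=http://server.example.com/dd.iso modprobe.blacklist=ahci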
[ "inst.dd=/dev/sr0", "modprobe.blacklist=ahci" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-driver-updates-performing-x86
Chapter 4. Support for FIPS cryptography
Chapter 4. Support for FIPS cryptography You can install an OpenShift Container Platform cluster in FIPS mode. OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL 8 computer that is configured to operate in FIPS mode. Running RHEL 9 with FIPS mode enabled to install an OpenShift Container Platform cluster is not possible. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 4.1. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS validated or Modules In Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL8 core crypto components . For example, when users use SSH to connect to OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 4.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.15 Attributes Limitations FIPS support in RHEL 8 and RHCOS operating systems. The FIPS implementation does not offer a single function that both computes hash functions and validates the keys that are based on that hash. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS validated or Modules In Process cryptographic module and algorithms that are obtained from RHEL 8 and RHCOS binaries and images. Use of FIPS compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using x86_64 , ppc64le , and s390x architectures. 4.2. 
FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS validated or Modules In Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS validated or Modules In Process modules for cryptography. 4.2.1. etcd To ensure that the secrets that are stored in etcd use FIPS validated or Modules In Process encryption, boot the node in FIPS mode. After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 4.2.2. Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS validated or Modules In Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 4.2.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS validated or Modules In Process cryptography modules, use CRI-O to manage your runtimes. 4.3. Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . Amazon Web Services Alibaba Cloud Microsoft Azure Bare metal Google Cloud Platform IBM Cloud(R) IBM Power(R) IBM Z(R) and IBM(R) LinuxONE IBM Z(R) and IBM(R) LinuxONE with RHEL KVM Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Enabling FIPS Mode in the RHEL 8 documentation.
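As an illustration of the fips: true setting described in this chapter, a fragment of an install-config.yaml file might look like the following sketch. Everything apart from the fips field (the base domain, cluster name, platform details, and credential placeholders) is an assumption that depends on your environment and chosen infrastructure:

  apiVersion: v1
  baseDomain: example.com
  metadata:
    name: fips-cluster
  platform:
    aws:
      region: us-east-1
  fips: true
  pullSecret: '<pull_secret>'
  sshKey: '<ssh_key_value>'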
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installation_overview/installing-fips
Chapter 2. Projects
Chapter 2. Projects 2.1. Working with projects A project allows a community of users to organize and manage their content in isolation from other communities. Note Projects starting with openshift- and kube- are default projects . These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. 2.1.1. Creating a project using the web console If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- using the web console. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. Procedure Navigate to Home Projects . Click Create Project . Enter your project details. Click Create . 2.1.2. Creating a project using the Developer perspective in the web console You can use the Developer perspective in the OpenShift Container Platform web console to create a project in your cluster. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the Developer perspective. Cluster administrators can create these projects using the oc adm new-project command. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. Prerequisites Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Container Platform. Procedure You can create a project using the Developer perspective, as follows: Click the Project drop-down menu to see a list of all available projects. Select Create Project . Figure 2.1. Create project In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display Name and Description details for the project. Click Create . Use the left navigation panel to navigate to the Project view and see the dashboard for your project. Optional: Use the Project drop-down menu at the top of the screen and select all projects to list all of the projects in your cluster. Use the Details tab to see the project details. If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke administrator , edit , and view privileges for the project. 2.1.3. Creating a project using the CLI If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. 
As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these Projects using the oc adm new-project command. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. Procedure Run: USD oc new-project <project_name> \ --description="<description>" --display-name="<display_name>" For example: USD oc new-project hello-openshift \ --description="This is an example project" \ --display-name="Hello OpenShift" Note The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one. 2.1.4. Viewing a project using the web console Procedure Navigate to Home Projects . Select a project to view. On this page, click Workloads to see workloads in the project. 2.1.5. Viewing a project using the CLI When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy. Procedure To view a list of projects, run: USD oc get projects You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content: USD oc project <project_name> 2.1.6. Providing access permissions to your project using the Developer perspective You can use the Project view in the Developer perspective to grant or revoke access permissions to your project. Procedure To add users to your project and provide Admin , Edit , or View access to them: In the Developer perspective, navigate to the Project view. In the Project page, select the Project Access tab. Click Add Access to add a new row of permissions to the default ones. Figure 2.2. Project permissions Enter the user name, click the Select a role drop-down list, and select an appropriate role. Click Save to add the new permissions. You can also use: The Select a role drop-down list, to modify the access permissions of an existing user. The Remove Access icon, to completely remove the access permissions of an existing user to the project. Note Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective. 2.1.7. Adding to a project Procedure Select Developer from the context selector at the top of the web console navigation menu. Click +Add At the top of the page, select the name of the project that you want to add to. Click on a method for adding to your project, and then follow the workflow. Note You can also add components to the topology using quick search. 2.1.8. Checking project status using the web console Procedure Navigate to Home Projects . Select a project to see its status. 2.1.9. Checking project status using the CLI Procedure Run: USD oc status This command provides a high-level overview of the current project, with its components and their relationships. 2.1.10. Deleting a project using the web console You can delete a project by using the OpenShift Container Platform web console. Note If you do not have permissions to delete the project, the Delete Project option is not available. Procedure Navigate to Home Projects . Locate the project that you want to delete from the list of projects. 
On the far right side of the project listing, select Delete Project from the Options menu . When the Delete Project pane opens, enter the name of the project that you want to delete in the field. Click Delete . 2.1.11. Deleting a project using the CLI When you delete a project, the server updates the project status to Terminating from Active . Then, the server clears all content from a project that is in the Terminating state before finally removing the project. While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console. Procedure Run: USD oc delete project <project_name> 2.2. Creating a project as another user Impersonation allows you to create a project as a different user. 2.2.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 2.2.2. Impersonating a user when you create a project You can impersonate a different user when you create a project request. Because system:authenticated:oauth is the only bootstrap group that can create project requests, you must impersonate that group. Procedure To create a project request on behalf of a different user: USD oc new-project <project> --as=<user> \ --as-group=system:authenticated --as-group=system:authenticated:oauth 2.3. Configuring project creation In OpenShift Container Platform, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or oc new-project command, an endpoint in OpenShift Container Platform is used to provision the project according to a template, which can be customized. As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision , their own projects. 2.3.1. About project creation The OpenShift Container Platform API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster's project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project. When a project request is submitted, the API substitutes the following parameters into the template: Table 2.1. Default project template parameters Parameter Description PROJECT_NAME The name of the project. Required. PROJECT_DISPLAYNAME The display name of the project. May be empty. PROJECT_DESCRIPTION The description of the project. May be empty. PROJECT_ADMIN_USER The user name of the administrating user. PROJECT_REQUESTING_USER The user name of the requesting user. Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default. 2.3.2. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. 
The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Global Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestTemplate: name: <template_name> After you save your changes, create a new project to verify that your changes were successfully applied. 2.3.3. Disabling project self-provisioning You can prevent an authenticated user group from self-provisioning new projects. Procedure Log in as a user with cluster-admin privileges. View the self-provisioners cluster role binding usage by running the following command: USD oc describe clusterrolebinding.rbac self-provisioners Example output Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth Review the subjects in the self-provisioners section. Remove the self-provisioner cluster role from the group system:authenticated:oauth . If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}' If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command: USD oc adm policy \ remove-cluster-role-from-group self-provisioner \ system:authenticated:oauth Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state. To update the role binding using the CLI: Run the following command: USD oc edit clusterrolebinding.rbac self-provisioners In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false , as shown in the following example: apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" ... To update the role binding by using a single command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }' Log in as an authenticated user and verify that it can no longer self-provision a project: USD oc new-project test Example output Error from server (Forbidden): You may not request a new project via this API. Consider customizing this project request message to provide more helpful instructions specific to your organization. 2.3.4. 
Customizing the project request message When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default: You may not request a new project via this API. Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example: To request a project, contact your system administrator at [email protected] . To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request . To customize the project request message: Procedure Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Global Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Log in as a user with cluster-admin privileges. Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestMessage parameter and set the value to your custom message: Project configuration resource with custom project request message apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestMessage: <message_string> For example: apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]. After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied.
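As a rough check that the custom message is applied, a developer or service account that cannot self-provision projects can attempt to create one; the request should fail with the configured text rather than the default error. The output below is illustrative and depends on the message you set:

  oc new-project test
  Error from server (Forbidden): To request a project, contact your system administrator at [email protected].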
[ "oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"", "oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"", "oc get projects", "oc project <project_name>", "oc status", "oc delete project <project_name>", "oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc describe clusterrolebinding.rbac self-provisioners", "Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth", "oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc edit clusterrolebinding.rbac self-provisioners", "apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"", "oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'", "oc new-project test", "Error from server (Forbidden): You may not request a new project via this API.", "You may not request a new project via this API.", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/applications/projects
Chapter 11. Backing up an XFS file system
Chapter 11. Backing up an XFS file system As a system administrator, you can use the xfsdump utility to back up an XFS file system into a file or on a tape. This provides a simple backup mechanism. 11.1. Features of XFS backup This section describes key concepts and features of backing up an XFS file system with the xfsdump utility. You can use the xfsdump utility to: Perform backups to regular file images. Only one backup can be written to a regular file. Perform backups to tape drives. The xfsdump utility also enables you to write multiple backups to the same tape. A backup can span multiple tapes. To back up multiple file systems to a single tape device, simply write the backup to a tape that already contains an XFS backup. This appends the new backup to the existing one. By default, xfsdump never overwrites existing backups. Create incremental backups. The xfsdump utility uses dump levels to determine a base backup to which other backups are relative. Numbers from 0 to 9 refer to increasing dump levels. An incremental backup only backs up files that have changed since the last dump of a lower level: To perform a full backup, perform a level 0 dump on the file system. A level 1 dump is the first incremental backup after a full backup. The next incremental backup would be level 2, which only backs up files that have changed since the last level 1 dump; and so on, to a maximum of level 9. Exclude files from a backup using size, subtree, or inode flags to filter them. Additional resources xfsdump(8) man page on your system 11.2. Backing up an XFS file system with xfsdump This procedure describes how to back up the content of an XFS file system into a file or a tape. Prerequisites An XFS file system that you can back up. Another file system or a tape drive where you can store the backup. Procedure Use the following command to back up an XFS file system: Replace level with the dump level of your backup. Use 0 to perform a full backup or 1 to 9 to perform subsequent incremental backups. Replace backup-destination with the path where you want to store your backup. The destination can be a regular file, a tape drive, or a remote tape device. For example, /backup-files/Data.xfsdump for a file or /dev/st0 for a tape drive. Replace path-to-xfs-filesystem with the mount point of the XFS file system you want to back up. For example, /mnt/data/ . The file system must be mounted. When backing up multiple file systems and saving them on a single tape device, add a session label to each backup using the -L label option so that it is easier to identify them when restoring. Replace label with any name for your backup: for example, backup_data . Example 11.1. Backing up multiple XFS file systems To back up the content of XFS file systems mounted on the /boot/ and /data/ directories and save them as files in the /backup-files/ directory: To back up multiple file systems on a single tape device, add a session label to each backup using the -L label option: Additional resources xfsdump(8) man page on your system
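Building on the dump levels described above, an incremental backup uses the same command syntax with a higher level number. A sketch of a level 1 dump of the /data file system, which captures only the changes made since the last level 0 dump, might look like this (the destination file name is arbitrary):

  xfsdump -l 1 -f /backup-files/data-level1.xfsdump /data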
[ "xfsdump -l level [-L label ] \\ -f backup-destination path-to-xfs-filesystem", "xfsdump -l 0 -f /backup-files/boot.xfsdump /boot xfsdump -l 0 -f /backup-files/data.xfsdump /data", "xfsdump -l 0 -L \"backup_boot\" -f /dev/st0 /boot xfsdump -l 0 -L \"backup_data\" -f /dev/st0 /data" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/backing-up-an-xfs-file-system_managing-file-systems
32.2. Configuring SELinux User Map Order and Defaults
32.2. Configuring SELinux User Map Order and Defaults An SELinux user map is the association between an SELinux user on a client and an IdM user. The available SELinux user map order is part of the IdM server configuration. The SELinux user map order is a list of the SELinux users, in an order from the most to the least confined. The SELinux user entry itself has this format: The individual user entries are separated with a dollar sign (USD). Since there is no requirement on user entries to have an SELinux map, many entries might be unmapped. The IdM server configuration sets a default SELinux user, one of the users from the total SELinux map list, to use for unmapped IdM user entries. This way, even unmapped IdM users have a functional SELinux context. The default SELinux user for unmapped IdM user entries is unconfined_u , the default SELinux user for system users on Red Hat Enterprise Linux. This configuration defines the map order of available system SELinux users. This does not define any IdM user SELinux policies. The IdM user - SELinux user map must be defined and then users are added to the map. For details, see Section 32.3, "Mapping SELinux Users and IdM Users" . 32.2.1. In the Web UI In the top menu, click the IPA Server main tab and the Configuration subtab. Scroll to the bottom of the list of server configuration areas, to SELINUX OPTIONS . Edit the SELinux user configuration, the SELinux user map order , the Default SELinux user , or both. Click the Update link at the top of the page to save the changes. 32.2.2. In the CLI To view the list of SELinux users, set in the IdM server configuration, which are available to be mapped: To edit the SELinux user settings, use the config-mod command: Example 32.1. List of SELinux Users To edit the list of SELinux users to be available for mapping, use the --ipaselinuxusermaporder option. The list orders the SELinux users from the most to the least confined, for example: Note The default SELinux user, used for unmapped entries, must be included in the user map list or the edit operation fails. Likewise, if the default is edited, it must be changed to a user in the SELinux map list or the map list must be updated first. Example 32.2. Default SELinux User IdM users are not required to have a specific SELinux user mapped to their account. However, the local system still checks the IdM entry for an SELinux user to use for the IdM user account. To modify the default SELinux user, use the --ipaselinuxusermapdefault option. For example:
[ "SELinux_user:MLS[:MCS]", "[user1]@server ~]USD ipa config-show SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023", "[user1@server ~]USD ipa config-mod --ipaselinuxusermaporder=\"unconfined_u:s0-s0:c0.c1023USDguest_u:s0USDxguest_u:s0USDuser_u:s0-s0:c0.c1023USDstaff_u:s0-s0:c0.c1023\"", "[user1@server ~]USD ipa config-mod --ipaselinuxusermapdefault=\"guest_u:s0\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/config-selinux
Remediation, fencing, and maintenance
Remediation, fencing, and maintenance Workload Availability for Red Hat OpenShift 25.1 Workload Availability remediation, fencing, and maintenance Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/25.1/html/remediation_fencing_and_maintenance/index
Chapter 8. Senders and receivers
Chapter 8. Senders and receivers The client uses sender and receiver links to represent channels for delivering messages. Senders and receivers are unidirectional, with a source end for the message origin, and a target end for the message destination. Sources and targets often point to queues or topics on a message broker. Sources are also used to represent subscriptions. 8.1. Creating queues and topics on demand Some message servers support on-demand creation of queues and topics. When a sender or receiver is attached, the server uses the sender target address or the receiver source address to create a queue or topic with a name matching the address. The message server typically defaults to creating either a queue (for one-to-one message delivery) or a topic (for one-to-many message delivery). The client can indicate which it prefers by setting the queue or topic capability on the source or target. To select queue or topic semantics, follow these steps: Configure your message server for automatic creation of queues and topics. This is often the default configuration. Set either the queue or topic capability on your sender target or receiver source, as in the examples below. Example: Sending to a queue created on demand var conn = container.connect({host: "example.com"}); var sender_opts = { target: { address: "jobs", capabilities: ["queue"] } } conn.open_sender( sender_opts ); Example: Receiving from a topic created on demand var conn = container.connect({host: "example.com"}); var receiver_opts = { source: { address: "notifications", capabilities: ["topic"] } } conn.open_receiver( receiver_opts ); For more details, see the following examples: queue-send.js queue-receive.js topic-send.js topic-receive.js 8.2. Creating durable subscriptions A durable subscription is a piece of state on the remote server representing a message receiver. Ordinarily, message receivers are discarded when a client closes. However, because durable subscriptions are persistent, clients can detach from them and then re-attach later. Any messages received while detached are available when the client re-attaches. Durable subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that the subscription can be recovered. Set the connection container ID to a stable value, such as client-1 : var container = rhea.create_container({id: "client-1"}); Create a receiver with a stable name, such as sub-1 , and configure the receiver source for durability by setting the durable and expiry_policy properties: var receiver_opts = { source: { address: "notifications", name: "sub-1", durable: 2, expiry_policy: "never" } } conn.open_receiver(receiver_opts); To detach from a subscription, use the receiver.detach() method. To terminate the subscription, use the receiver.close() method. For more information, see the durable-subscribe.js example . 8.3. Creating shared subscriptions A shared subscription is a piece of state on the remote server representing one or more message receivers. Because it is shared, multiple clients can consume from the same stream of messages. The client configures a shared subscription by setting the shared capability on the receiver source. Shared subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that multiple client processes can locate the same subscription. 
If the global capability is set in addition to shared , the receiver name alone is used to identify the subscription. To create a shared subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : var container = rhea.create_container({id: "client-1"}); Create a receiver with a stable name, such as sub-1 , and configure the receiver source for sharing by setting the shared capability: var receiver_opts = { source: { address: "notifications", name: "sub-1", capabilities: ["shared"] } } conn.open_receiver(receiver_opts); To detach from a subscription, use the receiver.detach() method. To terminate the subscription, use the receiver.close() method. For more information, see the shared-subscribe.js example .
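To illustrate the global capability mentioned above, a receiver source can list both capabilities, in which case the receiver name alone identifies the subscription. This sketch follows the pattern of the earlier examples and is not an additional example shipped with the client:

  var receiver_opts = {
      source: {
          address: "notifications",
          name: "sub-1",
          capabilities: ["shared", "global"]
      }
  }
  conn.open_receiver(receiver_opts);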
[ "var conn = container.connect({host: \"example.com\"}); var sender_opts = { target: { address: \"jobs\", capabilities: [\"queue\"] } } conn.open_sender( sender_opts );", "var conn = container.connect({host: \"example.com\"}); var receiver_opts = { source: { address: \"notifications\", capabilities: [\"topic\"] } } conn.open_receiver( receiver_opts );", "var container = rhea.create_container({id: \"client-1\"});", "var receiver_opts = { source: { address: \"notifications\", name: \"sub-1\", durable: 2, expiry_policy: \"never\" } } conn.open_receiver(receiver_opts);", "var container = rhea.create_container({id: \"client-1\"});", "var receiver_opts = { source: { address: \"notifications\", name: \"sub-1\", capabilities: [\"shared\"] } } conn.open_receiver(receiver_opts);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_javascript_client/senders_and_receivers
High Availability Add-On Overview
High Availability Add-On Overview Red Hat Enterprise Linux 7 Overview of the components of the High Availability Add-On Steven Levine Red Hat Customer Content Services [email protected]
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/index
Chapter 4. Installing a cluster
Chapter 4. Installing a cluster 4.1. Cleaning up installations In case of an earlier failed deployment, remove the artifacts from the failed attempt before trying to deploy OpenShift Container Platform again. Procedure Power off all bare-metal nodes before installing the OpenShift Container Platform cluster by using the following command: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove all old bootstrap resources if any remain from an earlier deployment attempt by using the following script: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done Delete the artifacts that the earlier installation generated by using the following command: USD cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \ .openshift_install.log .openshift_install_state.json Re-create the OpenShift Container Platform manifests by using the following command: USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests 4.2. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 4.3. Following the progress of the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder: USD tail -f /path/to/install-dir/.openshift_install.log 4.4. Verifying static IP address configuration If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node's network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address. Note The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. Verify the network configuration is working properly. Procedure Check the network interface configuration on the node. Turn off the DHCP server and reboot the OpenShift Container Platform node and ensure that the network configuration works properly. 4.5. Additional resources Understanding update channels and releases
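One way to carry out the network interface check in the verification procedure is with nmcli on the provisioned node. The connection name below is a placeholder, because the name of the connection created by the dispatcher script depends on the interface; on a node converted to a static address, ipv4.method is typically reported as manual:

  nmcli connection show
  nmcli -f ipv4.method,ipv4.addresses connection show "<connection_name>"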
[ "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json", "./openshift-baremetal-install --dir ~/clusterconfigs create manifests", "./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster", "tail -f /path/to/install-dir/.openshift_install.log" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installing-a-cluster
function::task_cpu
function::task_cpu Name function::task_cpu - The scheduled CPU of the task Synopsis task_cpu:long(task:long) Arguments task task_struct pointer Description This function returns the CPU on which the given task is scheduled to run.
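For example, a minimal one-line script that prints the CPU of the currently running task once per second might look like the following; it assumes the standard task and timer tapsets are available and uses execname and task_current purely for illustration: stap -e 'probe timer.s(1) { printf("%s is on CPU %d\n", execname(), task_cpu(task_current())) }'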
[ "task_cpu:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-cpu
Chapter 3. Installer-provisioned infrastructure
Chapter 3. Installer-provisioned infrastructure 3.1. Preparing to install a cluster on AWS You prepare to install an OpenShift Container Platform cluster on AWS by completing the following steps: Verifying internet connectivity for your cluster. Configuring an AWS account . Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, manually creating long-term credentials for AWS or configuring an AWS cluster to use short-term credentials with Amazon Web Services Security Token Service (AWS STS). 3.1.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.1.2. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.1.3. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.1.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.1.5. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.2. Installing a cluster on AWS In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options. 3.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2.2. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. 
If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 3.2.3. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.2.4. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.2.5. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.3. Installing a cluster on AWS with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes. 3.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.3.2. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. 
Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.3.3. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". 
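As an illustration of the preceding note, a three-node configuration carries zero compute replicas, so the relevant part of install-config.yaml would read similar to: compute: - hyperthreading: Enabled name: worker replicas: 0 (other compute fields omitted here for brevity). A quick sanity check after editing is: $ grep replicas install-config.yaml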
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.3.3.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.3.3.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.3.3.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.2. 
Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.3.3.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 19 pullSecret: '{"auths": ...}' 20 1 12 14 20 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 
Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 17 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.3.3.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.3.4. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.3.4.1. 
Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.3.4.2. 
Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.3.4.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.3. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.4. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. 
rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.3.4.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.3.4.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 
2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.3.4.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. 
This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. 
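If you want to spot-check one of the resulting roles before continuing, you can query AWS directly; the role name below is illustrative only, because ccoctl derives the actual names from the --name value and each CredentialsRequest object: $ aws iam get-role --role-name <name>-openshift-machine-api-aws-cloud-credentials --query 'Role.AssumeRolePolicyDocument' The returned trust policy should reference the OIDC identity provider that you created earlier.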
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.3.4.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. 
Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.3.6. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.3.7. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. 
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.3.8. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.4. Installing a cluster on AWS with network customizations In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 3.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.4.2. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.4.3.
Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.4.3.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.4.3.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.5. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.4.3.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.6. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.4.3.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: 13 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 14 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 15 propagateUserTags: true 16 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 19 sshKey: ssh-ed25519 AAAA... 20 pullSecret: '{"auths": ...}' 21 1 12 15 21 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 13 16 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 14 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. 
The endpoint URL must use the https protocol and the host must trust the certificate. 19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.4.3.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.4.4. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.4.4.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.4.4.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.4.4.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.7. 
Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.8. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.4.4.2.2. 
Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.4.4.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. 
Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.4.4.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. 
Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. 
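For example, assuming that the AWS CLI is installed and configured for your account, a command similar to the following lists the roles whose names start with the <name> prefix that you specified. The JMESPath filter is illustrative; adjust it if your roles use a different naming pattern: USD aws iam list-roles \ --query 'Roles[?starts_with(RoleName, `<name>`)].RoleName' \ --output text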
For more information, refer to AWS documentation on listing IAM roles. 3.4.4.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.4.5. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.4.5.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.3. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. 
If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.4. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.5. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.6. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . 
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.7. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.8. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.9. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.10. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 3.11. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 3.12. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 3.13. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.4.6. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. Note For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer . 3.4.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 3.4.8. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. 
Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 3.4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
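If your terminal session is interrupted while the deployment is running, you do not need to start over. The following sketch shows two common ways to follow the installation again; it assumes the default <installation_directory> layout used in this procedure and the wait-for command that is described later in this document.

# Follow the log that the installation program writes:
$ tail -f <installation_directory>/.openshift_install.log

# Or block until the installation finishes and print the access details again:
$ ./openshift-install wait-for install-complete --dir <installation_directory> --log-level debug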
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.4.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. 
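If you prefer to read credentials from that log file instead, a minimal, illustrative way to find the relevant lines is shown below. The exact log wording is not guaranteed, so treat this as a convenience rather than a documented interface.

$ grep -i 'password' <installation_directory>/.openshift_install.log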
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None In a web browser, navigate to the route shown in the output of the preceding command and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.4.12. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.5. Installing a cluster on AWS in a restricted network In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC). 3.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list.
Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 3.5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.5.3. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 3.5.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. 
If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. 
Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.5.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.5.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to create VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to create the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.5.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.5.4. Creating the installation configuration file You can customize the OpenShift Container Platform cluster that you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation.
You have obtained the contents of the certificate for your mirror registry. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the subnets for the VPC to install the cluster in: subnets: - subnet-1 - subnet-2 - subnet-3 Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. 
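Taken together, the restricted-network additions described in this step resemble the following install-config.yaml fragment. This is an illustrative sketch only: the registry host name, credentials, certificate contents, subnet IDs, and repository paths are placeholders, the subnets list belongs under platform.aws as shown in the full sample file later in this section, and publish: Internal is optional.

pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "<email_address>"}}}'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <mirror_registry_ca_certificate>
  -----END CERTIFICATE-----
platform:
  aws:
    subnets:
    - subnet-1
    - subnet-2
    - subnet-3
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release
publish: Internal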
For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.5.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.14. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines was deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 3.5.4.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 12 14 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 
Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 23 Provide the contents of the certificate file that you used for your mirror registry. 24 Provide the imageContentSources section from the output of the command to mirror the repository. 3.5.4.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.5.5. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. 
If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.5.5.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.5.5.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.5.5.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.9. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.10. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
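One quick, illustrative way to compare the two architectures before you extract the binary is shown below. It assumes that the oc client and the pull secret from the previous step are available.

# Architecture of the host that will run the ccoctl utility (for example, x86_64 or aarch64):
$ uname -m

# Architecture that the release image targets; look for the OS/Arch line in the output (for example, linux/amd64):
$ oc adm release info $RELEASE_IMAGE -a ~/.pull-secret | grep -i 'os/arch'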
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.5.5.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.5.5.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. 
If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.5.5.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. 
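Before you start the procedure, the --dry-run review workflow that is described above looks roughly like the following. This is a sketch only: the flag combination shown here is illustrative, and the names of the JSON files that the ccoctl utility writes depend on the ccoctl version and on the CredentialsRequest objects that are processed.

# Generate the AWS API request payloads locally instead of calling AWS:
$ ccoctl aws create-iam-roles \
    --name=<name> \
    --region=<aws_region> \
    --credentials-requests-dir=<path_to_credentials_requests_directory> \
    --identity-provider-arn=<identity_provider_arn> \
    --dry-run

# Review and, if needed, edit the generated JSON files, then apply them with the AWS CLI:
$ aws iam create-role --cli-input-json file://<path_to_generated_role_file>.json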
Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.5.5.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.5.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.5.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.5.8. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.5.9. Next steps Validate an installation. Customize your cluster. Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks. If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores. If necessary, you can opt out of remote health reporting. 3.6. Installing a cluster on AWS into an existing VPC In OpenShift Container Platform version 4.16, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 3.6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You configured an AWS account to host the cluster. If the existing VPC is owned by a different account than the cluster, you shared the VPC between accounts. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.6.2. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf.
You must configure networking for the subnets that you install your cluster to yourself. 3.6.2.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: Create a public and private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet. For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation. Record each subnet ID. Completing the installation requires that you enter these values in the platform section of the install-config.yaml file. See Finding a subnet ID in the AWS documentation. The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify. The VPC must have a public internet gateway attached to it. For each availability zone: The public subnet requires a route to the internet gateway. The public subnet requires a NAT gateway with an EIP address. The private subnet requires a route to the NAT gateway in public subnet. The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. 
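As a rough illustration of the two fields mentioned above, the fragment below shows how they might appear in install-config.yaml. The zone ID and role ARN are hypothetical placeholders, and the command only prints the fragment for reference; it does not modify any file.

# Sketch: platform.aws.hostedZone and platform.aws.hostedZoneRole in install-config.yaml.
# The hosted zone ID and the role ARN below are placeholders.
cat <<'EOF'
platform:
  aws:
    region: us-west-2
    hostedZone: Z0123456789EXAMPLE
    hostedZoneRole: arn:aws:iam::123456789012:role/shared-vpc-route53-role
EOF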
You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.6.2.2. 
VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.6.2.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.6.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.6.2.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.6.2.6. Modifying trust policy when installing into a shared VPC If you install your cluster using a shared VPC, you can use the Passthrough or Manual credentials mode. You must add the IAM role used to install the cluster as a principal in the trust policy of the account that owns the VPC. 
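The exact principals depend on the credentials mode, as described in the following paragraph, and the required actions are listed in Example 3.11. As a loose, hypothetical sketch of the trust-policy side only, the account ID, role name, and file name below are placeholders, and in practice you would merge the new principal into the role's existing trust policy rather than replace it.

# Sketch: in the VPC-owning account, allow the cluster-creating principal to assume
# the shared-VPC role. All names and ARNs below are placeholders.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::123456789012:user/clustercreator"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Replaces the role's trust policy; merge with the existing policy in real use.
aws iam update-assume-role-policy \
  --role-name shared-vpc-route53-role \
  --policy-document file://trust-policy.json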
If you use Passthrough mode, add the Amazon Resource Name (ARN) of the account that creates the cluster, such as arn:aws:iam::123456789012:user/clustercreator , to the trust policy as a principal. If you use Manual mode, add the ARN of the account that creates the cluster as well as the ARN of the ingress operator role in the cluster owner account, such as arn:aws:iam::123456789012:role/<cluster-name>-openshift-ingress-operator-cloud-credentials , to the trust policy as principals. You must add the following actions to the policy: Example 3.11. Required actions for shared VPC installation route53:ChangeResourceRecordSets route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ChangeTagsForResource route53:GetAccountLimit route53:GetChange route53:GetHostedZone route53:ListTagsForResource route53:UpdateHostedZoneComment tag:GetResources tag:UntagResources 3.6.3. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 3.6.3.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.15. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.6.3.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.12. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.6.3.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.13. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.6.3.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths": ...}' 22 1 12 14 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 
18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.6.3.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.6.3.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... 
compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.6.4. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.6.4.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. 
The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.6.4.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.6.4.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.14. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.15. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. 
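One way to grant an IAM user the permissions listed above is with an inline policy. The following sketch is abbreviated and hypothetical: the user name, policy name, and file name are placeholders, and the Action list must be expanded to every permission from Example 3.14, plus Example 3.15 if you use a private S3 bucket with CloudFront.

# Sketch: attach an inline policy to the IAM user that will run ccoctl.
# ccoctl-user and ccoctl-required-permissions are placeholders; the Action list
# below is abbreviated and must include the full set from Example 3.14.
cat > ccoctl-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateOpenIDConnectProvider",
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "s3:CreateBucket",
        "s3:PutObject",
        "cloudfront:ListDistributions"
      ],
      "Resource": "*"
    }
  ]
}
EOF

aws iam put-user-policy \
  --user-name ccoctl-user \
  --policy-name ccoctl-required-permissions \
  --policy-document file://ccoctl-policy.json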
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.6.4.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.6.4.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
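For convenience, the binary preparation steps from the previous section can be collected into a short script before you begin the procedure below. This sketch assumes a RHEL 9 host and a pull secret at ~/.pull-secret; adjust both if your environment differs.

# Sketch: extract and prepare the ccoctl binary (assumes a RHEL 9 host and a
# pull secret at ~/.pull-secret).
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "${RELEASE_IMAGE}" -a ~/.pull-secret)

oc image extract "${CCO_IMAGE}" --file="/usr/bin/ccoctl.rhel9" -a ~/.pull-secret
chmod 775 ccoctl.rhel9
./ccoctl.rhel9 --help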
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.6.4.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
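At a high level, the procedure that follows chains three ccoctl subcommands. The sketch below shows their overall shape with placeholder values; mycluster, us-east-2, the credrequests directory, and the account ID are all hypothetical and each step is explained in detail afterwards.

# Sketch: the overall shape of the individual-resource flow detailed below.
# All names, regions, paths, and the account ID are placeholders.
ccoctl aws create-key-pair

ccoctl aws create-identity-provider \
  --name=mycluster \
  --region=us-east-2 \
  --public-key-file=./serviceaccount-signer.public

ccoctl aws create-iam-roles \
  --name=mycluster \
  --region=us-east-2 \
  --credentials-requests-dir=./credrequests \
  --identity-provider-arn=arn:aws:iam::123456789012:oidc-provider/mycluster-oidc.s3.us-east-2.amazonaws.com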
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.6.4.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.6.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.6.6. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.6.7. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.6.8. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . After installing a cluster on AWS into an existing VPC, you can extend the AWS VPC cluster into an AWS Outpost . 3.7. Installing a private cluster on AWS In OpenShift Container Platform version 4.16, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 3.7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.7.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 3.7.2.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 3.7.2.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 3.7.3. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. 
You must configure networking for the subnets that you install your cluster to yourself. 3.7.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. 
With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.7.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.7.3.3. 
Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.7.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.7.3.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.7.4. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml .
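If you still need the SSH public key called out in the prerequisites above, the following is a minimal sketch of generating one before you fill in the sshKey field of install-config.yaml; the key path is only an example, and any key type that the installation program accepts works equally well:

# Generate an ed25519 key pair; paste the contents of the .pub file into the sshKey field
ssh-keygen -t ed25519 -N '' -f ~/.ssh/ocp-aws
cat ~/.ssh/ocp-aws.pub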
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 3.7.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.16. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 3.7.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.16. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.7.4.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.17. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.7.4.4.
Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{"auths": ...}' 23 1 12 14 23 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 
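For reference only, requiring IMDSv2 on an instance that is already running is an AWS-side change rather than an install-config.yaml setting; the following is a hedged sketch with the AWS CLI, where the instance ID is a placeholder and this is not part of the installation procedure:

# Require IMDSv2 (session tokens) on an existing instance
aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-tokens required --http-endpoint enabled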
Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 3.7.4.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.7.4.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . 
The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.7.5. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.7.5.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.7.5.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.7.5.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.18. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.19. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.7.5.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.7.5.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.7.5.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. 
This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. 
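As an optional spot check, and assuming the default public S3 endpoint shown in the example output (not the --create-private-s3-bucket variant), the discovery document should be readable before you continue:

# <name> and <aws_region> are the values you passed to ccoctl
curl -s https://<name>-oidc.s3.<aws_region>.amazonaws.com/.well-known/openid-configuration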
Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.7.5.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
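A quick optional check before the procedure is to confirm that the ccoctl output directory contains everything the following steps copy; the expected contents follow the examples earlier in this section:

ls <path_to_ccoctl_output_dir>/manifests
ls <path_to_ccoctl_output_dir>/tls
# manifests/ holds the credentials manifests listed earlier; tls/ holds bound-service-account-signing-key.key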
Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.7.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
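A minimal sketch of that manual approval, using standard oc commands and the kubeconfig exported from the installation directory; the CSR name is a placeholder:

# List certificate signing requests and approve any node-bootstrapper CSRs that are Pending
oc get csr
oc adm certificate approve <csr_name>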
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.7.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.7.8. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.7.9. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.8. Installing a cluster on AWS into a government region In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster. 3.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device.
The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.8.2. AWS government regions OpenShift Container Platform supports deploying a cluster to an AWS GovCloud (US) region. The following AWS GovCloud partitions are supported: us-gov-east-1 us-gov-west-1 3.8.3. Installation requirements Before you can install the cluster, you must: Provide an existing private AWS VPC and subnets to host the cluster. Public zones are not supported in Route 53 in AWS GovCloud. As a result, clusters must be private when you deploy to an AWS government region. Manually create the installation configuration file ( install-config.yaml ). 3.8.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS GovCloud Region. Therefore, clusters must be private if they are deployed to an AWS GovCloud Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 3.8.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 3.8.4.1.1. 
Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 3.8.5. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 3.8.5.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. 
See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. 
The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.8.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.8.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.8.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.8.5.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.8.6. 
Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.8.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 3.8.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.17. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. 
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.8.7.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.20. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.8.7.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.21. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.8.7.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{"auths": ...}' 23 1 12 14 23 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 
18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 3.8.7.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.8.7.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. 
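Before you edit the file, you can optionally confirm which security group IDs exist in the VPC that you are deploying to. The following is a minimal sketch that uses the AWS CLI; the Region and VPC ID shown are placeholders, and the sketch assumes that the AWS CLI is configured with credentials that can describe EC2 resources:
# List the security groups that are attached to the existing VPC
$ aws ec2 describe-security-groups \
    --region us-east-1 \
    --filters Name=vpc-id,Values=vpc-0abc1234example \
    --query 'SecurityGroups[].{ID:GroupId,Name:GroupName}' \
    --output table
The GroupId values that this hypothetical query returns are the kind of values that you specify, including the sg prefix, in the additionalSecurityGroupIDs fields in the procedure that follows.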
Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.8.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Incorporating the Cloud Credential Operator utility manifests . 3.8.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.8.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.8.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.22. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.23. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.8.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.8.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". 
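The note, prerequisites, and procedure that follow describe each step in detail. As a compact sketch of the overall flow, assuming that you have already extracted the ccoctl binary and substituting your own cluster name, AWS Region, and directory paths for the placeholder values shown here:
# Resolve the release image and extract the CredentialsRequest objects from it
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
$ oc adm release extract \
    --from=$RELEASE_IMAGE \
    --credentials-requests \
    --included \
    --install-config=install-config.yaml \
    --to=credrequests
# Process every CredentialsRequest object in a single run
$ ccoctl aws create-all \
    --name=mycluster \
    --region=us-gov-west-1 \
    --credentials-requests-dir=credrequests \
    --output-dir=ccoctl-output
If the run succeeds, the generated secrets appear under the manifests subdirectory of the output directory, as described in the verification step of the procedure.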
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.8.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. 
This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. 
Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.8.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
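Before you copy any files, you can confirm that the ccoctl output directory contains both the generated manifests and the tls directory that the following procedure expects. A minimal check, using the same <path_to_ccoctl_output_dir> placeholder as the procedure and with the output abbreviated, might look like this:
$ ls /<path_to_ccoctl_output_dir>/manifests
cluster-authentication-02-config.yaml
openshift-machine-api-aws-cloud-credentials-credentials.yaml
...
$ ls /<path_to_ccoctl_output_dir>/tls
bound-service-account-signing-key.key
If either directory is missing, revisit the ccoctl steps in the previous sections before you continue.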
Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.8.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.8.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 3.8.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.8.12. Next steps Validating an installation. Customize your cluster. If necessary, you can opt out of remote health reporting. If necessary, you can remove cloud provider credentials. 3.9. Installing a cluster on AWS into a Secret or Top Secret Region In OpenShift Container Platform version 4.16, you can install a cluster on Amazon Web Services (AWS) into the following secret regions: Secret Commercial Cloud Services (SC2S) Commercial Cloud Services (C2S) To configure a cluster in either region, you change parameters in the install-config.yaml file before you install the cluster. Warning In OpenShift Container Platform 4.16, the installation program uses Cluster API instead of Terraform to provision cluster infrastructure during installations on AWS.
Installing a cluster on AWS into a secret or top-secret region by using the Cluster API implementation has not been tested as of the release of OpenShift Container Platform 4.16. This document will be updated when installation into a secret region has been tested. There is a known issue with Network Load Balancers' support for security groups in secret or top secret regions that causes installations in these regions to fail. For more information, see OCPBUGS-33311 . 3.9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multifactor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.9.2. AWS secret regions The following AWS secret partitions are supported: us-isob-east-1 (SC2S) us-iso-east-1 (C2S) Note The maximum supported MTU in AWS SC2S and C2S Regions is not the same as in AWS commercial regions. For more information about configuring MTU during installation, see the Cluster Network Operator configuration object section in Installing a cluster on AWS with network customizations . 3.9.3. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS Secret and Top Secret Regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. Important You must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file. 3.9.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS Top Secret Region. Therefore, clusters must be private if they are deployed to an AWS Top Secret Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster.
This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 3.9.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 3.9.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 3.9.5. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 3.9.5.1. 
Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. A cluster in an SC2S or C2S Region is unable to reach the public IP addresses for the EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. 
With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.9.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.9.5.3. 
Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.9.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.9.5.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.9.6. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.16.0 .
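For orientation, the preceding exports are shown below with purely illustrative values; the profile name is hypothetical, and the Region and version mirror the samples used elsewhere in this section:
# Illustrative values only
export AWS_PROFILE=c2s-installer
export AWS_DEFAULT_REGION=us-iso-east-1
export RHCOS_VERSION=4.16.0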
Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 3.9.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 3.9.7.1. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.24. Machine types based on 64-bit x86 architecture for secret regions c4.* c5.* i3.* m4.* m5.* r4.* r5.* t3.* 3.9.7.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 25 The custom CA certificate. This is required when deploying to the SC2S or C2S Regions because the AWS API requires a custom CA trust bundle. 3.9.7.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.9.7.4. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.9.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.9.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... 
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.9.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.9.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.25. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.26. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.9.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.9.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. 
By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.9.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . 
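Optionally, before continuing, you can confirm that the key material was written. This quick check assumes the default output directory referenced in the preceding output:
ls /<path_to_ccoctl_output_dir>/serviceaccount-signer.private /<path_to_ccoctl_output_dir>/serviceaccount-signer.public /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key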
Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. 
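For illustration only, the trust policy on each generated role generally follows the standard AWS web identity form shown below. The account ID, issuer host, namespace, and service account name are placeholders, and the exact document that ccoctl writes can differ:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<name>-oidc.s3.<aws_region>.amazonaws.com:sub": "system:serviceaccount:<component_namespace>:<component_service_account>" } } } ] }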
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.9.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.9.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. 
Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.9.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.9.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. 
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.9.12. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.10. Installing a cluster on AWS China In OpenShift Container Platform version 4.16, you can install a cluster to the following Amazon Web Services (AWS) China regions: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 3.10.1. Prerequisites You have an Internet Content Provider (ICP) license. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. 3.10.2. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS China regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. 3.10.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. 
The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network. Note AWS China does not support a VPN connection between the VPC and your network. For more information about the Amazon VPC service in the Beijing and Ningxia regions, see Amazon Virtual Private Cloud in the AWS China documentation. 3.10.3.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 3.10.3.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 3.10.4. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 3.10.4.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. 
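As a rough sketch of checking an existing VPC before installation, the following AWS CLI commands, with placeholder IDs, list its subnets and report the DNS attributes that are discussed below:
$ aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=<vpc_id> \
    --query 'Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}'
$ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsSupport
$ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsHostnames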
See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. 
With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 3.10.4.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 3.10.4.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 3.10.4.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 3.10.4.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 3.10.5. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 1 The AWS profile name that holds your AWS credentials, like beijingadmin . Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 1 The AWS region, like cn-north-1 . Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 The RHCOS VMDK version, like 4.16.0 . 
Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 3.10.6. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 3.10.6.1. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . 
To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 3.10.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.18. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.10.6.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.27. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 3.10.6.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 3.28. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 3.10.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.6.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. 
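If you have not yet created the security groups from the prerequisites, a minimal AWS CLI sketch might look like the following; the group name, VPC ID, and CIDR are placeholders, and the actual rules must reflect your own traffic requirements:
$ aws ec2 create-security-group \
    --group-name <custom_sg_name> \
    --description "Custom rules for OpenShift machines" \
    --vpc-id <vpc_id>
$ aws ec2 authorize-security-group-ingress \
    --group-id <sg_id> \
    --protocol tcp --port 22 \
    --cidr <allowed_cidr>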
Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 3.10.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 3.10.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.10.7.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 3.10.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 3.29. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 3.30. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.10.7.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 3.10.7.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.10.7.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. 
This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. 
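Optionally, before you continue, you can confirm that the discovery document is publicly reachable at the issuer URL that the command reported; this is only an illustrative check, and the exact URL depends on your bucket name and region:
$ curl https://<name>-oidc.s3.<aws_region>.amazonaws.com/.well-known/openid-configuration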
Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 3.10.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
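For example, before copying anything you can confirm that the ccoctl output directory contains both the credential manifests and the tls directory that holds the bound service account signing key; the path is a placeholder:
$ ls <path_to_ccoctl_output_dir>/manifests <path_to_ccoctl_output_dir>/tls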
Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.10.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.10.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.10.10. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.10.11. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 3.11. Installing a cluster with compute nodes on AWS Local Zones You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Local Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Local Zone subnets. AWS Local Zones is an infrastructure that place Cloud Resources close to metropolitan regions. For more information, see the AWS Local Zones Documentation . 3.11.1. Infrastructure prerequisites You reviewed details about OpenShift Container Platform installation and update processes. You are familiar with Selecting a cluster installation method and preparing it for users . 
You configured an AWS account to host the cluster. Warning If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster must access. You noted the region and supported AWS Local Zones locations to create the network resources in. You read the AWS Local Zones features in the AWS documentation. You added permissions for creating network resources that support AWS Local Zones to the Identity and Access Management (IAM) user or role. The following example enables a zone group that can provide a user or role access for creating network network resources that support AWS Local Zones. Example of an additional IAM policy with the ec2:ModifyAvailabilityZoneGroup permission attached to an IAM user or role. { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } 3.11.2. About AWS Local Zones and edge compute pool Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Local Zones environment. 3.11.2.1. Cluster limitations in AWS Local Zones Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Local Zone. Important The following list details limitations when deploying a cluster in a pre-configured AWS zone: The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300 . This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes. If you want the installation program to automatically create Local Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply with this method. Important The following configuration limitation applies when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster: When the installation program creates private subnets in AWS Local Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region. 
If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Local Zones subnets in an OpenShift Container Platform cluster. 3.11.2.2. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Local Zones locations. When deploying a cluster that uses Local Zones, consider the following points: Amazon EC2 instances in the Local Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Local Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zones and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How Local Zones work in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Local Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Local Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Local Zones on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones nodes. The new labels are: node-role.kubernetes.io/edge='' machine.openshift.io/zone-type=local-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Local Zones instances. Users can only run user workloads if they define tolerations in the pod specification. Additional resources MTU value selection Changing the MTU for the cluster network Understanding taints and tolerations Storage classes Ingress Controller sharding 3.11.3. Installation prerequisites Before you install a cluster in an AWS Local Zones environment, you must configure your infrastructure so that it can adopt Local Zone capabilities. 3.11.3.1. Opting in to an AWS Local Zones If you plan to create subnets in AWS Local Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. 
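Optionally, you can check that the user or role has the required permission before you opt in. The following sketch uses the IAM policy simulator from the AWS CLI and assumes an IAM user ARN; substitute a role ARN if you use a role, and treat the account ID and user name as placeholders. Example command for simulating the ec2:ModifyAvailabilityZoneGroup permission
$ aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::<aws_account_id>:user/<user_name> \
    --action-names ec2:ModifyAvailabilityZoneGroup \
    --query 'EvaluationResults[].[EvalActionName,EvalDecision]' --output table
A decision of allowed indicates that the permission is in place.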
Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Local Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Local Zones. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Local Zones group. If the status is not-opted-in , you must opt in the GroupName as described in the step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Local Zones where you want to create subnets. For example, specify us-east-1-nyc-1 to use the zone us-east-1-nyc-1a (US East New York). 3.11.3.2. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.11.4. Preparing for the installation Before you extend nodes to Local Zones, you must prepare certain resources for the cluster installation environment. 3.11.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.19. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. 
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 3.11.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Local Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.31. Machine types based on 64-bit x86 architecture for AWS Local Zones c5.* c5d.* m6i.* m5.* r5.* t3.* Additional resources See AWS Local Zones features in the AWS documentation. 3.11.4.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.11.4.4. Examples of installation configuration files with edge compute pools The following examples show install-config.yaml files that contain an edge machine pool configuration. Configuration that uses an edge pool with a custom instance type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Instance types differ between locations. To verify availability in the Local Zones in which the cluster runs, see the AWS documentation. Configuration that uses an edge pool with a custom Amazon Elastic Block Store (EBS) type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Elastic Block Storage (EBS) types differ between locations. Check the AWS documentation to verify availability in the Local Zones in which the cluster runs. Configuration that uses an edge pool with custom security groups apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. 3.11.4.5. Customizing the cluster network MTU Before you deploy a cluster on AWS, you can customize the cluster network maximum transmission unit (MTU) for your cluster network to meet the needs of your infrastructure. By default, when you install a cluster with supported Local Zones capabilities, the MTU value for the cluster network is automatically adjusted to the lowest value that the network plugin accepts. Important Setting an unsupported MTU value for EC2 instances that operate in the Local Zones infrastructure can cause issues for your OpenShift Container Platform cluster. 
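As a worked example based on the numbers stated earlier in this section: with a zone-to-Region MTU of 1300 and the 100-byte OVN-Kubernetes overhead, the highest cluster network MTU that remains safe for traffic between Local Zone and Region instances is 1300 - 100 = 1200. If you prefer to pin that value explicitly rather than rely on the automatic adjustment, a minimal install-config.yaml fragment might look like the following; the value shown is an assumption that holds only for OVN-Kubernetes without additional features such as IPsec.
networking:
  clusterNetworkMTU: 1200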
If the Local Zone supports higher MTU values in between EC2 instances in the Local Zone and the AWS Region, you can manually configure the higher value to increase the network performance of the cluster network. You can customize the MTU for a cluster by specifying the networking.clusterNetworkMTU parameter in the install-config.yaml configuration file. Important All subnets in Local Zones must support the higher MTU value, so that each node in that zone can successfully communicate with services in the AWS Region and deploy your workloads. Example of overwriting the default MTU value apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Additional resources For more information about the maximum supported maximum transmission unit (MTU) value, see AWS resources supported in Local Zones in the AWS documentation. 3.11.5. Cluster installation options for an AWS Local Zones environment Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Local Zones: Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster. Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Local Zones subnets to the install-config.yaml file. steps Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Local Zones environment: Installing a cluster quickly in AWS Local Zones Installing a cluster in an existing VPC with defined AWS Local Zone subnets 3.11.6. Install a cluster quickly in AWS Local Zones For OpenShift Container Platform 4.16, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Local Zones locations. By using this installation route, the installation program automatically creates network resources and Local Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml file before you deploy the cluster. 3.11.6.1. Modifying an installation configuration file to use AWS Local Zones Modify an install-config.yaml file to include AWS Local Zones. Prerequisites You have configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You are familiar with the configuration limitations that apply when you specify the installation program to automatically create subnets for your OpenShift Container Platform cluster. You opted in to the Local Zones group for each zone. You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml file by specifying Local Zones names in the platform.aws.zones property of the edge compute pool. # ... platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #... 1 The AWS Region name. 2 The list of Local Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field. 
Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Local Zones in Los Angeles and Las Vegas locations apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' #... Deploy your cluster. Additional resources Creating the installation configuration file Cluster limitations in AWS Local Zones Next steps Deploying the cluster 3.11.7. Installing a cluster in an existing VPC that has Local Zone subnets You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the cloud infrastructure by using AWS Local Zones. Local Zone subnets extend regular compute nodes to edge networks. Each edge compute node runs a user workload. After you create an Amazon Web Services (AWS) Local Zone environment and deploy your cluster, you can use edge compute nodes to create user workloads in Local Zone subnets. Note If you want to create private subnets, you must either modify the provided CloudFormation template or create your own template. You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that it contains to create AWS resources according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. 3.11.7.1. Creating a VPC in AWS You can create a Virtual Private Cloud (VPC), and subnets for all Local Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Local Zones subnets that are not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You opted in to the AWS Local Zones on your AWS account.
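Optionally, you can confirm the opt-in status before you create the VPC. The following sketch reuses the describe-availability-zones query shown earlier in this section, filtered to zones that are already opted in; the Region value is a placeholder. Example command for listing opted-in Local Zones
$ aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \
    --filters Name=zone-type,Values=local-zone \
    --all-availability-zones \
    --query 'AvailabilityZones[?OptInStatus==`opted-in`].[ZoneName,GroupName]' --output table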
Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path and the name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster. VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table ID. 3.11.7.2. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 3.32. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 3.11.7.3. Creating subnets in Local Zones Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Local Zones. Complete the following procedure for each Local Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Local Zones group. 
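The procedure that follows references several shell variables, such as VPC_ID and ROUTE_TABLE_PUB. The following sketch shows one way to populate them from the outputs of the VPC CloudFormation stack that you created earlier; the stack name cluster-vpc, the zone name, and the CIDR blocks are example values only, and the private route table ID must be taken from the PrivateRouteTableIds output entry that matches the parent Availability Zone of your Local Zone. Example commands for exporting the variables used by the subnet stack
$ export CLUSTER_REGION="us-west-2"
$ export CLUSTER_NAME="ipi-edgezone"
$ export ZONE_NAME="us-west-2-lax-1a"
$ export VPC_ID=$(aws cloudformation describe-stacks --stack-name cluster-vpc --query 'Stacks[0].Outputs[?OutputKey==`VpcId`].OutputValue' --output text)
$ export ROUTE_TABLE_PUB=$(aws cloudformation describe-stacks --stack-name cluster-vpc --query 'Stacks[0].Outputs[?OutputKey==`PublicRouteTableId`].OutputValue' --output text)
$ export ROUTE_TABLE_PVT="<private_route_table_id_for_the_parent_zone>"
$ export SUBNET_CIDR_PUB="10.0.192.0/24"
$ export SUBNET_CIDR_PVT="10.0.193.0/24"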
Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the subnets that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the subnets: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname> . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcId in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the name of the Local Zone in which to create the subnets. 6 USD{ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create your cluster. PublicSubnetId The ID of the public subnet created by the CloudFormation stack. PrivateSubnetId The ID of the private subnet created by the CloudFormation stack. 3.11.7.4. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones infrastructure. Example 3.33. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a.
Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 3.11.7.5. Modifying an installation configuration file to use AWS Local Zones subnets Modify your install-config.yaml file to include Local Zones subnets. Prerequisites You created subnets by using the procedure "Creating subnets in Local Zones". You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml configuration file by specifying Local Zones subnets in the platform.aws.subnets parameter. Example installation configuration file with Local Zones subnets # ... platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1 # ... 1 List of subnet IDs created in the zones: Availability and Local Zones. Additional resources For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console . For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation. steps Deploying the cluster 3.11.8. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. 
The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Edge compute pools and AWS Local Zones". 3.11.9. Optional: Assign public IP addresses to edge compute nodes If your workload requires deploying the edge compute nodes in public subnets on Local Zones infrastructure, you can configure the machine set manifests when installing a cluster. AWS Local Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone. The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure. Important By default, OpenShift Container Platform deploys compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have public IP addresses attached. You must create additional security groups, but ensure that you open the group rules to the internet only when you really need to. Procedure Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests are created in the openshift and manifests directories. USD ./openshift-install create manifests --dir <installation_directory> Edit the machine set manifest that the installation program generates for the Local Zones, so that the edge machines are deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIp parameter. Example machine set manifest configuration for installing a cluster quickly in Local Zones spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME} Example machine set manifest configuration for installing a cluster in an existing VPC that has Local Zones subnets apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true 3.11.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
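If you edited the machine set manifests to place edge compute nodes in public subnets, you can optionally confirm the change before you deploy. The following sketch assumes the default location and naming that openshift-install create manifests uses for compute machine set manifests; adjust the path or file pattern if yours differ. Example command for confirming the publicIp setting
$ grep -l 'publicIp: true' <installation_directory>/openshift/*machineset*.yaml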
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.11.11. Verifying the status of the deployed cluster Verify that your OpenShift Container Platform successfully deployed on AWS Local Zones. 3.11.11.1. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.11.11.2. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.11.11.3. Verifying nodes that were created with the edge compute pool After you install a cluster that uses AWS Local Zones infrastructure, check the status of the machines that were created by the machine set manifests during installation. To check the machine sets created from the subnets that you added to the install-config.yaml file, run the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m To check the machines that were created from the machine sets, run the following command: USD oc get machines -n openshift-machine-api Example output To check nodes with edge roles, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f Next steps Validating an installation . If necessary, you can opt out of remote health reporting . 3.12. Installing a cluster with compute nodes on AWS Wavelength Zones You can quickly install an OpenShift Container Platform cluster on Amazon Web Services (AWS) Wavelength Zones by setting the zone names in the edge compute pool of the install-config.yaml file, or install a cluster in an existing Amazon Virtual Private Cloud (VPC) with Wavelength Zone subnets. AWS Wavelength Zones is an infrastructure that AWS configured for mobile edge computing (MEC) applications. A Wavelength Zone embeds AWS compute and storage services within the 5G network of a communication service provider (CSP). By placing application servers in a Wavelength Zone, the application traffic from your 5G devices can stay in the 5G network. The application traffic of the device reaches the target server directly, making latency a non-issue. Additional resources See Wavelength Zones in the AWS documentation. 3.12.1. Infrastructure prerequisites You reviewed details about OpenShift Container Platform installation and update processes. You are familiar with Selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster.
Warning If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster must access. You noted the region and supported AWS Wavelength Zone locations to create the network resources in. You read AWS Wavelength features in the AWS documentation. You read the Quotas and considerations for Wavelength Zones in the AWS documentation. You added permissions for creating network resources that support AWS Wavelength Zones to the Identity and Access Management (IAM) user or role. For example: Example of an additional IAM policy that attached ec2:ModifyAvailabilityZoneGroup , ec2:CreateCarrierGateway , and ec2:DeleteCarrierGateway permissions to a user or role { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DeleteCarrierGateway", "ec2:CreateCarrierGateway" ], "Resource": "*" }, { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } 3.12.2. About AWS Wavelength Zones and edge compute pool Read the following sections to understand infrastructure behaviors and cluster limitations in an AWS Wavelength Zones environment. 3.12.2.1. Cluster limitations in AWS Wavelength Zones Some limitations exist when you try to deploy a cluster with a default installation configuration in an Amazon Web Services (AWS) Wavelength Zone. Important The following list details limitations when deploying a cluster in a pre-configured AWS zone: The maximum transmission unit (MTU) between an Amazon EC2 instance in a zone and an Amazon EC2 instance in the Region is 1300 . This causes the cluster-wide network MTU to change according to the network plugin that is used with the deployment. Network resources such as Network Load Balancer (NLB), Classic Load Balancer, and Network Address Translation (NAT) Gateways are not globally supported. For an OpenShift Container Platform cluster on AWS, the AWS Elastic Block Storage (EBS) gp3 type volume is the default for node volumes and the default for the storage class. This volume type is not globally available on zone locations. By default, the nodes running in zones are deployed with the gp2 EBS volume. The gp2-csi StorageClass parameter must be set when creating workloads on zone nodes. If you want the installation program to automatically create Wavelength Zone subnets for your OpenShift Container Platform cluster, specific configuration limitations apply with this method. The following note details some of these limitations. For other limitations, ensure that you read the "Quotas and considerations for Wavelength Zones" document that Red Hat provides in the "Infrastructure prerequisites" section. 
Important The following configuration limitation applies when you set the installation program to automatically create subnets for your OpenShift Container Platform cluster: When the installation program creates private subnets in AWS Wavelength Zones, the program associates each subnet with the route table of its parent zone. This operation ensures that each private subnet can route egress traffic to the internet by way of NAT Gateways in an AWS Region. If the parent-zone route table does not exist during cluster installation, the installation program associates any private subnet with the first available private route table in the Amazon Virtual Private Cloud (VPC). This approach is valid only for AWS Wavelength Zones subnets in an OpenShift Container Platform cluster. 3.12.2.2. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Wavelength Zones locations. When deploying a cluster that uses Wavelength Zones, consider the following points: Amazon EC2 instances in the Wavelength Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Wavelength Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Wavelength Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Wavelength Zone and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must always be less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. For more information, see How AWS Wavelength works in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Wavelength Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Wavelength Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Wavelength Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Wavelength Zone in an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Wavelength Zones nodes. The new labels are: node-role.kubernetes.io/edge='' machine.openshift.io/zone-type=wavelength-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Wavelength Zones instances. Users can only run user workloads if they define tolerations in the pod specification.
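Because of this taint, a workload does not land on Wavelength Zone nodes unless its pod specification tolerates it. The following fragment is a minimal sketch of a Deployment that targets edge nodes; the application name and image are placeholders, and it assumes that the taint key matches the node-role.kubernetes.io/edge label shown above and uses the NoSchedule effect, so confirm the exact key and effect that your machine sets apply before relying on it.
Example Deployment fragment that tolerates the edge taint
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-edge-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-edge-app
  template:
    metadata:
      labels:
        app: sample-edge-app
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      tolerations:
      - key: node-role.kubernetes.io/edge
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: registry.example.com/sample-app:latest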
Additional resources MTU value selection Changing the MTU for the cluster network Understanding taints and tolerations Storage classes Ingress Controller sharding 3.12.3. Installation prerequisites Before you install a cluster in an AWS Wavelength Zones environment, you must configure your infrastructure so that it can adopt Wavelength Zone capabilities. 3.12.3.1. Opting in to AWS Wavelength Zones If you plan to create subnets in AWS Wavelength Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Wavelength Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=wavelength-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Wavelength Zone. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Wavelength Zones group. If the status is not-opted-in , you must opt in to the GroupName as described in the next step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Wavelength Zone where you want to create subnets. As an example for Wavelength Zones, specify us-east-1-wl1 to use the zone us-east-1-wl1-nyc-wlz-1 (US East New York). 3.12.3.2. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 3.12.4. Preparing for the installation Before you extend nodes to Wavelength Zones, you must prepare certain resources for the cluster installation environment. 3.12.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.20.
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 3.12.4.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Wavelength Zones. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 3.34. Machine types based on 64-bit x86 architecture for AWS Wavelength Zones r5.* t3.* Additional resources See AWS Wavelength features in the AWS documentation. 3.12.4.3. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. 
Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.12.4.4. Examples of installation configuration files with edge compute pools The following examples show install-config.yaml files that contain an edge machine pool configuration. Configuration that uses an edge pool with a custom instance type apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Instance types differ between locations. To verify availability in the Wavelength Zones in which the cluster runs, see the AWS documentation. Configuration that uses an edge pool with custom security groups apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 Specify the name of the security group as it is displayed on the Amazon EC2 console. Ensure that you include the sg prefix. 3.12.5. Cluster installation options for an AWS Wavelength Zones environment Choose one of the following installation options to install an OpenShift Container Platform cluster on AWS with edge compute nodes defined in Wavelength Zones: Fully automated option: Installing a cluster to quickly extend compute nodes to edge compute pools, where the installation program automatically creates infrastructure resources for the OpenShift Container Platform cluster. Existing VPC option: Installing a cluster on AWS into an existing VPC, where you supply Wavelength Zones subnets to the install-config.yaml file. 
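Whichever option you choose, it can save a failed deployment to confirm in advance that the instance type you plan to set for the edge compute pool is actually offered in your target Wavelength Zone. The following is a minimal check with the AWS CLI; the zone name and instance type shown here are only examples, so substitute your own values.
Example command for checking instance type offerings in a Wavelength Zone
$ aws ec2 describe-instance-type-offerings \
    --location-type availability-zone \
    --filters Name=location,Values=us-east-1-wl1-nyc-wlz-1 Name=instance-type,Values=r5.2xlarge \
    --region us-east-1
If the command returns an empty list, choose a different instance type for the edge compute pool or a different zone.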
steps Choose one of the following options to install an OpenShift Container Platform cluster in an AWS Wavelength Zones environment: Installing a cluster quickly in AWS Wavelength Zones Modifying an installation configuration file to use AWS Wavelength Zones 3.12.6. Install a cluster quickly in AWS Wavelength Zones For OpenShift Container Platform 4.16, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Wavelength Zones locations. By using this installation route, the installation program automatically creates network resources and Wavelength Zones subnets for each zone that you defined in your configuration file. To customize the installation, you must modify parameters in the install-config.yaml file before you deploy the cluster. 3.12.6.1. Modifying an installation configuration file to use AWS Wavelength Zones Modify an install-config.yaml file to include AWS Wavelength Zones. Prerequisites You have configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You are familiar with the configuration limitations that apply when you specify the installation program to automatically create subnets for your OpenShift Container Platform cluster. You opted in to the Wavelength Zones group for each zone. You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml file by specifying Wavelength Zones names in the platform.aws.zones property of the edge compute pool. # ... platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <wavelength_zone_name> #... 1 The AWS Region name. 2 The list of Wavelength Zones names that you use must exist in the same AWS Region specified in the platform.aws.region field. Example of a configuration to install a cluster in the us-west-2 AWS Region that extends edge nodes to Wavelength Zones in Los Angeles and Las Vegas locations apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-wl1-lax-wlz-1 - us-west-2-wl1-las-wlz-1 pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' #... Deploy your cluster. Additional resources Creating the installation configuration file Cluster limitations in AWS Wavelength Zones steps Deploying the cluster 3.12.7. Installing a cluster in an existing VPC that has Wavelength Zone subnets You can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster. Installing a cluster on AWS into an existing VPC requires extending compute nodes to the edge of the Cloud Infrastructure by using AWS Wavelength Zones. You can use a provided CloudFormation template to create network resources. Additionally, you can modify a template to customize your infrastructure or use the information that they contain to create AWS resources according to your company's policies. Important The steps for performing an installer-provisioned infrastructure installation are provided for example purposes only. Installing a cluster in an existing VPC requires that you have knowledge of the cloud provider and the installation process of OpenShift Container Platform. 
You can use a CloudFormation template to assist you with completing these steps or to help model your own cluster installation. Instead of using the CloudFormation template to create resources, you can decide to use other methods for generating these resources. 3.12.7.1. Creating a VPC in AWS You can create a Virtual Private Cloud (VPC), and subnets for all Wavelength Zones locations, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend compute nodes to edge locations. You can further customize your VPC to meet your requirements, including a VPN and route tables. You can also add new Wavelength Zones subnets not included at initial deployment. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and AWS Region to your local AWS profile by running aws configure . You opted in to the AWS Wavelength Zones on your AWS account. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "3" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Go to the section of the documentation named "CloudFormation template for the VPC", and then copy the syntax from the provided template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path and the name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster. VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. PublicRouteTableId The ID of the new public route table ID. 3.12.7.2. 
CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 3.35. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC 
PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 3.12.7.3. Creating a VPC carrier gateway To use public subnets in your OpenShift Container Platform cluster that runs on Wavelength Zones, you must create the carrier gateway and associate the carrier gateway to the VPC. Subnets are useful for deploying load balancers or edge compute nodes. To create edge nodes or internet-facing load balancers in Wavelength Zones locations for your OpenShift Container Platform cluster, you must create the following required network components: A carrier gateway that associates to the existing VPC. A carrier route table that lists route entries. A subnet that associates to the carrier route table. Carrier gateways exist for VPCs that only contain subnets in a Wavelength Zone. The following list explains the functions of a carrier gateway in the context of an AWS Wavelength Zones location: Provides connectivity between your Wavelength Zone and the carrier network, which includes any available devices from the carrier network. Performs Network Address Translation (NAT) functions, such as translating IP addresses that are public IP addresses stored in a network border group, from Wavelength Zones to carrier IP addresses. These translation functions apply to inbound and outbound traffic. Authorizes inbound traffic from a carrier network that is located in a specific location. Authorizes outbound traffic to a carrier network and the internet. Note No inbound connection configuration exists from the internet to a Wavelength Zone through the carrier gateway. You can use the provided CloudFormation template to create a stack of the following AWS resources: One carrier gateway that associates to the VPC ID in the template. One public route table for the Wavelength Zone named as <ClusterName>-public-carrier . Default IPv4 route entry in the new route table that targets the carrier gateway. VPC gateway endpoint for an AWS Simple Storage Service (S3). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . Procedure Go to the section of the documentation named "CloudFormation template for the VPC Carrier Gateway", and then copy the syntax from the CloudFormation template for VPC Carrier Gateway template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. 
Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VpcId}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{ClusterName}" 4 1 <stack_name> is the name for the CloudFormation stack, such as clusterName-vpc-carrier-gw . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 <VpcId> is the VPC ID extracted from the CloudFormation stack output created in the section named "Creating a VPC in AWS". 4 <ClusterName> is a custom value that is prefixed to the resources that the CloudFormation stack creates. You can use the same name that is defined in the metadata.name section of the install-config.yaml configuration file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f Verification Confirm that the CloudFormation template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameter. Ensure that you provide the parameter value to the other CloudFormation templates that you run to create resources for your cluster. PublicRouteTableId The ID of the Route Table in the Carrier infrastructure. Additional resources See Amazon S3 in the AWS documentation. 3.12.7.4. CloudFormation template for the VPC Carrier Gateway You can use the following CloudFormation template to deploy the Carrier Gateway on AWS Wavelength infrastructure. Example 3.36. CloudFormation template for VPC Carrier Gateway AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: "AWS::EC2::CarrierGateway" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "cagw"]] PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public-carrier"]] PublicRoute: Type: "AWS::EC2::Route" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable
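If you plan to create the Wavelength Zone subnets with the template in the next section, you can capture the PublicRouteTableId output of the carrier gateway stack now, because the subnet template consumes it as its public route table parameter. A minimal sketch, reusing the stack name and region from the previous command:
$ export ROUTE_TABLE_PUB=$(aws cloudformation describe-stacks \
    --stack-name "<stack_name>" \
    --region "${CLUSTER_REGION}" \
    --query "Stacks[0].Outputs[?OutputKey=='PublicRouteTableId'].OutputValue" \
    --output text)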
3.12.7.5. Creating subnets in Wavelength Zones Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create the subnets in Wavelength Zones. Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Wavelength Zones group. Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<wavelength_zone_shortname> . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcId in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the name of the Wavelength Zone in which to create the subnets. 6 USD{ROUTE_TABLE_PUB} is the PublicRouteTableId extracted from the output of the VPC's carrier gateway CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create resources for your cluster. PublicSubnetId The ID of the public subnet created by the CloudFormation stack. PrivateSubnetId The ID of the private subnet created by the CloudFormation stack. 3.12.7.6.
CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Wavelength Zones infrastructure. Example 3.37. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] 3.12.7.7. Modifying an installation configuration file to use AWS Wavelength Zones subnets Modify your install-config.yaml file to include Wavelength Zones subnets. Prerequisites You created subnets by using the procedure "Creating subnets in Wavelength Zones". You created an install-config.yaml file by using the procedure "Creating the installation configuration file". Procedure Modify the install-config.yaml configuration file by specifying Wavelength Zones subnets in the platform.aws.subnets parameter. 
Example installation configuration file with Wavelength Zones subnets # ... platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicOrPrivateSubnetID-Wavelength-1 # ... 1 List of subnet IDs created in the zones: Availability and Wavelength Zones. Additional resources For more information about viewing the CloudFormation stacks that you created, see AWS CloudFormation console . For more information about AWS profile and credential configuration, see Configuration and credential file settings in the AWS documentation. steps Deploying the cluster 3.12.8. Optional: Assign public IP addresses to edge compute nodes If your workload requires deploying the edge compute nodes in public subnets on Wavelength Zones infrastructure, you can configure the machine set manifests when installing a cluster. AWS Wavelength Zones infrastructure accesses the network traffic in a specified zone, so applications can take advantage of lower latency when serving end users that are closer to that zone. The default setting that deploys compute nodes in private subnets might not meet your needs, so consider creating edge compute nodes in public subnets when you want to apply more customization to your infrastructure. Important By default, OpenShift Container Platform deploys the compute nodes in private subnets. For best performance, consider placing compute nodes in subnets that have public IP addresses attached. You must create additional security groups, but ensure that you open the groups' rules to the internet only when you really need to. Procedure Change to the directory that contains the installation program and generate the manifest files. Ensure that the installation manifests get created at the openshift and manifests directory level. USD ./openshift-install create manifests --dir <installation_directory> Edit the machine set manifest that the installation program generates for the Wavelength Zones, so that the manifest gets deployed in public subnets. Specify true for the spec.template.spec.providerSpec.value.publicIp parameter. Example machine set manifest configuration for installing a cluster quickly in Wavelength Zones spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME} Example machine set manifest configuration for installing a cluster in an existing VPC that has Wavelength Zones subnets apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true 3.12.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
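Before you start the deployment, you can confirm which AWS identity the installation program will use; this catches a wrong profile or an expired session before any resources are created. For example:
$ aws sts get-caller-identity
The Arn value in the output should correspond to the IAM user or role that you prepared with the required installation permissions.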
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.12.10. Verifying the status of the deployed cluster Verify that your OpenShift Container Platform successfully deployed on AWS Wavelength Zones. 3.12.10.1. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.12.10.2. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. 
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.12.10.3. Verifying nodes that were created with edge compute pool After you install a cluster that uses AWS Wavelength Zones infrastructure, check the status of the machine that was created by the machine set manifests created during installation. To check the machine sets created from the subnet you added to the install-config.yaml file, run the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m To check the machines that were created from the machine sets, run the following command: USD oc get machines -n openshift-machine-api Example output To check nodes with edge roles, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f steps Validating an installation . If necessary, you can opt out of remote health . 3.13. Extending an AWS VPC cluster into an AWS Outpost In OpenShift Container Platform version 4.14, you could install a cluster on Amazon Web Services (AWS) with compute nodes running in AWS Outposts as a Technology Preview. As of OpenShift Container Platform version 4.15, this installation method is no longer supported. Instead, you can install a cluster on AWS into an existing VPC, and provision compute nodes on AWS Outposts as a postinstallation configuration task. After installing a cluster on Amazon Web Services (AWS) into an existing Amazon Virtual Private Cloud (VPC) , you can create a compute machine set that deploys compute machines in AWS Outposts. AWS Outposts is an AWS edge compute service that enables using many features of a cloud-based AWS deployment with the reduced latency of an on-premise environment. For more information, see the AWS Outposts documentation . 3.13.1. 
AWS Outposts on OpenShift Container Platform requirements and limitations You can manage the resources on your AWS Outpost similarly to those on a cloud-based AWS cluster if you configure your OpenShift Container Platform cluster to accommodate the following requirements and limitations: To extend an OpenShift Container Platform cluster on AWS into an Outpost, you must have installed the cluster into an existing Amazon Virtual Private Cloud (VPC). The infrastructure of an Outpost is tied to an availability zone in an AWS region and uses a dedicated subnet. Edge compute machines deployed into an Outpost must use the Outpost subnet and the availability zone that the Outpost is tied to. When the AWS Kubernetes cloud controller manager discovers an Outpost subnet, it attempts to create service load balancers in the Outpost subnet. AWS Outposts do not support running service load balancers. To prevent the cloud controller manager from creating unsupported services in the Outpost subnet, you must include the kubernetes.io/cluster/unmanaged tag in the Outpost subnet configuration. This requirement is a workaround in OpenShift Container Platform version 4.16. For more information, see OCPBUGS-30041 . OpenShift Container Platform clusters on AWS include the gp3-csi and gp2-csi storage classes. These classes correspond to Amazon Elastic Block Store (EBS) gp3 and gp2 volumes. OpenShift Container Platform clusters use the gp3-csi storage class by default, but AWS Outposts does not support EBS gp3 volumes. This implementation uses the node-role.kubernetes.io/outposts taint to prevent spreading regular cluster workloads to the Outpost nodes. To schedule user workloads in the Outpost, you must specify a corresponding toleration in the Deployment resource for your application. Reserving the AWS Outpost infrastructure for user workloads avoids additional configuration requirements, such as updating the default CSI to gp2-csi so that it is compatible. To create a volume in the Outpost, the CSI driver requires the Outpost Amazon Resource Name (ARN). The driver uses the topology keys stored on the CSINode objects to determine the Outpost ARN. To ensure that the driver uses the correct topology values, you must set the volume binding mode to WaitForFirstConsumer and avoid setting allowed topologies on any new storage classes that you create. When you extend an AWS VPC cluster into an Outpost, you have two types of compute resources. The Outpost has edge compute nodes, while the VPC has cloud-based compute nodes. Cloud-based AWS Elastic Block Store volumes cannot attach to Outpost edge compute nodes, and Outpost volumes cannot attach to cloud-based compute nodes. As a result, you cannot use CSI snapshots to migrate applications that use persistent storage from cloud-based compute nodes to edge compute nodes or directly use the original persistent volume. To migrate persistent storage data for applications, you must perform a manual backup and restore operation. AWS Outposts does not support AWS Network Load Balancers or AWS Classic Load Balancers. You must use AWS Application Load Balancers to enable load balancing for edge compute resources in the AWS Outposts environment. To provision an Application Load Balancer, you must use an Ingress resource and install the AWS Load Balancer Operator. If your cluster contains both edge and cloud-based compute instances that share workloads, additional configuration is required. For more information, see "Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost".
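As noted above, the Outpost subnet must carry the kubernetes.io/cluster/unmanaged tag so that the cloud controller manager does not attempt to create service load balancers in it. The following is a minimal sketch of applying the tag with the AWS CLI; the subnet ID is a placeholder, and the tag value true reflects common usage rather than a value stated in this section, so confirm the expected value for your OpenShift Container Platform version before applying it.
$ aws ec2 create-tags \
    --resources <outpost_subnet_id> \
    --tags Key=kubernetes.io/cluster/unmanaged,Value=true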
For more information, see "Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost". Additional resources Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost 3.13.2. Obtaining information about your environment To extend an AWS VPC cluster to your Outpost, you must provide information about your OpenShift Container Platform cluster and your Outpost environment. You use this information to complete network configuration tasks and configure a compute machine set that creates compute machines in your Outpost. You can use command-line tools to gather the required details. 3.13.2.1. Obtaining information from your OpenShift Container Platform cluster You can use the OpenShift CLI ( oc ) to obtain information from your OpenShift Container Platform cluster. Tip You might find it convenient to store some or all of these values as environment variables by using the export command. Prerequisites You have installed an OpenShift Container Platform cluster into a custom VPC on AWS. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the infrastructure ID for the cluster by running the following command. Retain this value. USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructures.config.openshift.io cluster Obtain details about the compute machine sets that the installation program created by running the following commands: List the compute machine sets on your cluster: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m Display the Amazon Machine Image (AMI) ID for one of the listed compute machine sets. Retain this value. USD oc get machinesets.machine.openshift.io <compute_machine_set_name_1> \ -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}' Display the subnet ID for the AWS VPC cluster. Retain this value. USD oc get machinesets.machine.openshift.io <compute_machine_set_name_1> \ -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet.id}' 3.13.2.2. Obtaining information from your AWS account You can use the AWS CLI ( aws ) to obtain information from your AWS account. Tip You might find it convenient to store some or all of these values as environment variables by using the export command. Prerequisites You have an AWS Outposts site with the required hardware setup complete. Your Outpost is connected to your AWS account. You have access to your AWS account by using the AWS CLI ( aws ) as a user with permissions to perform the required tasks. Procedure List the Outposts that are connected to your AWS account by running the following command: USD aws outposts list-outposts Retain the following values from the output of the aws outposts list-outposts command: The Outpost ID. The Amazon Resource Name (ARN) for the Outpost. The Outpost availability zone. Note The output of the aws outposts list-outposts command includes two values related to the availability zone: AvailabilityZone and AvailabilityZoneId . You use the AvailablilityZone value to configure a compute machine set that creates compute machines in your Outpost. Using the value of the Outpost ID, show the instance types that are available in your Outpost by running the following command. Retain the values of the available instance types. 
USD aws outposts get-outpost-instance-types \ --outpost-id <outpost_id_value> Using the value of the Outpost ARN, show the subnet ID for the Outpost by running the following command. Retain this value. USD aws ec2 describe-subnets \ --filters Name=outpost-arn,Values=<outpost_arn_value> 3.13.3. Configuring your network for your Outpost To extend your VPC cluster into an Outpost, you must complete the following network configuration tasks: Change the Cluster Network MTU. Create a subnet in your Outpost. 3.13.3.1. Changing the cluster network MTU to support AWS Outposts During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet. Important The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect. For more details about the migration process, including important service interruption considerations, see "Changing the MTU for the cluster network" in the additional resources for this procedure. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster. The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23 ... To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to> . For OVN-Kubernetes, this value must be 100 less than the value of <machine_to> . For OpenShift SDN, this value must be 50 less than the value of <machine_to> . <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that decreases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 1000 } , "machine": { "to" : 1100} } } } }' As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. 
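Rather than repeatedly polling oc get machineconfigpools, you can optionally block until every pool reports the Updated condition. This command is not part of the documented procedure; it is a convenience sketch that assumes the standard Updated condition on the MachineConfigPool resource and a timeout suited to your cluster size.

$ oc wait machineconfigpool --all --for=condition=Updated --timeout=60m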
Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Finalize the MTU migration for your plugin. In both example commands, <mtu> specifies the new cluster network MTU that you specified with <overlay_to> . To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' To finalize the MTU migration, enter the following command for the OpenShift SDN network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification Verify that the node in your cluster uses the MTU that you specified by entering the following command: USD oc describe network.config cluster Additional resources Changing the MTU for the cluster network 3.13.3.2. Creating subnets for AWS edge compute services Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create a subnet in AWS Outposts. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You have obtained the required information about your environment from your OpenShift Container Platform cluster, Outpost, and AWS account. Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. 
Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" \ 9 ParameterKey=PrivateSubnetLabel,ParameterValue="private-outpost" \ ParameterKey=PublicSubnetLabel,ParameterValue="public-outpost" \ ParameterKey=OutpostArn,ParameterValue="USD{OUTPOST_ARN}" 10 1 <stack_name> is the name for the CloudFormation stack, such as cluster-<outpost_name> . 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcID in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the value of AWS Outposts name to create the subnets. 6 USD{ROUTE_TABLE_PUB} is the Public Route Table ID created in the USD{VPC_ID} used to associate the public subnets on Outposts. Specify the public route table to associate the Outpost subnet created by this stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the Private Route Table ID created in the USD{VPC_ID} used to associate the private subnets on Outposts. Specify the private route table to associate the Outpost subnet created by this stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . 10 USD{OUTPOST_ARN} is the Amazon Resource Name (ARN) for the Outpost. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters: PublicSubnetId The IDs of the public subnet created by the CloudFormation stack. PrivateSubnetId The IDs of the private subnet created by the CloudFormation stack. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create for your cluster. 3.13.3.3. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the Outpost subnet. Example 3.38. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. 
Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String PrivateSubnetLabel: Default: "private" Description: Subnet label to be added when building the subnet name. Type: String PublicSubnetLabel: Default: "public" Description: Subnet label to be added when building the subnet name. Type: String OutpostArn: Default: "" Description: OutpostArn when creating subnets on AWS Outpost. Type: String Conditions: OutpostEnabled: !Not [!Equals [!Ref "OutpostArn", ""]] Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref "AWS::NoValue"] Tags: - Key: Name Value: !Join ['-', [ !Ref ClusterName, !Ref PublicSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 1 Value: true PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref "AWS::NoValue"] Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, !Ref PrivateSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 2 Value: true PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] 1 You must include the kubernetes.io/cluster/unmanaged tag in the public subnet configuration for AWS Outposts. 2 You must include the kubernetes.io/cluster/unmanaged tag in the private subnet configuration for AWS Outposts. 3.13.4. 
Creating a compute machine set that deploys edge compute machines on an Outpost To create edge compute machines on AWS Outposts, you must create a new compute machine set with a compatible configuration. Prerequisites You have an AWS Outposts site. You have installed an OpenShift Container Platform cluster into a custom VPC on AWS. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m Record the names of the existing compute machine sets. Create a YAML file that contains the values for a new compute machine set custom resource (CR) by using one of the following methods: Copy an existing compute machine set configuration into a new file by running the following command: USD oc get machinesets.machine.openshift.io <original_machine_set_name_1> \ -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml You can edit this YAML file with your preferred text editor. Create an empty YAML file named <new_machine_set_name_1>.yaml with your preferred text editor and include the required values for your new compute machine set. If you are not sure which value to set for a specific field, you can view values of an existing compute machine set CR by running the following command: USD oc get machinesets.machine.openshift.io <original_machine_set_name_1> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> spec: providerSpec: 3 # ... 1 The cluster infrastructure ID. 2 A default node label. For AWS Outposts, you use the outposts role. 3 The omitted providerSpec section includes values that must be configured for your Outpost. 
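If you exported the values that you gathered earlier, you can keep them in environment variables while you edit the new file. The variable names below are illustrative assumptions, not values defined by this procedure; the oc command is the same one shown in "Obtaining information from your OpenShift Container Platform cluster".

$ export INFRA_ID=$(oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructures.config.openshift.io cluster)
$ export OUTPOST_AMI_ID=<ami_id>           # AMI ID retained from the existing compute machine set
$ export OUTPOST_SUBNET_ID=<subnet_id>     # subnet ID returned by 'aws ec2 describe-subnets' for the Outpost ARN
$ export OUTPOST_AZ=<availability_zone>    # AvailabilityZone value from 'aws outposts list-outposts'

You can then substitute these values into the <new_machine_set_name_1>.yaml file with your preferred editor or a tool such as sed.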
Configure the new compute machine set to create edge compute machines in the Outpost by editing the <new_machine_set_name_1>.yaml file: Example compute machine set for AWS Outposts apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-outposts-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: outposts machine.openshift.io/cluster-api-machine-type: outposts machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> spec: metadata: labels: node-role.kubernetes.io/outposts: "" location: outposts providerSpec: value: ami: id: <ami_id> 3 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: volumeSize: 120 volumeType: gp2 4 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m5.xlarge 5 kind: AWSMachineProviderConfig placement: availabilityZone: <availability_zone> region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg subnet: id: <subnet_id> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned userDataSecret: name: worker-user-data taints: 8 - key: node-role.kubernetes.io/outposts effect: NoSchedule 1 Specifies the cluster infrastructure ID. 2 Specifies the name of the compute machine set. The name is composed of the cluster infrastructure ID, the outposts role name, and the Outpost availability zone. 3 Specifies the Amazon Machine Image (AMI) ID. 4 Specifies the EBS volume type. AWS Outposts requires gp2 volumes. 5 Specifies the AWS instance type. You must use an instance type that is configured in your Outpost. 6 Specifies the AWS region in which the Outpost availability zone exists. 7 Specifies the dedicated subnet for your Outpost. 8 Specifies a taint to prevent workloads from being scheduled on nodes that have the node-role.kubernetes.io/outposts label. To schedule user workloads in the Outpost, you must specify a corresponding toleration in the Deployment resource for your application. Save your changes. 
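Optionally, before creating the resource, you can ask the API server to validate the edited manifest without persisting it. This step is not part of the documented procedure; it is a sketch that relies on the standard --dry-run=server option of oc create.

$ oc create -f <new_machine_set_name_1>.yaml --dry-run=server -o yaml

If the manifest is invalid, the command reports the validation error and no MachineSet object is created.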
Create a compute machine set CR by running the following command: USD oc create -f <new_machine_set_name_1>.yaml Verification To verify that the compute machine set is created, list the compute machine sets in your cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m To list the machines that are managed by the new compute machine set, run the following command: USD oc get -n openshift-machine-api machines.machine.openshift.io \ -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned m5.xlarge us-east-1 us-east-1a 25s <machine_from_new_2> Provisioning m5.xlarge us-east-1 us-east-1a 25s To verify that a machine created by the new compute machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine <machine_from_new_1> -n openshift-machine-api 3.13.5. Creating user workloads in an Outpost After you extend an OpenShift Container Platform in an AWS VPC cluster into an Outpost, you can use edge compute nodes with the label node-role.kubernetes.io/outposts to create user workloads in the Outpost. Prerequisites You have extended an AWS VPC cluster into an Outpost. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created a compute machine set that deploys edge compute machines compatible with the Outpost environment. Procedure Configure a Deployment resource file for an application that you want to deploy to the edge compute node in the edge subnet. Example Deployment manifest kind: Namespace apiVersion: v1 metadata: name: <application_name> 1 --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <application_name> namespace: <application_namespace> 2 spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 3 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment metadata: name: <application_name> namespace: <application_namespace> spec: selector: matchLabels: app: <application_name> replicas: 1 template: metadata: labels: app: <application_name> location: outposts 4 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 5 node-role.kubernetes.io/outpost: '' tolerations: 6 - key: "node-role.kubernetes.io/outposts" operator: "Equal" value: "" effect: "NoSchedule" containers: - image: openshift/origin-node command: - "/bin/socat" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"' imagePullPolicy: Always name: <application_name> ports: - containerPort: 8080 volumeMounts: - mountPath: "/mnt/storage" name: data volumes: - name: data persistentVolumeClaim: claimName: <application_name> 1 Specify a name for your application. 2 Specify a namespace for your application. The application namespace can be the same as the application name. 3 Specify the storage class name. For an edge compute configuration, you must use the gp2-csi storage class. 4 Specify a label to identify workloads deployed in the Outpost. 5 Specify the node selector label that targets edge compute nodes. 
6 Specify tolerations that match the key and effects taints in the compute machine set for your edge compute machines. Set the value and operator tolerations as shown. Create the Deployment resource by running the following command: USD oc create -f <application_deployment>.yaml Configure a Service object that exposes a pod from a targeted edge compute node to services that run inside your edge network. Example Service manifest apiVersion: v1 kind: Service 1 metadata: name: <application_name> namespace: <application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <application_name> 1 Defines the service resource. 2 Specify the label type to apply to managed pods. Create the Service CR by running the following command: USD oc create -f <application_service>.yaml 3.13.6. Scheduling workloads on edge and cloud-based AWS compute resources When you extend an AWS VPC cluster into an Outpost, the Outpost uses edge compute nodes and the VPC uses cloud-based compute nodes. The following load balancer considerations apply to an AWS VPC cluster extended into an Outpost: Outposts cannot run AWS Network Load Balancers or AWS Classic Load Balancers, but a Classic Load Balancer for a VPC cluster extended into an Outpost can attach to the Outpost edge compute nodes. For more information, see Using AWS Classic Load Balancers in an AWS VPC cluster extended into an Outpost . To run a load balancer on an Outpost instance, you must use an AWS Application Load Balancer. You can use the AWS Load Balancer Operator to deploy an instance of the AWS Load Balancer Controller. The controller provisions AWS Application Load Balancers for Kubernetes Ingress resources. For more information, see Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost . 3.13.6.1. Using AWS Classic Load Balancers in an AWS VPC cluster extended into an Outpost AWS Outposts infrastructure cannot run AWS Classic Load Balancers, but Classic Load Balancers in the AWS VPC cluster can target edge compute nodes in the Outpost if edge and cloud-based subnets are in the same availability zone. As a result, Classic Load Balancers on the VPC cluster might schedule pods on either of these node types. Scheduling the workloads on edge compute nodes and cloud-based compute nodes can introduce latency. If you want to prevent a Classic Load Balancer in the VPC cluster from targeting Outpost edge compute nodes, you can apply labels to the cloud-based compute nodes and configure the Classic Load Balancer to only schedule on nodes with the applied labels. Note If you do not need to prevent a Classic Load Balancer in the VPC cluster from targeting Outpost edge compute nodes, you do not need to complete these steps. Prerequisites You have extended an AWS VPC cluster into an Outpost. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created a user workload in the Outpost with tolerations that match the taints for your edge compute machines. 
Procedure Optional: Verify that the edge compute nodes have the location=outposts label by running the following command and verifying that the output includes only the edge compute nodes in your Outpost: USD oc get nodes -l location=outposts Label the cloud-based compute nodes in the VPC cluster with a key-value pair by running the following command: USD for NODE in USD(oc get node -l node-role.kubernetes.io/worker --no-headers | grep -v outposts | awk '{printUSD1}'); do oc label node USDNODE <key_name>=<value>; done where <key_name>=<value> is the label you want to use to distinguish cloud-based compute nodes. Example output node1.example.com labeled node2.example.com labeled node3.example.com labeled Optional: Verify that the cloud-based compute nodes have the specified label by running the following command and confirming that the output includes all cloud-based compute nodes in your VPC cluster: USD oc get nodes -l <key_name>=<value> Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.29.4 node2.example.com Ready worker 7h v1.29.4 node3.example.com Ready worker 7h v1.29.4 Configure the Classic Load Balancer service by adding the cloud-based subnet information to the annotations field of the Service manifest: Example service configuration apiVersion: v1 kind: Service metadata: labels: app: <application_name> name: <application_name> namespace: <application_namespace> annotations: service.beta.kubernetes.io/aws-load-balancer-subnets: <aws_subnet> 1 service.beta.kubernetes.io/aws-load-balancer-target-node-labels: <key_name>=<value> 2 spec: ports: - name: http port: 80 protocol: TCP targetPort: 8080 selector: app: <application_name> type: LoadBalancer 1 Specify the subnet ID for the AWS VPC cluster. 2 Specify the key-value pair that matches the pair in the node label. Create the Service CR by running the following command: USD oc create -f <file_name>.yaml Verification Verify the status of the service resource to show the host of the provisioned Classic Load Balancer by running the following command: USD HOST=USD(oc get service <application_name> -n <application_namespace> --template='{{(index .status.loadBalancer.ingress 0).hostname}}') Verify the status of the provisioned Classic Load Balancer host by running the following command: USD curl USDHOST In the AWS console, verify that only the labeled instances appear as the targeted instances for the load balancer. 3.13.6.2. Using the AWS Load Balancer Operator in an AWS VPC cluster extended into an Outpost You can configure the AWS Load Balancer Operator to provision an AWS Application Load Balancer in an AWS VPC cluster extended into an Outpost. AWS Outposts does not support AWS Network Load Balancers. As a result, the AWS Load Balancer Operator cannot provision Network Load Balancers in an Outpost. You can create an AWS Application Load Balancer either in the cloud subnet or in the Outpost subnet. An Application Load Balancer in the cloud can attach to cloud-based compute nodes and an Application Load Balancer in the Outpost can attach to edge compute nodes. You must annotate Ingress resources with the Outpost subnet or the VPC subnet, but not both. Prerequisites You have extended an AWS VPC cluster into an Outpost. You have installed the OpenShift CLI ( oc ). You have installed the AWS Load Balancer Operator and created the AWS Load Balancer Controller. 
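Before you create Ingress resources, you can optionally confirm that the AWS Load Balancer Controller deployment is available. The namespace in this sketch is an assumption based on the Operator's default installation namespace; adjust it if you installed the Operator elsewhere.

$ oc get deployment -n aws-load-balancer-operator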
Procedure Configure the Ingress resource to use a specified subnet: Example Ingress resource configuration apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <application_name> annotations: alb.ingress.kubernetes.io/subnets: <subnet_id> 1 spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <application_name> port: number: 80 1 Specifies the subnet to use. To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID. To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones. Additional resources Creating the AWS Load Balancer Controller 3.13.7. Additional resources Installing a cluster on AWS into an existing VPC
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 
--credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 
--credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 
--to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 
--credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones", "aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #", "apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' 
#", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: 
\"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. 
Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]", "platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1", "./openshift-install create manifests --dir <installation_directory>", "spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h", "oc get nodes -l node-role.kubernetes.io/edge", "NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ 
\"ec2:DeleteCarrierGateway\", \"ec2:CreateCarrierGateway\" ], \"Resource\": \"*\" }, { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=wavelength-zone --all-availability-zones", "aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <wavelength_zone_name> #", "apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-wl1-lax-wlz-1 - us-west-2-wl1-las-wlz-1 pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' #", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: 
DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters \\// ParameterKey=VpcId,ParameterValue=\"USD{VpcId}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{ClusterName}\" 4", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. 
Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: \"AWS::EC2::CarrierGateway\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"cagw\"]] PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public-carrier\"]] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. 
PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]", "platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicOrPrivateSubnetID-Wavelength-1", "./openshift-install create manifests --dir <installation_directory>", "spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1-wbclh Running c5d.2xlarge us-east-1 us-east-1-wl1-nyc-wlz-1 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h", "oc get nodes -l node-role.kubernetes.io/edge", "NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructures.config.openshift.io cluster", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m", "oc get machinesets.machine.openshift.io <compute_machine_set_name_1> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}'", "oc get machinesets.machine.openshift.io <compute_machine_set_name_1> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet.id}'", "aws outposts list-outposts", "aws outposts get-outpost-instance-types --outpost-id <outpost_id_value>", "aws ec2 describe-subnets --filters Name=outpost-arn,Values=<outpost_arn_value>", "oc describe network.config cluster", "Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 1000 } , \"machine\": { \"to\" : 1100} } } } }'", "oc get machineconfigpools", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: 
Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/mtu-migration.sh", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'", "oc get machineconfigpools", "oc describe network.config cluster", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" \\ 9 ParameterKey=PrivateSubnetLabel,ParameterValue=\"private-outpost\" ParameterKey=PublicSubnetLabel,ParameterValue=\"public-outpost\" ParameterKey=OutpostArn,ParameterValue=\"USD{OUTPOST_ARN}\" 10", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String PrivateSubnetLabel: Default: \"private\" Description: Subnet label to be added when building the subnet name. Type: String PublicSubnetLabel: Default: \"public\" Description: Subnet label to be added when building the subnet name. 
Type: String OutpostArn: Default: \"\" Description: OutpostArn when creating subnets on AWS Outpost. Type: String Conditions: OutpostEnabled: !Not [!Equals [!Ref \"OutpostArn\", \"\"]] Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref \"AWS::NoValue\"] Tags: - Key: Name Value: !Join ['-', [ !Ref ClusterName, !Ref PublicSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 1 Value: true PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref \"AWS::NoValue\"] Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, !Ref PrivateSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 2 Value: true PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m", "oc get machinesets.machine.openshift.io <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml", "oc get machinesets.machine.openshift.io <original_machine_set_name_1> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-outposts-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: outposts machine.openshift.io/cluster-api-machine-type: outposts machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> spec: metadata: labels: node-role.kubernetes.io/outposts: \"\" location: outposts providerSpec: value: ami: id: <ami_id> 3 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 
volumeSize: 120 volumeType: gp2 4 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m5.xlarge 5 kind: AWSMachineProviderConfig placement: availabilityZone: <availability_zone> region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg subnet: id: <subnet_id> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned userDataSecret: name: worker-user-data taints: 8 - key: node-role.kubernetes.io/outposts effect: NoSchedule", "oc create -f <new_machine_set_name_1>.yaml", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m", "oc get -n openshift-machine-api machines.machine.openshift.io -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned m5.xlarge us-east-1 us-east-1a 25s <machine_from_new_2> Provisioning m5.xlarge us-east-1 us-east-1a 25s", "oc describe machine <machine_from_new_1> -n openshift-machine-api", "kind: Namespace apiVersion: v1 metadata: name: <application_name> 1 --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <application_name> namespace: <application_namespace> 2 spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 3 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment metadata: name: <application_name> namespace: <application_namespace> spec: selector: matchLabels: app: <application_name> replicas: 1 template: metadata: labels: app: <application_name> location: outposts 4 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 5 node-role.kubernetes.io/outpost: '' tolerations: 6 - key: \"node-role.kubernetes.io/outposts\" operator: \"Equal\" value: \"\" effect: \"NoSchedule\" containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: <application_name> ports: - containerPort: 8080 volumeMounts: - mountPath: \"/mnt/storage\" name: data volumes: - name: data persistentVolumeClaim: claimName: <application_name>", "oc create -f <application_deployment>.yaml", "apiVersion: v1 kind: Service 1 metadata: name: <application_name> namespace: <application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <application_name>", "oc create -f <application_service>.yaml", "oc get nodes -l location=outposts", "for NODE in USD(oc get node -l node-role.kubernetes.io/worker --no-headers | grep -v outposts | awk '{printUSD1}'); do oc label node USDNODE <key_name>=<value>; done", "node1.example.com labeled node2.example.com labeled node3.example.com labeled", "oc get nodes -l <key_name>=<value>", "NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.29.4 node2.example.com Ready worker 7h v1.29.4 node3.example.com Ready worker 7h v1.29.4", "apiVersion: v1 kind: Service metadata: labels: app: <application_name> name: <application_name> namespace: <application_namespace> annotations: service.beta.kubernetes.io/aws-load-balancer-subnets: <aws_subnet> 1 service.beta.kubernetes.io/aws-load-balancer-target-node-labels: <key_name>=<value> 2 spec: ports: - name: http 
port: 80 protocol: TCP targetPort: 8080 selector: app: <application_name> type: LoadBalancer", "oc create -f <file_name>.yaml", "HOST=USD(oc get service <application_name> -n <application_namespace> --template='{{(index .status.loadBalancer.ingress 0).hostname}}')", "curl USDHOST", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <application_name> annotations: alb.ingress.kubernetes.io/subnets: <subnet_id> 1 spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <application_name> port: number: 80" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_aws/installer-provisioned-infrastructure
Chapter 3. Using Camel with Spring XML
Chapter 3. Using Camel with Spring XML Using Camel with Spring XML files is a way of using the XML DSL with Camel. Camel has supported Spring XML for a long time, because the Spring framework started out with XML files as a popular and common way to configure Spring applications. Example of Spring application 3.1. Specifying Camel routes using Spring XML You can use Spring XML files to specify Camel routes using XML DSL as shown: <camelContext id="camel-A" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="seda:start"/> <to uri="mock:result"/> </route> </camelContext> 3.2. Configuring Components and Endpoints You can configure your Component or Endpoint instances in your Spring XML as shown in the following example. <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> </camelContext> <bean id="jmsConnectionFactory" class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory"> <property name="brokerURL" value="tcp:someserver:61616"/> </bean> <bean id="jms" class="org.apache.camel.component.jms.JmsComponent"> <property name="connectionFactory"> <bean class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory"> <property name="brokerURL" value="tcp:someserver:61616"/> </bean> </property> </bean> This allows you to configure a component using any name, but it is common to use the same name as the URI scheme, for example, jms. Then you can refer to the component using jms:destinationName. This works by Camel fetching components from the Spring context for the scheme name you use in Endpoint URIs. 3.3. Using Java DSL with Spring XML files You can use Java code to define your RouteBuilder implementations. These are defined as beans in Spring and then referenced in your Camel context, as shown: <camelContext xmlns="http://camel.apache.org/schema/spring"> <routeBuilder ref="myBuilder"/> </camelContext> <bean id="myBuilder" class="org.apache.camel.spring.example.test1.MyRouteBuilder"/> 3.4. Using package scanning Camel also provides a powerful feature that allows for the automatic discovery and initialization of routes in given packages. This is configured by adding tags to the Camel context in your Spring context definition, specifying the packages to be recursively searched for RouteBuilder implementations. To use this feature, add a <package></package> tag specifying a comma-separated list of packages that should be searched. For example, <camelContext> <packageScan> <package>com.foo</package> <excludes>**.*Excluded*</excludes> <includes>**.*</includes> </packageScan> </camelContext> This scans for RouteBuilder classes in the com.foo package and its sub-packages. You can also filter the classes with includes or excludes such as: <camelContext> <packageScan> <package>com.foo</package> <excludes>**.*Special*</excludes> </packageScan> </camelContext> This skips the classes that have Special in the name. Exclude patterns are applied before the include patterns. If no include or exclude patterns are defined, then all the RouteBuilder classes discovered in the packages are returned. In these patterns, ? matches one character, * matches zero or more characters, and ** matches zero or more segments of a fully qualified name. 3.5. Using context scanning You can allow Camel to scan the container context, for example, the Spring ApplicationContext, for route builder instances. This allows you to use the Spring <component-scan> feature and have Camel pick up any RouteBuilder instances that were created by Spring in its scan process.
<!-- enable Spring @Component scan --> <context:component-scan base-package="org.apache.camel.spring.issues.contextscan"/> <camelContext xmlns="http://camel.apache.org/schema/spring"> <!-- and then let Camel use those @Component scanned route builders --> <contextScan/> </camelContext> This allows you to just annotate your routes using the Spring @Component and have those routes included by Camel: @Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from("direct:start") .to("mock:result"); } } You can also use the ANT style for inclusion and exclusion, as mentioned above in the package scan section.
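To illustrate that last point, the following is a minimal sketch of context scanning combined with ANT-style filters. It assumes that <contextScan> accepts the same <includes> and <excludes> child elements as <packageScan>; the patterns shown are only examples.
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <contextScan>
    <!-- skip any @Component RouteBuilder whose class name contains "Excluded" -->
    <excludes>**.*Excluded*</excludes>
    <!-- include every other route builder discovered by the Spring component scan -->
    <includes>**.*</includes>
  </contextScan>
</camelContext>
With this configuration, Camel still picks up the @Component route builders created by Spring, but any class whose fully qualified name matches the exclude pattern is skipped.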
[ "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd \"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:a\"/> <choice> <when> <xpath>USDfoo = 'bar'</xpath> <to uri=\"direct:b\"/> </when> <when> <xpath>USDfoo = 'cheese'</xpath> <to uri=\"direct:c\"/> </when> <otherwise> <to uri=\"direct:d\"/> </otherwise> </choice> </route> </camelContext> </beans>", "<camelContext id=\"camel-A\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:start\"/> <to uri=\"mock:result\"/> </route> </camelContext>", "<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> </camelContext> <bean id=\"jmsConnectionFactory\" class=\"org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp:someserver:61616\"/> </bean> <bean id=\"jms\" class=\"org.apache.camel.component.jms.JmsComponent\"> <property name=\"connectionFactory\"> <bean class=\"org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp:someserver:61616\"/> </bean> </property> </bean>", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <routeBuilder ref=\"myBuilder\"/> </camelContext> <bean id=\"myBuilder\" class=\"org.apache.camel.spring.example.test1.MyRouteBuilder\"/>", "<camelContext> <packageScan> <package>com.foo</package> <excludes>**.*Excluded*</excludes> <includes>**.*</includes> </packageScan> </camelContext>", "<camelContext> <packageScan> <package>com.foo</package> <excludes>**.*Special*</excludes> </packageScan> </camelContext>", "<!-- enable Spring @Component scan --> <context:component-scan base-package=\"org.apache.camel.spring.issues.contextscan\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <!-- and then let Camel use those @Component scanned route builders --> <contextScan/> </camelContext>", "@Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"direct:start\") .to(\"mock:result\"); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/csb-camel-spring-xml
Chapter 3. Feature enhancements
Chapter 3. Feature enhancements Cryostat 2.4 includes feature enhancements that build upon the Cryostat 2.3 offerings. Topology view enhancements for displaying Cryostat agents In the Topology view of the Cryostat web console, Cryostat agents are now displayed with the Cryostat icon. This helps to differentiate the Cryostat agents from JMX targets, which are still displayed with the OpenJDK Duke icon. Topology and Dashboard views display additional target details The Topology and Dashboard views of the Cryostat web console now display the following additional types of information for your target JVMs: Operating system name Total physical memory Total swap space Class path Library paths Input arguments System properties Note You can view this information in either of the following ways: In the Topology panel, click the twistie ( > ) icons to expand down to a target endpoint and then click the Details tab for this endpoint. In the Dashboard panel, add the Target JVM Details card to the dashboard. Automated Analysis enhancements for JFR recordings In the Cryostat 2.4 web console, the Automated Analysis information for a JFR recording reuses components from the Automated Analysis dashboard card. This supersedes the behavior in previous releases, where Automated Analysis data provided a more limited set of information. Note To view Automated Analysis information for a JFR recording, open any panel that includes a list of recordings and click the twistie ( > ) icon next to the recording that you are interested in. Parameter enhancements for restarting JFR recordings When creating a JFR flight recording, the Cryostat server typically rejects requests to create a recording if another recording with the same name already exists on the same target. Clients can modify this behavior by specifying a request parameter to restart the recording. In previous releases, clients could include a restart parameter in requests to create a recording. The restart parameter accepted true or false values and was set to false by default. If the client request included a restart=false setting, the Cryostat server rejected the request if another recording with the same name already existed. If the client request included a restart=true setting, the Cryostat server would stop, delete, and recreate any existing recording with the same name. Cryostat 2.4 introduces a new replace parameter, which supersedes the restart parameter that was available in previous releases. The replace parameter accepts always, never, or stopped values. A setting of replace=always matches the old restart=true behavior, and a setting of replace=never matches the old restart=false behavior. The replace=stopped setting provides a third type of behavior by instructing the server to recreate a recording with the same name only if the existing recording is in a STOPPED state. Otherwise, if the client request includes a replace=stopped setting but an existing recording with the same name is in another state such as RUNNING, the server rejects the request. This enhancement helps to avoid situations where enabling automated rules to recreate JFR recordings might lead to the loss of non-archived data for recordings that are not in a STOPPED state. Enhanced error and warning messages Cryostat 2.4 includes enhancements to various error and warning messages.
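As a rough illustration of the new parameter, the following curl sketch shows how a client might supply replace when creating a recording over HTTP. The endpoint path, hostname, credentials, target identifier, and event template used here are illustrative placeholders rather than confirmed values; consult the Cryostat API documentation for the exact request format.
# hypothetical request: create a recording, replacing an existing one of the same name only if it is STOPPED
curl -k -u <username>:<password> \
  -X POST \
  -F recordingName=myRecording \
  -F events=template=Continuous \
  -F replace=stopped \
  https://<cryostat-host>/api/v1/targets/<target_id>/recordings
In this sketch, sending replace=always would recreate the recording unconditionally, and replace=never would cause the request to be rejected whenever a recording with the same name already exists.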
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.4/cryostat-feature-enhancements_cryostat
Preface
Preface Because the Metro disaster recovery stretch cluster is designed to provide resiliency in the face of a complete or partial site outage, it is important to understand the different methods of recovery for applications and their storage. How an application is architected determines how soon it becomes available again on the active zone, and the recovery method to use depends on the nature of the site outage. The different methods of recovery are as follows: Recovery for zone-aware HA applications with RWX storage. Recovery for HA applications with RWX storage. Recovery for applications with RWO storage. Recovery for StatefulSet pods.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/recovering_a_metro-dr_stretch_cluster/pr01
Installing IBM Cloud Bare Metal (Classic)
Installing IBM Cloud Bare Metal (Classic) OpenShift Container Platform 4.18 Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_ibm_cloud_bare_metal_classic/index
Chapter 7. Scaling storage nodes
Chapter 7. Scaling storage nodes To scale the storage capacity of OpenShift Container Storage, you can do either of the following: Scale up storage nodes - Add storage capacity to the existing OpenShift Container Storage worker nodes Scale out storage nodes - Add new worker nodes containing storage capacity 7.1. Requirements for scaling storage nodes Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Container Storage instance: Platform requirements Storage device requirements Dynamic storage devices Capacity planning Warning Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support. 7.2. Scaling up storage by adding capacity to your OpenShift Container Storage nodes on Red Hat OpenStack Platform infrastructure Use this procedure to add storage capacity and performance to your configured Red Hat OpenShift Container Storage worker nodes. Prerequisites A running OpenShift Container Storage Platform. Administrative privileges on the OpenShift Web Console. To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating a storage class for details. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators. Click OpenShift Container Storage Operator. Click the Storage Cluster tab. The visible list should have only one item. Click (...) on the far right to extend the options menu. Select Add Capacity from the options menu. Select a storage class. The storage class should be set to standard if you are using the default storage class generated during deployment. If you have created other storage classes, select whichever is appropriate. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Container Storage uses a replica count of 3. Click Add and wait for the cluster state to change to Ready. Verification steps Navigate to the Overview Persistent Storage tab, then check the Capacity breakdown card. Note that the capacity increases based on your selections. Verify that the new OSDs and their corresponding new PVCs are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the node(s) where the new OSD pod(s) are running. For example: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s).
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s) Important Cluster reduction is not currently supported, regardless of whether reduction would be done by removing nodes or OSDs. 7.3. Scaling out storage capacity by adding new nodes To scale out storage capacity, you need to perform the following: Add a new node to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs, which is the increment of 3 OSDs of the capacity selected during initial configuration. Verify that the new node is added successfully Scale up the storage capacity after the node is added Prerequisites You must be logged into OpenShift Container Platform (RHOCP) cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Container Storage label to the new node. For the new node, Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage and click Save . Note It is recommended to add 3 nodes each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps To verify that the new node is added, see Verifying the addition of a new node . 7.3.1. Verifying the addition of a new node Execute the following command and verify that the new node is present in the output: Click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.3.2. Scaling up storage capacity After you add a new node to OpenShift Container Storage, you must scale up the storage capacity as described in Scaling up storage by adding capacity .
[ "oc get -o=custom-columns=NODE:.spec.nodeName pod/<OSD pod name>", "get -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "oc debug node/<node name> chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/scaling-storage-nodes_osp
Chapter 1. Getting started with Data Grid Server
Chapter 1. Getting started with Data Grid Server Install the server distribution, create a user, and start your first Data Grid cluster. Ansible collection Automate installation of Data Grid clusters with our Ansible collection that optionally includes Keycloak caches and cross-site replication configuration. The Ansible collection also lets you inject Data Grid caches into the static configuration for each server instance during installation. The Ansible collection for Data Grid is available from the Red Hat Automation Hub . 1.1. Data Grid Server requirements Data Grid Server requires a Java Virtual Machine. See the Data Grid Supported Configurations for details on supported versions. 1.2. Downloading Data Grid Server distributions The Data Grid Server distribution is an archive of Java libraries ( JAR files) and configuration files. Procedure Access the Red Hat customer portal. Download Red Hat Data Grid 8.5 Server from the software downloads section . Run the md5sum or sha256sum command with the server download archive as the argument, for example: Compare with the MD5 or SHA-256 checksum value on the Data Grid Software Details page. Reference Data Grid Server README describes the contents of the server distribution. 1.3. Installing Data Grid Server Install the Data Grid Server distribution on a host system. Prerequisites Download a Data Grid Server distribution archive. Procedure Use any appropriate tool to extract the Data Grid Server archive to the host filesystem. The resulting directory is your USDRHDG_HOME . 1.4. JVM settings for Data Grid You can define Java Virtual Machine (JVM) settings for Data Grid either by editing the server.conf configuration file, or by setting the JAVA_OPTS environment variable . Important If you are running Data Grid in a container do not set Xmx or Xms because the values are automatically calculated from the container settings to be 50% of the container size. Editing the configuration file You can edit the required values in the server.conf configuration file. For example, to set the options to pass to the JVM, edit the following lines: You can uncomment the existing example settings as well. For example, to configure Java Platform Debugger Architecture (JPDA) settings for remote socket debugging, update the file as follows: Additionally, you can add more settings to JAVA_OPTS like this: Setting an environment variable You can override the settings in server.conf configuration file by setting the JAVA_OPTS environment variable. For example: Linux Microsoft Windows 1.5. Starting Data Grid Server Run Data Grid Server instances in a Java Virtual Machine (JVM) on any supported host. Prerequisites Download and install the server distribution. Procedure Open a terminal in USDRHDG_HOME . Start Data Grid Server instances with the server script. Linux Microsoft Windows Data Grid Server is running successfully when it logs the following messages: Verification Open 127.0.0.1:11222/console/ in any browser. Enter your credentials at the prompt and continue to Data Grid Console. 1.6. Passing Data Grid Server configuration at startup Specify custom configuration when you start Data Grid Server. Data Grid Server can parse multiple configuration files that you overlay on startup with the --server-config argument. You can use as many configuration overlay files as required, in any order. Configuration overlay files: Must be valid Data Grid configuration and contain the root server element or field. 
Do not need to be full configuration as long as your combination of overlay files results in a full configuration. Important Data Grid Server does not detect conflicting configuration between overlay files. Each overlay file overwrites any conflicting configuration in the preceding configuration. Note If you pass cache configuration to Data Grid Server on startup it does not dynamically create those cache across the cluster. You must manually propagate caches to each node. Additionally, cache configuration that you pass to Data Grid Server on startup must include the infinispan and cache-container elements. Prerequisites Download and install the server distribution. Add custom server configuration to the server/conf directory of your Data Grid Server installation. Procedure Open a terminal in USDRHDG_HOME . Specify one or more configuration files with the --server-config= or -c argument, for example: 1.7. Creating Data Grid users Add credentials to authenticate with Data Grid Server deployments through Hot Rod and REST endpoints. Before you can access the Data Grid Console or perform cache operations you must create at least one user with the Data Grid command line interface (CLI). Tip Data Grid enforces security authorization with role-based access control (RBAC). Create an admin user the first time you add credentials to gain full ADMIN permissions to your Data Grid deployment. Prerequisites Download and install Data Grid Server. Procedure Open a terminal in USDRHDG_HOME . Create an admin user with the user create command. bin/cli.sh user create admin -p changeme Tip Run help user from a CLI session to get complete command details. Verification Open user.properties and confirm the user exists. Note Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster. 1.7.1. Granting roles to users Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources. Tip Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions. Prerequisites Have ADMIN permissions for Data Grid. Create Data Grid users. Procedure Create a CLI connection to Data Grid. Assign roles to users with the user roles grant command, for example: Verification List roles that you grant to users with the user roles ls command. 1.7.2. Adding users to groups Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role. Note You use groups as part of a property realm in the Data Grid Server configuration. Each group is a special type of user that also requires a username and password. Prerequisites Have ADMIN permissions for Data Grid. Create Data Grid users. Procedure Create a CLI connection to Data Grid. Use the user create command to create a group. Specify a group name with the --groups argument. Set a username and password for the group. List groups. Grant a role to the group. List roles for the group. Add users to the group one at a time. Verification Open groups.properties and confirm the group exists. 1.7.3. Data Grid user roles and permissions Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources. 
Role Permissions Description admin ALL Superuser with all permissions including control of the Cache Manager lifecycle. deployer ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE Can create and delete Data Grid resources in addition to application permissions. application ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. observer ALL_READ, MONITOR Has read access to Data Grid resources in addition to monitor permissions. monitor MONITOR Can view statistics via JMX and the metrics endpoint. Additional resources org.infinispan.security.AuthorizationPermission Enum Data Grid configuration schema reference 1.8. Verifying cluster views Data Grid Server instances on the same network automatically discover each other and form clusters. Complete this procedure to observe cluster discovery with the MPING protocol in the default TCP stack with locally running Data Grid Server instances. If you want to adjust cluster transport for custom network requirements, see the documentation for setting up Data Grid clusters. Note This procedure is intended to demonstrate the principle of cluster discovery and is not intended for production environments. Doing things like specifying a port offset on the command line is not a reliable way to configure cluster transport for production. Prerequisites Have one instance of Data Grid Server running. Procedure Open a terminal in USDRHDG_HOME . Copy the root directory to server2 . Specify a port offset and the server2 directory. Verification You can view cluster membership in the console at 127.0.0.1:11222/console/cluster-membership . Data Grid also logs the following messages when nodes join clusters: 1.9. Shutting down Data Grid Server Stop individually running servers or bring down clusters gracefully. Procedure Create a CLI connection to Data Grid. Shut down Data Grid Server in one of the following ways: Stop all nodes in a cluster with the shutdown cluster command, for example: This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache. Stop individual server instances with the shutdown server command and the server hostname, for example: Important The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time. Tip Run help shutdown for more details about using the command. Verification Data Grid logs the following messages when you shut down servers: 1.9.1. Shutdown and restart of Data Grid clusters Prevent data loss and ensure consistency of your cluster by properly shutting down and restarting nodes. Cluster shutdown Data Grid recommends using the shutdown cluster command to stop all nodes in a cluster while saving cluster state and persisting all data in the cache. You can use the shutdown cluster command also for clusters with a single node. When you bring Data Grid clusters back online, all nodes and caches in the cluster will be unavailable until all nodes rejoin. To prevent inconsistencies or data loss, Data Grid restricts access to the data stored in the cluster and modifications of the cluster state until the cluster is fully operational again. Additionally, Data Grid disables cluster rebalancing and prevents local cache stores purging on startup. 
During the cluster recovery process, the coordinator node logs messages for each new node joining, indicating which nodes are available and which are still missing. Other nodes in the Data Grid cluster have the view from the time they join. You can monitor availability of caches using the Data Grid Console or REST API. However, in cases where waiting for all nodes is not necessary nor desired, it is possible to set a cache available with the current topology. This approach is possible through the CLI, see below, or the REST API. Important Manually installing a topology can lead to data loss, only perform this operation if the initial topology cannot be recreated. Server shutdown After using the shutdown server command to bring nodes down, the first node to come back online will be available immediately without waiting for other members. The remaining nodes join the cluster immediately, triggering state transfer but loading the local persistence first, which might lead to stale entries. Local cache stores configured to purge on startup will be emptied when the server starts. Local cache stores marked as purge=false will be available after a server restarts but might contain stale entries. If you shutdown clustered nodes with the shutdown server command, you must restart each server in reverse order to avoid potential issues related to data loss and stale entries in the cache. For example, if you shutdown server1 and then shutdown server2 , you should first start server2 and then start server1 . However, restarting clustered nodes in reverse order does not completely prevent data loss and stale entries. 1.10. Data Grid Server installation directory structure Data Grid Server uses the following folders on the host filesystem under USDRHDG_HOME : Tip See the Data Grid Server README for descriptions of the each folder in your USDRHDG_HOME directory as well as system properties you can use to customize the filesystem. 1.10.1. Server root directory Apart from resources in the bin and docs folders, the only folder under USDRHDG_HOME that you should interact with is the server root directory, which is named server by default. You can create multiple nodes under the same USDRHDG_HOME directory or in different directories, but each Data Grid Server instance must have its own server root directory. For example, a cluster of 5 nodes could have the following server root directories on the filesystem: Each server root directory should contain the following folders: server/conf Holds infinispan.xml configuration files for a Data Grid Server instance. Data Grid separates configuration into two layers: Dynamic Create mutable cache configurations for data scalability. Data Grid Server permanently saves the caches you create at runtime along with the cluster state that is distributed across nodes. Each joining node receives a complete cluster state that Data Grid Server synchronizes across all nodes whenever changes occur. Static Add configuration to infinispan.xml for underlying server mechanisms such as cluster transport, security, and shared datasources. server/data Provides internal storage that Data Grid Server uses to maintain cluster state. Important Never directly delete or modify content in server/data . Modifying files such as caches.xml while the server is running can cause corruption. Deleting content can result in an incorrect state, which means clusters cannot restart after shutdown. 
server/lib Contains extension JAR files for custom filters, custom event listeners, JDBC drivers, custom ServerTask implementations, and so on. server/log Holds Data Grid Server log files. Additional resources Data Grid Server README What is stored in the <server>/data directory used by a RHDG server (Red Hat Knowledgebase)
[ "sha256sum jboss-datagrid-USD{version}-server.zip", "unzip redhat-datagrid-8.5.2-server.zip", "JAVA_OPTS=\"-Xms64m -Xmx512m -XX:MetaspaceSize=64M -Djava.net.preferIPv4Stack=true\" JAVA_OPTS=\"USDJAVA_OPTS -Djava.awt.headless=true\"", "Sample JPDA settings for remote socket debugging JAVA_OPTS=\"USDJAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n\"", "JAVA_OPTS=\"USDJAVA_OPTS <key_1>=<value_1>, ..., <key_N>=<value_N> \"", "export JAVA_OPTS=\"-Xmx1024M\"", "set JAVA_OPTS=\"-Xmx1024M\"", "bin/server.sh", "bin\\server.bat", "ISPN080004: Protocol SINGLE_PORT listening on 127.0.0.1:11222 ISPN080034: Server '...' listening on http://127.0.0.1:11222 ISPN080001: Data Grid Server <version> started in <mm>ms", "bin/server.sh -c infinispan.xml -c datasources.yaml -c security-realms.json", "bin/cli.sh user create admin -p changeme", "cat server/conf/users.properties admin=scram-sha-1\\:BYGcIAwvf6b", "user roles grant --roles=deployer katie", "user roles ls katie [\"deployer\"]", "user create --groups=developers developers -p changeme", "user ls --groups", "user roles grant --roles=application developers", "user roles ls developers", "user groups john --groups=developers", "cat server/conf/groups.properties", "cp -r server server2", "bin/server.sh -o 100 -s server2", "INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>) ISPN000094: Received new cluster view for channel cluster: [<server_hostname>|3] (2) [<server_hostname>, <server2_hostname>] INFO [org.infinispan.CLUSTER] (jgroups-11,<server_hostname>) ISPN100000: Node <server2_hostname> joined the cluster", "shutdown cluster", "shutdown server <my_server01>", "ISPN080002: Data Grid Server stopping ISPN000080: Disconnecting JGroups channel cluster ISPN000390: Persisted state, version=<USDversion> timestamp=YYYY-MM-DDTHH:MM:SS ISPN080003: Data Grid Server stopped", "├── bin ├── boot ├── docs ├── lib ├── server └── static", "├── server ├── server1 ├── server2 ├── server3 └── server4", "├── server │ ├── conf │ ├── data │ ├── lib │ └── log" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/server-getting-started
Chapter 9. Managing Gluster volumes using the Web Console
Chapter 9. Managing Gluster volumes using the Web Console 9.1. Accessing the Gluster Management dashboard The Gluster Management dashboard lets you view information about the currently configured Gluster volumes in your hyperconverged cluster. To access the Gluster Management dashboard: Click Virtualization Hosted Engine to reach the Hosted Engine dashboard. The Hosted Engine dashboard in the Web Console Click Manage Gluster to reach the Gluster Management dashboard. The Gluster Management dashboard in the Web Console 9.2. Expanding volume from Web console Follow these instructions to use the Web Console to expand your volume. Prerequisites Verify that your scaling plans are supported: Requirements for scaling . Procedure Log in to the Web Console. Click Virtualization Hosted Engine and then click Manage Gluster . Click Expand volume button beside the volume you want to expand. The expand volume data page opens. On the Hosts tab, verify the Host details and click . On the Volumes tab, specify the details of the brick path to be configured for the new disk. On the Bricks tab, specify the details of the disks to be used to expand the Gluster volume. On the Review tab, check the generated file for any problems. Here, Enable debug logging , runs ansible-playbook in verbose mode, and provides more logs to add information. When you are satisfied, click Deploy . 9.3. Expanding volume from Red Hat Virtualization Manager Follow this section to expand an existing volume across new bricks on new hyperconverged nodes. Prerequisites Verify that your scaling plans are supported: Requirements for scaling . Install three physical machines to serve as the new hyperconverged nodes. Follow the instructions in Installing hyperconverged hosts . Configure key-based SSH authentication without a password. Configure this from the node that is running the Web Console to all new nodes, and from the first new node to all other new nodes. Important RHHI for Virtualization expects key-based SSH authentication without a password between these nodes for both IP addresses and FQDNs. Ensure that you configure key-based SSH authentication between these machines for the IP address and FQDN of all storage and management network interfaces. Follow the instructions in Using key pairs instead of passwords for SSH authentication to configure key based authentication without a password. Procedure Create new bricks Create the bricks on the servers you want to expand your volume across by following the instructions in Creating bricks using ansible or Creating bricks above a VDO layer using ansible depending on your requirements. Important If the path: defined does not begin with /rhgs the bricks are not detected automatically by the Administration Portal. Synchronize the host storage after running the create_brick.yml playbook to synchronize the new bricks to the Administration Portal. Click Compute Hosts and select the host. Click Storage Devices . Click Sync . Repeat for each host that has new bricks. Add new bricks to the volume Log in to RHV Administration Console. Click Storage Volumes and select the volume to expand. Click the Bricks tab. Click Add . The Add Bricks window opens. Add new bricks. Select the brick host from the Host dropdown menu. Select the brick to add from the Brick Directory dropdown menu and click Add . When all bricks are listed, click OK to add bricks to the volume. The volume automatically syncs the new bricks. 9.4. 
Expanding the hyperconverged cluster by adding a new volume on new nodes using the Web Console Follow these instructions to use the Web Console to expand your hyperconverged cluster with a new volume on new nodes. Prerequisites Verify that your scaling plans are supported: Requirements for scaling . Install three physical machines to serve as the new hyperconverged nodes. Follow the instructions in Installing hyperconverged hosts . Configure key-based SSH authentication without a password. Configure this from the node that is running the Web Console to all new nodes, and from the first new node to all other new nodes. Important RHHI for Virtualization expects key-based SSH authentication without a password between these nodes for both IP addresses and FQDNs. Ensure that you configure key-based SSH authentication between these machines for the IP address and FQDN of all storage and management network interfaces. Follow the instructions in Using key pairs instead of passwords for SSH authentication to configure key based authentication without a password. Procedure Log in to the Web Console. Click Virtualization Hosted Engine and then click Manage Gluster . Click Expand Cluster . The Gluster Deployment window opens. On the Hosts tab, enter the FQDN or IP address of the new hyperconverged nodes and click . On the Volumes tab, specify the details of the volume you want to create. On the Bricks tab, specify the details of the disks to be used to create the Gluster volume. On the Review tab, check the generated file for any problems. When you are satisfied, click Deploy . Deployment takes some time to complete. The following screen appears when the cluster has been successfully expanded. 9.4.1. Configure additional hyperconverged hosts If your environment uses IPv6 addresses, or if you did not specify additional hyperconverged hosts as part of Configure Red Hat Gluster Storage for Hosted Engine using the Web Console , follow these steps in the Administration Portal for each of the other hyperconverged hosts. Click Compute Hosts and then click New to open the New Host window. Provide the Name , Hostname , and Password for the host that you want to manage. Under Advanced Parameters , uncheck the Automatically configure host firewall checkbox, as firewall rules are already configured by the deployment process. In the Hosted Engine tab of the New Host dialog, set the value of Choose hosted engine deployment action to Deploy . This ensures that the hosted engine can run on the new host. Click OK . Attach the gluster network to all remaining hosts Click the name of the newly added host to go to the host page. Click the Network Interfaces subtab and then click Setup Host Networks . Drag and drop the newly created network to the correct interface. Ensure that the Verify connectivity checkbox is checked. Ensure that the Save network configuration checkbox is checked. Click OK to save. In the General subtab for this host, verify that the value of Hosted Engine HA is Active , with a positive integer as a score. Important If Score is listed as N/A , you may have forgotten to select the deploy action for Choose hosted engine deployment action . Follow the steps in Reinstalling a hyperconverged host in Maintaining Red Hat Hyperconverged Infrastructure for Virtualization to reinstall the host with the deploy action. Verify the health of the network Click the Network Interfaces tab and check the state of the host's network. 
If the network interface enters an "Out of sync" state or does not have an IP Address, click Management Refresh Capabilities . See the Red Hat Virtualization 4.4 Self-Hosted Engine Guide for further details: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/self-hosted_engine_guide/chap-installing_additional_hosts_to_a_self-hosted_environment 9.5. Creating an additional Gluster volume using the Web Console Follow these instructions to use the Web Console to create a new Red Hat Gluster Storage volume using raw disks that are available on hyperconverged hosts in your cluster. Prerequisites Verify that the raw disk drives you plan to use for the new volume are visible under the Drives section of the Storage Dashboard , and do not have any file systems listed on their Drive Overview page. Procedure Log in to the Web Console. Click Virtualization Hosted Engine and then click Manage Gluster . Click Create Volume . The Create Volume window opens. On the Hosts tab, select three different hyperconverged hosts with unused disks and click . On the Volumes tab, specify the details of the volume you want to create and click . On the Bricks tab, specify the details of the disks to be used to create the volume and click . On the Review tab, check the generated configuration file for any incorrect information. When you are satisfied, click Deploy . Successfully created volume is displayed when deployment completes successfully.
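The expansion procedures in this chapter require key-based SSH authentication without a password from the node running the Web Console to every new node, for both IP addresses and FQDNs. A minimal sketch using standard OpenSSH tools follows; the hostname and IP address are placeholders for your own nodes.

# Generate a key pair without a passphrase on the node running the Web Console (skip if one already exists)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# Copy the public key to each new node, once for the FQDN and once for the IP address of every storage and management interface
ssh-copy-id root@new-node1.example.com
ssh-copy-id root@192.0.2.11

Repeat the ssh-copy-id step from the first new node to all other new nodes, as described in the prerequisites.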
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/managing_red_hat_gluster_storage_using_the_web_console/assembly-cockpit-gluster_mgmt
14.8. Actions
14.8. Actions 14.8.1. Install VDSM Action Install VDSM and related software on the host. Example 14.28. Action to install VDSM on a virtualization host 14.8.2. Activate Host Action Activate the host for use, such as running virtual machines. Example 14.29. Action to activate a host 14.8.3. Host Network Setup Action Configure multiple network settings on a host. The setupnetworks action can be used for complex network configuration such as moving a network from one network interface to another. Example 14.30. Action to edit host network configuration This action updates all specified host network resources with standard NIC elements. The request includes additional elements specified in the following table. Table 14.14. Elements for multiple host network interface setup Element Type Description modified_bonds complex Creates or updates bonds. Each host_nic element contains standard bonding elements. See Section 14.7.2.2, "Bonded Interfaces" . removed_bonds complex An ID list of bonds to remove. modified_network_attachments complex Adds or updates network attachments on the host. Each network_attachment element contains standard host network_attachment elements. See Section 14.7.1, "Host Network Attachments Sub-Collection" . Changing the host_nic ID moves the network to a different network interface card. synchronized_network_attachments complex An ID list of out-of-sync network attachments to synchronize with the logical network definition of the data center. removed_network_attachments complex An ID list of network attachments to remove. modified_labels complex Creates or modifies labels. Each label element contains a label id (when creating a label) and a host_nic identified by a name or ID. Changing the host_nic ID moves the label to a different network interface card. removed_labels complex An ID list of labels to remove. checkConnectivity Boolean Set to true to verify connectivity between the host and the Red Hat Virtualization Manager. If the connectivity is lost, Red Hat Virtualization Manager reverts the settings. connectivityTimeout integer Defines the timeout for loss of connectivity. 14.8.4. Fence Host Action An API user controls a host's power management device with the fence action. The capabilities lists available fence_type options. Example 14.31. Action to fence a host 14.8.5. Deactivate Host Action Deactivate the host to perform maintenance tasks. Example 14.32. Action to deactivate a host 14.8.6. Host iSCSI Login Action The iscsilogin action enables a host to login to an iSCSI target. Logging into a target makes the contained LUNs available in the host_storage collection. Example 14.33. Action to enable a host to login to iSCSI target 14.8.7. Host iSCSI Discover Action The iscsidiscover action enables an iSCSI portal to be queried for its list of targets. Example 14.34. Action to query a list of targets for iSCSI portal 14.8.8. Commit Host Network Configuration Action An API user commits the network configuration to persist a host network interface attachment or detachment, or persist the creation and deletion of a bonded interface. Example 14.35. Action to commit network configuration Important Networking configuration is only committed after the Manager has established that host connectivity is not lost as a result of the configuration changes. If host connectivity is lost, the host requires a reboot and automatically reverts to the networking configuration. 14.8.9. Setting SPM Manually set a host as the Storage Pool Manager (SPM). Example 14.36. Action to Set Host as SPM
[ "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/install HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <root_password>p@55w0Rd!</root_password> </action>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/activate HTTP/1.1 Accept: application/xml Content-type: application/xml <action/>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/setupnetworks HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <modified_network_attachments> <network_attachment id=\"41561e1c-c653-4b45-b9c9-126630e8e3b9\"> <host_nic id=\"857a46d3-5f64-68bd-f456-c70de5b2d569\"/> </network_attachment< <network_attachment id=\"3c3f442f-948b-4cdc-9a48-89bb0593cfbd\"> <network id=\"00000000-0000-0000-0000-000000000010\"/> <ip address=\"10.35.1.247\" netmask=\"255.255.254.0\" gateway=\"10.35.1.254\"/> <properties> <property> <name>bridge_opts</name> <value> forward_delay=1500 group_fwd_mask=0x0 multicast_snooping=1 </value> </property> </properties> </network_attachment> </modified_network_attachments> <synchronized_network_attachments> <network_attachment id=\"3c3f442f-948b-4cdc-9a48-89bb0593cfbd\"> </synchronized_network_attachments> <removed_network_attachments> <network_attachment id=\"7f456dae-c57f-35d5-55a4-20b74dc53af9\"> </removed_network_attachments> <modified_bonds> <host_nic id=\"a56b212d-2bc4-4120-9136-53be6cacb39a\"> <bonding> <slaves> <host_nic id=\"75ac21f7-4aa3-405a-a022-341e5f525b85\"> <host_nic id=\"f3dda04c-1233-41af-a111-74327b876487\"> </slaves> </bonding> </host_nic> </modified_bonds> <removed_bonds> <host_nic id=\"36ab5c7f-647a-bc64-f5e7-ba5d74f8e4ba\"> </removed_bonds> <modified_labels> <label id=\"Label002\"> <host_nic id=\"857a46d3-5f64-68bd-f456-c70de5b2d569\"/> </label> <label> <host_nic id=\"a56b212d-2bc4-4120-9136-53be6cacb39a\"/> <label id=\"Label003/> </label> </modified_labels> <removed_labels> <label id=\"Label001\"> </removed_labels> <checkConnectivity>true</checkConnectivity> <connectivityTimeout>60</connectivityTimeout> </action>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/fence Accept: application/xml Content-Type: application/xml <action> <fence_type>start</fence_type> </action>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/deactivate HTTP/1.1 Accept: application/xml Content-type: application/xml <action/>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/iscsilogin HTTP/1.1 Accept: application/xml Content-Type: application/xml <action> <iscsi> <address>mysan.example.com</address> <target>iqn.2009-08.com.example:mysan.foobar</target> <username>jimmy</username> <password>s3kr37</password> </iscsi> </action>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/iscsidiscover HTTP/1.1 Accept: application/xml Content-Type: application/xml <action> <iscsi> <address>mysan.example.com</address> <port>3260</port> </iscsi> </action>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/commitnetconfig HTTP/1.1 Accept: application/xml Content-type: application/xml <action/>", "POST /ovirt-engine/api/hosts/2ab5e1da-b726-4274-bbf7-0a42b16a0fc3/forceselectspm HTTP/1.1 Accept: application/xml Content-type: application/xml <action/>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-Actions1
Chapter 4. Configuring Hammer
Chapter 4. Configuring Hammer The default location for global hammer configuration is: /etc/hammer/cli_config.yml for general hammer settings /etc/hammer/cli.modules.d/ for CLI module configuration files You can set user specific directives for hammer (in ~/.hammer/cli_config.yml ) as well as for CLI modules (in respective .yml files under ~/.hammer/cli.modules.d/ ). To see the order in which configuration files are loaded, as well as versions of loaded modules, use: Note Loading configuration for many CLI modules can slow down the execution of hammer commands. In such a case, consider disabling CLI modules that are not regularly used. Apart from saving credentials as described in Chapter 3, Hammer authentication , you can set several other options in the ~/.hammer/ configuration directory. For example, you can change the default log level and set log rotation with the following directives in ~/.hammer/cli_config.yml . These directives affect only the current user and are not applied globally. Similarly, you can configure user interface settings. For example, set the number of entries displayed per request in the Hammer output by changing the following line: This setting is an equivalent of the --per-page Hammer option. 4.1. Setting a default organization and location context Many hammer commands are organization specific. You can set a default organization and location for hammer commands so that you do not have to specify them every time with the --organization and --location options. Specifying a default organization is useful when you mostly manage a single organization, as it makes your commands shorter. However, when you switch to a different organization, you must use hammer with the --organization option to specify it. Procedure Set a default organization: You can find the name of your organization with the hammer organization list command. Optional: Set a default location: You can find the name of your location with the hammer location list command. Verification Review the currently specified default settings: 4.2. Increasing the logging level for Hammer You can find the log in ~/.hammer/log/hammer.log . Procedure In /etc/hammer/cli_config.yml , set the :log_level: option to debug :
[ "hammer -d --version", ":log_level: 'warning' :log_size: 5 #in MB", ":per_page: 30", "hammer defaults add --param-name organization --param-value \"Your_Organization\"", "hammer defaults add --param-name location --param-value \"Your_Location\"", "hammer defaults list", ":log_level: 'debug'" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/configuring-hammer
7.5. Clustered Listeners
7.5. Clustered Listeners Clustered listeners allow listeners to be used in a distributed cache configuration. In a distributed cache environment, registered local listeners are only notified of events that are local to the node where the event has occurred. Clustered listeners resolve this issue by allowing a single listener to receive any write notification that occurs in the cluster, regardless of where the event occurred. As a result, clustered listeners perform slower than non-clustered listeners, which only provide event notifications for the node on which the event occurs. When using clustered listeners, client applications are notified when an entry is added, updated, expired, or deleted in a particular cache. The event is cluster-wide so that client applications can access the event regardless of the node on which the application resides or connects with. The event will always be triggered on the node where the listener was registered, while disregarding where the cache update originated. 23155%2C+Developer+Guide-6.630-06-2017+15%3A00%3A55JBoss+Data+Grid+6Documentation6.6.1 Report a bug 7.5.1. Configuring Clustered Listeners In the following use case, listener stores events as it receives them. Procedure 7.1. Clustered Listener Configuration Clustered listeners are enabled by annotating the @Listener class with clustered=true . The following methods are annotated to allow client applications to be notified when entries are added, modified, expired, or removed. @CacheEntryCreated @CacheEntryModified @CacheEntryExpired @CacheEntryRemoved The listener is registered with a cache, with the option of passing on a filter or converter. The following limitations occur when using clustered listeners, that do not apply to non-clustered listeners: A cluster listener can only listen to entries that are created, modified, expired, or removed. No other events are listened to by a clustered listener. Only post events are sent to a clustered listener, pre events are ignored. Report a bug 7.5.2. The Cache Listener API Clustered listeners can be added on top of the existing @CacheListener API via the addListener method. Example 7.3. The Cache Listener API The Cache API The local or clustered listener can be registered with the cache.addListener method, and is active until one of the following events occur. The listener is explicitly unregistered by invoking cache.removeListener . The node on which the listener was registered crashes. Listener Annotation The listener annotation is enhanced with two attributes: clustered() :This attribute defines whether the annotated listener is clustered or not. Note that clustered listeners can only be notified for @CacheEntryRemoved , @CacheEntryCreated , @CacheEntryExpired , and @CacheEntryModified events. This attribute is false by default. includeCurrentState() : This attribute applies to clustered listeners only, and is false by default. When set to true , the entire existing state within the cluster is evaluated. When being registered, a listener will immediately be sent a CacheCreatedEvent for every entry in the cache. oldValue and oldMetadata The oldValue and oldMetadata values are extra methods on the accept method of CacheEventFilter and CacheEventConverter classes. They values are provided to any listener, including local listeners. For more information about these values, see the JBoss Data Grid API Documentation . EventType The EventType includes the type of event, whether it was a retry, and if it was a pre or post event. 
When using clustered listeners, the order in which the cache is updated is reflected in the sequence of notifications received. The clustered listener does not guarantee that an event is sent only once. The listener implementation must be idempotent in order to prevent situations where the same event is sent more than once. Implementors can expect singularity to be honored for stable clusters and outside of the time span in which synthetic events are generated as a result of includeCurrentState . Report a bug 7.5.3. Clustered Listener Example The following use case demonstrates a listener that wants to know when orders are generated that have a destination of New York, NY. The listener requires a Filter that filters all orders that come in and out of New York. The listener also requires a Converter as it does not require the entire order, only the date it is to be delivered. Example 7.4. Use Case: Filtering and Converting the New York orders Report a bug 7.5.4. Optimized Cache Filter Converter The example provided in Section 7.5.3, "Clustered Listener Example" could use the optimized CacheEventFilterConverter , in order to perform the filtering and converting of results into one step. The CacheEventFilterConverter is an optimization that allows the event filter and conversion to be performed in one step. This can be used when an event filter and converter are most efficiently used as the same object, composing the filtering and conversion in the same method. This can only be used in situations where your conversion will not return a null value, as a returned value of null indicates that the value did not pass the filter. To convert a null value, use the CacheEventFilter and the CacheEventConverter interfaces independently. The following is an example of the New York orders use case using the CacheEventFilterConverter : Example 7.5. CacheEventFilterConverter When registering the listener, provide the FilterConverter as both arguments to the filter and converter: Report a bug
[ "@Listener(clustered = true) protected static class ClusterListener { List<CacheEntryEvent> events = Collections.synchronizedList(new ArrayList<CacheEntryEvent>()); @CacheEntryCreated @CacheEntryModified @CacheEntryExpired @CacheEntryRemoved public void onCacheEvent(CacheEntryEvent event) { log.debugf(\"Adding new cluster event %s\", event); events.add(event); } } public void addClusterListener(Cache<?, ?> cache) { ClusterListener clusterListener = new ClusterListener(); cache.addListener(clusterListener); }", "cache.addListener(Object listener, Filter filter, Converter converter); public @interface Listener { boolean clustered() default false; boolean includeCurrentState() default false; boolean sync() default true; } interface CacheEventFilter<K,V> { public boolean accept(K key, V oldValue, Metadata oldMetadata, V newValue, Metadata newMetadata, EventType eventType); } interface CacheEventConverter<K,V,C> { public C convert(K key, V oldValue, Metadata oldMetadata, V newValue, Metadata newMetadata, EventType eventType); }", "class CityStateFilter implements CacheEventFilter<String, Order> { private final String state; private final String city; public boolean accept(String orderId, Order oldOrder, Metadata oldMetadata, Order newOrder, Metadata newMetadata, EventType eventType) { switch (eventType.getType()) { // Only send update if the order is going to our city case Type.CACHE_ENTRY_CREATED: return city.equals(newOrder.getCity()) && state.equals(newOrder.getState()); // Only send update if our order has changed from our city to elsewhere or if is now going to our city case Type.CACHE_ENTRY_MODIFIED: if (city.equals(oldOrder.getCity()) && state.equals(oldOrder.getState())) { // If old city matches then we have to compare if new order is no longer going to our city return !city.equals(newOrder.getCity()) || !state.equals(newOrder.getState()); } else { // If the old city doesn't match ours then only send update if new update does match ours return city.equals(newOrder.getCity()) && state.equals(newOrder.getState()); } // On remove we have to send update if our order was originally going to city case Type.CACHE_ENTRY_REMOVED: return city.equals(oldOrder.getCity()) && state.equals(oldOrder.getState()); } } } class OrderDateConverter implements CacheEventConverter<String, Order, Date> { private final String state; private final String city; public Date convert(String orderId, Order oldValue, Metadata oldMetadata, Order newValue, Metadata newMetadata, EventType eventType) { // If remove we do not care about date - this tells listener to remove its data if (eventType.isRemove()) { return null; } else if (eventType.isModification()) { if (state.equals(newOrder.getState()) && city.equals(newOrder.getCity()) { // If it is a modification meaning the destination has changed to ours then we allow it return newOrder.getDate(); } else { // If destination is no longer our city it means it was changed from us so send null return null; } } else { // This was a create so we always send date return newOrder.getDate(); } } }", "class OrderDateFilterConverter extends AbstractCacheEventFilterConverter<String, Order, Date> { private final String state; private final String city; public Date filterAndConvert(String orderId, Order oldValue, Metadata oldMetadata, Order newValue, Metadata newMetadata, EventType eventType) { // Remove if the date is not required - this tells listener to remove its data if (eventType.isRemove()) { return null; } else if (eventType.isModification()) { if 
(state.equals(newOrder.getState()) && city.equals(newOrder.getCity()) { // If it is a modification meaning the destination has changed to ours then we allow it return newOrder.getDate(); } else { // If destination is no longer our city it means it was changed from us so send null return null; } } else { // This was a create so we always send date return newOrder.getDate(); } } }", "OrderDateFilterConverter filterConverter = new OrderDateFilterConverter(\"NY\", \"New York\"); cache.addListener(listener, filterConveter, filterConverter);" ]
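As a complement to the filtering example above, the sketch below shows a clustered listener that also receives the existing cluster state when it is registered, by setting includeCurrentState to true, and that is later unregistered. Class and variable names are illustrative; the annotations are the same ones used in the earlier examples.

@Listener(clustered = true, includeCurrentState = true)
public class StatefulClusterListener {
    @CacheEntryCreated
    @CacheEntryModified
    @CacheEntryRemoved
    public void onEvent(CacheEntryEvent event) {
        // On registration, a created event is replayed for every entry already in the cache
        System.out.printf("Cluster-wide event received: %s%n", event);
    }
}

StatefulClusterListener listener = new StatefulClusterListener();
cache.addListener(listener);
// ... later, when notifications are no longer needed
cache.removeListener(listener);

Because a clustered listener can occasionally receive the same event more than once, the body of onEvent should stay idempotent, as noted above.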
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-Clustered_Listeners
Chapter 5. Storage classes and storage pools
Chapter 5. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 5.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForConsumer as the default option. If you choose the Immediate option, then the PV is created at the same time while creating the PVC. Select RBD Provisioner which is the plugin used for provisioning the persistent volumes. Select an existing Storage Pool from the list or create a new pool. Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 5.2. Creating a storage class for persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault. Use the following procedure to create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. Persistent volume encryption is only available for RBD PVs. You can configure access to the KMS in two different ways: Using vaulttokens : allows users to authenticate using a token Using vaulttenantsa (technology preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . See the relevant prerequisites section for your use case before following the procedure for creating the storage class: Section 5.2.1, "Prerequisites for using vaulttokens " Section 5.2.2, "Prerequisites for using vaulttenantsa " 5.2.1. Prerequisites for using vaulttokens The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. For more information, see Enabling key value and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Create a secret in the tenant's namespace as follows: On the OpenShift Container Platform web console, navigate to Workloads Secrets . Click Create Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. , follow the steps in Section 5.2.3, "Procedure for creating a storage class for PV encryption" . 5.2.2. Prerequisites for using vaulttenantsa The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. For more information, see Enabling key value and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: The Kubernetes authentication method must be configured before OpenShift Data Foundation can authenticate with and start using Vault. The instructions below create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault. Apply the following YAML to your Openshift cluster: Identify the secret name associated with the serviceaccount (SA) created above: Get the token and the CA certificate from the secret: Retrieve the OCP cluster endpoint: Use the information collected in the steps above to setup the kubernetes authentication method in Vault as shown below: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the Openshift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . In order to create a storageclass that uses the vaulttenantsa method for PV encrytpion, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. 
The sample yaml given below can be used to update or create the csi-kms-connection-detail ConfigMap: encryptionKMSType : should be set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress : The hostname or IP address of the vault server with the port number. vaultTLSServerName : (Optional) The vault TLS server name vaultAuthPath : (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace : (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace : (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath : The backend path in Vault where the encryption keys will be stored vaultCAFromSecret : The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret : The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret : The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName : (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. , follow the steps in Section 5.2.3, "Procedure for creating a storage class for PV encryption" . 5.2.3. Procedure for creating a storage class for PV encryption After performing the required prerequisites for either vaulttokens or vaulttenantsa , perform the steps below to create a storageclass with encryption enabled. Navigate to Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. There are two options available to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the the connection details available in the csi-kms-connection-details ConfigMap. Create new KMS connection : This is applicable for vaulttokens only. Key Management Service Provider is set to Vault by default. Enter a unique Vault Service Name , host Address of the Vault server ( https://<hostname or ip> ), and Port number. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration. Enter the key value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional : Enter TLS Server Name and Vault Enterprise Namespace . Provide CA Certificate , Client Certificate and Client Private Key by uploading the respective PEM encoded certificate file. Click Save . Click Save . Click Create . Edit the ConfigMap to add the VAULT_BACKEND or vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. 
Note VAULT_BACKEND or vaultBackend are optional parameters that has added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) Edit ConfigMap . Add the VAULT_BACKEND or vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID. You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.2.3.1. Overriding Vault connection details using tenant ConfigMap The Vault connections details can be reconfigured per tenant by creating a ConfigMap in the Openshift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace will override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overidden for the given tenant namespace can be specified under the data section as shown below: Once the yaml is edited, click on Create .
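For reference, a storage class created with encryption enabled through the procedure above corresponds to YAML roughly like the following sketch. The pool name, cluster ID, and encryptionKMSID values are placeholders that the console fills in from your cluster and KMS connection details, and the console also adds the CSI secret references that are omitted here.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-rbd-sc
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  clusterID: openshift-storage
  pool: <storage_pool_name>
  encrypted: "true"
  encryptionKMSID: 1-vault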
[ "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF", "apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io", "oc -n openshift-storage get sa rbd-csi-vault-token-review -o jsonpath=\"{.secrets[*]['name']}\"", "oc get secret <secret associated with SA> -o jsonpath=\"{.data['token']}\" | base64 --decode; echo oc get secret <secret associated with SA> -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo", "oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\"", "vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=<SA token> kubernetes_host=<OCP cluster endpoint> kubernetes_ca_cert=<SA CA certificate>", "vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>", "apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details", "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"KMS_PROVIDER\": \"vaulttokens\", \"KMS_SERVICE_NAME\": \"1-vault\", [...] \"VAULT_BACKEND\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv-v2\" }", "--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/storage-classes-and-storage-pools_osp
3.2. Using the Red Hat Customer Portal
3.2. Using the Red Hat Customer Portal The Red Hat Customer Portal at https://access.redhat.com/ is the main customer-oriented resource for official information related to Red Hat products. You can use it to find documentation, manage your subscriptions, download products and updates, open support cases, and learn about security updates. 3.2.1. Viewing Security Advisories on the Customer Portal To view security advisories (errata) relevant to the systems for which you have active subscriptions, log into the Customer Portal at https://access.redhat.com/ and click on the Download Products & Updates button on the main page. When you enter the Software & Download Center page, continue by clicking on the Errata button to see a list of advisories pertinent to your registered systems. To browse a list of all security updates for all active Red Hat products, go to Security Security Updates Active Products using the navigation menu at the top of the page. Click on the erratum code in the left part of the table to display more detailed information about the individual advisories. The page contains not only a description of the given erratum, including its causes, consequences, and required fixes, but also a list of all packages that the particular erratum updates along with instructions on how to apply the updates. The page also includes links to relevant references, such as related CVE . 3.2.2. Navigating CVE Customer Portal Pages The CVE ( Common Vulnerabilities and Exposures ) project, maintained by The MITRE Corporation, is a list of standardized names for vulnerabilities and security exposures. To browse a list of CVE that pertain to Red Hat products on the Customer Portal, log into your account at https://access.redhat.com/ and navigate to Security Resources CVE Database using the navigation menu at the top of the page. Click on the CVE code in the left part of the table to display more detailed information about the individual vulnerabilities. The page contains not only a description of the given CVE but also a list of affected Red Hat products along with links to relevant Red Hat errata. 3.2.3. Understanding Issue Severity Classification All security issues discovered in Red Hat products are assigned an impact rating by Red Hat Product Security according to the severity of the problem. The four-point scale consists of the following levels: Low, Moderate, Important, and Critical. In addition to that, every security issue is rated using the Common Vulnerability Scoring System ( CVSS ) base scores. Together, these ratings help you understand the impact of security issues, allowing you to schedule and prioritize upgrade strategies for your systems. Note that the ratings reflect the potential risk of a given vulnerability, which is based on a technical analysis of the bug, not the current threat level. This means that the security impact rating does not change if an exploit is released for a particular flaw. To see a detailed description of the individual levels of severity ratings on the Customer Portal, visit the Severity Ratings page.
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-using_the_red_hat_customer_portal
Chapter 1. Preparing to install on GCP
Chapter 1. Preparing to install on GCP 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on GCP Before installing OpenShift Container Platform on Google Cloud Platform (GCP), you must create a service account and configure a GCP project. See Configuring a GCP project for details about creating a project, enabling API services, configuring DNS, GCP account limits, and supported GCP regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating long-term credentials for GCP for other options. 1.3. Choosing a method to install OpenShift Container Platform on GCP You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on GCP : You can install OpenShift Container Platform on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on GCP : You can install a customized cluster on GCP infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on GCP with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on GCP in a restricted network : You can install OpenShift Container Platform on GCP on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the GCP APIs. Installing a cluster into an existing Virtual Private Cloud : You can install OpenShift Container Platform on an existing GCP Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits on creating new accounts or infrastructure. Installing a private cluster on an existing VPC : You can install a private cluster on an existing GCP VPC. 
You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on GCP infrastructure that you provision, by using one of the following methods: Installing a cluster on GCP with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP infrastructure that you provide. You can use the provided Deployment Manager templates to assist with the installation. Installing a cluster with shared VPC on user-provisioned infrastructure in GCP : You can use the provided Deployment Manager templates to create GCP resources in a shared VPC infrastructure. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure : You can install OpenShift Container Platform on GCP in a restricted network with user-provisioned infrastructure. By creating an internal mirror of the installation release content, you can install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.4. Next steps Configuring a GCP project
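The project preparation that section 1.2 points to can also be sketched from the gcloud CLI. This is a minimal illustration only: the project ID, the service account name, and the exact list of APIs to enable are assumptions here, and Configuring a GCP project remains the authoritative reference:

    gcloud projects create my-ocp-project
    gcloud services enable compute.googleapis.com dns.googleapis.com iam.googleapis.com --project my-ocp-project
    gcloud iam service-accounts create ocp-installer --project my-ocp-project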
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_gcp/preparing-to-install-on-gcp
Chapter 2. Starting and Stopping Apache Karaf
Chapter 2. Starting and Stopping Apache Karaf Abstract Apache Karaf provides simple command-line tools for starting and stopping the server. 2.1. Starting Apache Karaf The default way to deploy the Apache Karaf runtime is to deploy it as a standalone server with an active console. You can also deploy the runtime as a background process without a console. 2.1.1. Setting up your environment You can start the Karaf runtime directly from the bin subdirectory of your installation, without modifying your environment. However, if you want to start it in a different folder you need to add the bin directory of your Karaf installation to the PATH environment variable, as follows: Windows Linux/UNIX 2.1.2. Launching the runtime in console mode If you are launching the Karaf runtime from the installation directory use the following command: Windows Linux/UNIX If Karaf starts up correctly you should see the following on the console: Note Since version Fuse 6.2.1, launching in console mode creates two processes: the parent process ./bin/karaf , which is executing the Karaf console; and the child process, which is executing the Karaf server in a java JVM. The shutdown behaviour remains the same as before, however. That is, you can shut down the server from the console using either Ctrl-D or osgi:shutdown , which kills both processes. 2.1.3. Launching the runtime in server mode Launching in server mode runs Apache Karaf in the background, without a local console. You would then connect to the running instance using a remote console. See Section 17.2, "Connecting and Disconnecting Remotely" for details. To launch Karaf in server mode, run the following Windows Linux/UNIX 2.1.4. Launching the runtime in client mode In production environments you might want to have a runtime instance accessible using only a local console. In other words, you cannot connect to the runtime remotely through the SSH console port. You can do this by launching the runtime in client mode, using the following command: Windows Linux/UNIX Note Launching in client mode suppresses only the SSH console port (usually port 8101). Other Karaf server ports (for example, the JMX management RMI ports) are opened as normal. 2.1.5. Running Fuse in debug mode Running Fuse in debug mode helps identify and resolve errors more efficiently. This option is disabled by default. When enabled, Fuse starts a JDWP socket on port 5005 . You have three approaches to run Fuse in debug mode . Section 2.1.5.1, "Use the Karaf environment variable" Section 2.1.5.2, "Run Fuse debug" Section 2.1.5.3, "Run Fuse debugs" 2.1.5.1. Use the Karaf environment variable This approach enables the KARAF_DEBUG environment variable ( =1 ), and then you start the container. 2.1.5.2. Run Fuse debug This approach runs debug where the suspend option is set to n (no). 2.1.5.3. Run Fuse debugs This approach runs debugs where the suspend option is set to y (yes). Note Setting suspend to yes causes the JVM to pause just before running main() until a debugger is attached and then it resumes execution. 2.2. Stopping Apache Karaf You can stop an instance of Apache Karaf either from within a console, or using a stop script. 2.2.1. Stopping an instance from a local console If you launched the Karaf instance by running fuse or fuse client , you can stop it by doing one of the following at the karaf> prompt: Type shutdown Press Ctrl + D 2.2.2. 
Stopping an instance running in server mode You can stop a locally running Karaf instance (root container), by invoking the stop(.bat) script from the InstallDir/bin directory, as follows: Windows Linux/UNIX The shutdown mechanism invoked by the Karaf stop script is similar to the shutdown mechanism implemented in Apache Tomcat. The Karaf server opens a dedicated shutdown port ( not the same as the SSH port) to receive the shutdown notification. By default, the shutdown port is chosen randomly, but you can configure it to use a specific port if you prefer. You can optionally customize the shutdown port by setting the following properties in the InstallDir/etc/config.properties file: karaf.shutdown.port Specifies the TCP port to use as the shutdown port. Setting this property to -1 disables the port. Default is 0 (for a random port). Note If you wanted to use the bin/stop script to shut down the Karaf server running on a remote host, you would need to set this property equal to the remote host's shutdown port. But beware that this setting also affects the Karaf server located on the same host as the etc/config.properties file. karaf.shutdown.host Specifies the hostname to which the shutdown port is bound. This setting could be useful on a multi-homed host. Defaults to localhost . Note If you wanted to use the bin/stop script to shut down the Karaf server running on a remote host, you would need to set this property to the hostname (or IP address) of the remote host. But beware that this setting also affects the Karaf server located on the same host as the etc/config.properties file. karaf.shutdown.port.file After the Karaf instance starts up, it writes the current shutdown port to the file specified by this property. The stop script reads the file specified by this property to discover the value of the current shutdown port. Defaults to ${karaf.data}/port . karaf.shutdown.command Specifies the UUID value that must be sent to the shutdown port in order to trigger shutdown. This provides an elementary level of security, as long as the UUID value is kept a secret. For example, the etc/config.properties file could be read-protected to prevent this value from being read by ordinary users. When Apache Karaf is started for the very first time, a random UUID value is automatically generated and this setting is written to the end of the etc/config.properties file. Alternatively, if karaf.shutdown.command is already set, the Karaf server uses the pre-existing UUID value (which enables you to customize the UUID setting, if required). Note If you wanted to use the bin/stop script to shut down the Karaf server running on a remote host, you would need to set this property to be equal to the value of the remote host's karaf.shutdown.command . But beware that this setting also affects the Karaf server located on the same host as the etc/config.properties file. 2.2.3. Stopping a remote instance You can stop a container instance running on a remote host as described in Section 17.3, "Stopping a Remote Container" .
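As an illustration of the shutdown properties described above, a fixed shutdown port could be configured in InstallDir/etc/config.properties roughly as follows. The port number and UUID are placeholder values, not defaults:

    karaf.shutdown.port = 9901
    karaf.shutdown.host = localhost
    karaf.shutdown.command = 4f6c2e1a-0000-0000-0000-000000000000

After a restart, the bin/stop script reads the current port from the file named by karaf.shutdown.port.file and sends the configured command string to trigger shutdown.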
[ "set PATH=%PATH%;InstallDir\\bin", "export PATH=USDPATH,InstallDir/bin`", "bin\\fuse.bat", "./bin/fuse", "Red Hat Fuse starting up. Press Enter to open the shell now 100% [========================================================================] Karaf started in 8s. Bundle stats: 220 active, 220 total ____ _ _ _ _ _____ | _ \\ ___ __| | | | | | __ _| |_ | ___| _ ___ ___ | |_) / _ \\/ _` | | |_| |/ _` | __| | |_ | | | / __|/ _ | _ < __/ (_| | | _ | (_| | |_ | _|| |_| \\__ \\ __/ |_| \\_\\___|\\__,_| |_| |_|\\__,_|\\__| |_| \\__,_|___/___| Fuse (7.x.x.fuse-xxxxxx-redhat-xxxxx) http://www.redhat.com/products/jbossenterprisemiddleware/fuse/ Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command. Open a browser to http://localhost:8181/hawtio to access the management console Hit '<ctrl-d>' or 'shutdown' to shutdown Red Hat Fuse. karaf@root()>", "bin\\start.bat", "./bin/start", "bin\\fuse.bat client", "./bin/fuse client", "export KARAF_DEBUG=1 bin/start", "bin/fuse debug", "bin/fuse debugs", "bin\\stop.bat", "./bin/stop" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/esbruntimestartstop
Chapter 1. The OpenStack Dashboard
Chapter 1. The OpenStack Dashboard The OpenStack dashboard is a web-based graphical user interface for managing OpenStack services. To access the browser dashboard, the dashboard service must be installed, and you must know the dashboard host name (or IP) and login password. The dashboard URL is:
[ "http:// HOSTNAME /dashboard/" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/introduction_to_the_openstack_dashboard/the_openstack_dashboard
4.3.5. Removing Physical Volumes from a Volume Group
4.3.5. Removing Physical Volumes from a Volume Group To remove unused physical volumes from a volume group, use the vgreduce command. The vgreduce command shrinks a volume group's capacity by removing one or more empty physical volumes. This frees those physical volumes to be used in different volume groups or to be removed from the system. Before removing a physical volume from a volume group, you can make sure that the physical volume is not used by any logical volumes by using the pvdisplay command. If the physical volume is still being used, you must migrate the data to another physical volume by using the pvmove command, and then use the vgreduce command to remove the physical volume. The following command removes the physical volume /dev/hda1 from the volume group my_volume_group .
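For example, if pvdisplay reports that /dev/hda1 still has allocated extents, a minimal sequence might look like the following, assuming the volume group has enough free extents elsewhere to receive the data:

    pvmove /dev/hda1
    vgreduce my_volume_group /dev/hda1

pvmove relocates the allocated extents to other physical volumes in the same volume group, after which vgreduce can remove the now-empty physical volume.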
[ "pvdisplay /dev/hda1 -- Physical volume --- PV Name /dev/hda1 VG Name myvg PV Size 1.95 GB / NOT usable 4 MB [LVM: 122 KB] PV# 1 PV Status available Allocatable yes (but full) Cur LV 1 PE Size (KByte) 4096 Total PE 499 Free PE 0 Allocated PE 499 PV UUID Sd44tK-9IRw-SrMC-MOkn-76iP-iftz-OVSen7", "vgreduce my_volume_group /dev/hda1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_remove_pv
Chapter 16. tags
Chapter 16. tags Optional. An operator-defined list of tags placed on each log by the collector or normalizer. The payload can be a string with whitespace-delimited string tokens or a JSON list of string tokens. Data type text file The path to the log file from which the collector reads this log entry. Normally, this is a path in the /var/log file system of a cluster node. Data type text offset The offset value. Can represent bytes to the start of the log line in the file (zero- or one-based), or log line numbers (zero- or one-based), so long as the values are strictly monotonically increasing in the context of a single log file. The values are allowed to wrap, representing a new version of the log file (rotation). Data type long
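Taken together, these fields might appear in a collected log record roughly as follows. The values shown are illustrative only and do not come from a real cluster:

    {
      "tags": ["audit", "prod-cluster"],
      "file": "/var/log/pods/openshift-apiserver_apiserver-abc123/openshift-apiserver/0.log",
      "offset": 339125
    }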
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/tags
Chapter 3. Deploy standalone Multicloud Object Gateway
Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . 
In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
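If you prefer the command line to the web console for this verification step, the same pods can be listed with the oc client; pod names vary by cluster, but each entry should report a Running status:

    oc get pods -n openshift-storage
    oc get pods -n openshift-storage | grep noobaa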
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_amazon_web_services/deploy-standalone-multicloud-object-gateway
Chapter 5. Viewing and exporting logs
Chapter 5. Viewing and exporting logs Activity logs are gathered for all repositories and namespaces in Red Hat Quay. Viewing usage logs of Red Hat Quay can provide valuable insights and benefits for both operational and security purposes. Usage logs might reveal the following information: Resource Planning : Usage logs can provide data on the number of image pulls, pushes, and overall traffic to your registry. User Activity : Logs can help you track user activity, showing which users are accessing and interacting with images in the registry. This can be useful for auditing, understanding user behavior, and managing access controls. Usage Patterns : By studying usage patterns, you can gain insights into which images are popular, which versions are frequently used, and which images are rarely accessed. This information can help prioritize image maintenance and cleanup efforts. Security Auditing : Usage logs enable you to track who is accessing images and when. This is crucial for security auditing, compliance, and investigating any unauthorized or suspicious activity. Image Lifecycle Management : Logs can reveal which images are being pulled, pushed, and deleted. This information is essential for managing image lifecycles, including deprecating old images and ensuring that only authorized images are used. Compliance and Regulatory Requirements : Many industries have compliance requirements that mandate tracking and auditing of access to sensitive resources. Usage logs can help you demonstrate compliance with such regulations. Identifying Abnormal Behavior : Unusual or abnormal patterns in usage logs can indicate potential security breaches or malicious activity. Monitoring these logs can help you detect and respond to security incidents more effectively. Trend Analysis : Over time, usage logs can provide trends and insights into how your registry is being used. This can help you make informed decisions about resource allocation, access controls, and image management strategies. There are multiple ways of accessing log files: Viewing logs through the web UI. Exporting logs so that they can be saved externally. Accessing log entries using the API. To access logs, you must have administrative privileges for the selected repository or namespace. Note A maximum of 100 log results are available at a time via the API. To gather more results than that, you must use the log exporter feature described in this chapter. 5.1. Viewing logs using the UI Use the following procedure to view log entries for a repository or namespace using the web UI. Procedure Navigate to a repository or namespace for which you are an administrator. In the navigation pane, select Usage Logs . Optional. On the usage logs page: Set the date range for viewing log entries by adding dates to the From and to boxes. By default, the UI shows you the most recent week of log entries. Type a string into the Filter Logs box to display log entries that contain the specified keyword. For example, you can type delete to filter the logs to show deleted tags. Under Description , toggle the arrow of a log entry to see more, or less, text associated with a specific log entry. 5.2. Exporting repository logs You can obtain a larger number of log files and save them outside of the Red Hat Quay database by using the Export Logs feature. This feature has the following benefits and constraints: You can choose a range of dates for the logs you want to gather from a repository. 
You can request that the logs be sent to you by an email attachment or directed to a callback URL. To export logs, you must be an administrator of the repository or namespace. 30 days' worth of logs are retained for all users. Export logs only gathers log data that was previously produced. It does not stream logging data. Your Red Hat Quay instance must be configured for external storage for this feature. Local storage does not work for exporting logs. When logs are gathered and made available to you, you should immediately copy that data if you want to save it. By default, the data expires after one hour. Use the following procedure to export logs. Procedure Select a repository for which you have administrator privileges. In the navigation pane, select Usage Logs . Optional. If you want to specify particular dates, enter the range in the From and to boxes. Click the Export Logs button. An Export Usage Logs pop-up appears. Enter an email address or callback URL to receive the exported log. For the callback URL, you can use a URL to a specified domain, for example, <webhook.site>. Select Start Logs Export to start the process for gathering the selected log entries. Depending on the amount of logging data being gathered, this can take anywhere from a few minutes to several hours to complete. When the log export is completed, one of the following two events happens: An email is received, alerting you to the availability of your requested exported log entries. A successful status of your log export request from the webhook URL is returned. Additionally, a link to the exported data is made available for you to download the logs. Note The URL points to a location in your Red Hat Quay external storage and is set to expire within one hour. Make sure that you copy the exported logs before the expiration time if you intend to keep your logs.
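As noted earlier in this chapter, log entries can also be retrieved through the API, subject to the 100-result limit per request. The following curl sketch is illustrative only: the hostname, repository path, and token are placeholders, and the exact endpoint and query parameters should be confirmed against the Red Hat Quay API reference:

    curl -s -H "Authorization: Bearer <oauth_token>" \
      "https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/logs?starttime=<mm/dd/yyyy>&endtime=<mm/dd/yyyy>"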
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/use_red_hat_quay/use-quay-view-export-logs
Chapter 4. Troubleshooting the Block Storage backup service
Chapter 4. Troubleshooting the Block Storage backup service There are two common scenarios that cause many of the issues that occur with the backup service: When the cinder-backup service starts, it connects to its configured backend and uses this as a target for backups. Problems with this connection can cause services to fail. When backups are requested, the backup service connects to the volume service and attaches the requested volume. Problems with this connection are evident only during backup time. In either case, the logs contain messages that describe the error. For more information about log files and services, see Location of log files for OpenStack services in the Logging, Monitoring and Troubleshooting Guide . For more information about log locations and troubleshooting suggestions, see Block Storage (cinder) Log Files in the Logging, Monitoring and Troubleshooting Guide . 4.1. Verifying services You can diagnose many issues by verifying that services are available and by checking log files for error messages. After you verify the status of the services, check the cinder-backup.log file. The Block Storage Backup service log is located in /var/log/containers/cinder/cinder-backup.log . Procedure Run the cinder show command on the volume to see if it is stored by the host: Run the cinder service-list command to view running services: Verify that the expected services are available. 4.2. Querying the status of a failed backup Backups are asynchronous. The Block Storage backup service performs a small number of static checks upon receiving an API request, such as checking for an invalid volume reference ( missing ) or a volume that is in-use or attached to an instance. The in-use case requires you to use the --force option. Note Using the --force option means that I/O is not quiesced and the resulting volume image may be corrupt. If the API accepts the request, the backup occurs in the background. Usually, the CLI returns immediately even if the backup fails or is approaching failure. You can query the status of a backup by using the cinder backup API. If an error occurs, review the logs to discover the cause. 4.3. Using Pacemaker to manage resources By default, Pacemaker deploys the Block Storage backup service. Pacemaker configures virtual IP addresses, containers, services, and other features as resources in a cluster to ensure that the defined set of Red Hat OpenStack Platform cluster resources are running and available. When a service or an entire node in a cluster fails, Pacemaker can restart the resource, take the node out of the cluster, or reboot the node. Requests to most services are through HAProxy. For information about how to use Pacemaker for troubleshooting, see Managing high availability services with Pacemaker in the High Availability Deployment and Usage guide.
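For example, after requesting a backup you might check its progress with the backup subcommands of the same client used above; the backup ID is a placeholder:

    cinder backup-list
    cinder backup-show <backup-id>

A backup that ends up in the error state failed in the background; review /var/log/containers/cinder/cinder-backup.log to find the cause.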
[ "cinder show", "cinder service-list +------------------+--------------------+------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+--------------------+------+---------+-------+----------------------------+-----------------+ | cinder-backup | hostgroup | nova | enabled | up | 2017-05-15T02:42:25.000000 | - | | cinder-scheduler | hostgroup | nova | enabled | up | 2017-05-15T02:42:25.000000 | - | | cinder-volume | hostgroup@sas-pool | nova | enabled | down | 2017-05-14T03:04:01.000000 | - | | cinder-volume | hostgroup@ssd-pool | nova | enabled | down | 2017-05-14T03:04:01.000000 | - | +------------------+--------------------+------+---------+-------+----------------------------+-----------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/block_storage_backup_guide/assembly_backup-troubleshooting
Chapter 8. Advanced Resource types
Chapter 8. Advanced Resource types This chapter describes advanced resource types that Pacemaker supports. 8.1. Resource Clones You can clone a resource so that the resource can be active on multiple nodes. For example, you can use cloned resources to configure multiple instances of an IP resource to distribute throughout a cluster for node balancing. You can clone any resource provided the resource agent supports it. A clone consists of one resource or one resource group. Note Only resources that can be active on multiple nodes at the same time are suitable for cloning. For example, a Filesystem resource mounting a non-clustered file system such as ext4 from a shared memory device should not be cloned. Since the ext4 partition is not cluster aware, this file system is not suitable for read/write operations occurring from multiple nodes at the same time. 8.1.1. Creating and Removing a Cloned Resource You can create a resource and a clone of that resource at the same time with the following command. The name of the clone will be resource_id -clone . You cannot create a resource group and a clone of that resource group in a single command. Alternately, you can create a clone of a previously-created resource or resource group with the following command. The name of the clone will be resource_id -clone or group_name -clone . Note You need to configure resource configuration changes on one node only. Note When configuring constraints, always use the name of the group or clone. When you create a clone of a resource, the clone takes on the name of the resource with -clone appended to the name. The following command creates a resource of type apache named webfarm and a clone of that resource named webfarm-clone . Use the following command to remove a clone of a resource or a resource group. This does not remove the resource or resource group itself. For information on resource options, see Section 5.1, "Resource Creation" . Table 8.1, "Resource Clone Options" describes the options you can specify for a cloned resource. Table 8.1. Resource Clone Options Field Description priority, target-role, is-managed Options inherited from resource that is being cloned, as described in Table 5.3, "Resource Meta Options" . clone-max How many copies of the resource to start. Defaults to the number of nodes in the cluster. clone-node-max How many copies of the resource can be started on a single node; the default value is 1 . clone-min The number of instances that must be running before any dependent resources can run. This parameter can be of particular use for services behind a virtual IP and HAProxy, such as is often required for an OpenStack platform. notify When stopping or starting a copy of the clone, tell all the other copies beforehand and when the action was successful. Allowed values: false , true . The default value is false . globally-unique Does each copy of the clone perform a different function? Allowed values: false , true If the value of this option is false , these resources behave identically everywhere they are running and thus there can be only one copy of the clone active per machine. If the value of this option is true , a copy of the clone running on one machine is not equivalent to another instance, whether that instance is running on another node or on the same node. The default value is true if the value of clone-node-max is greater than one; otherwise the default value is false . ordered Should the copies be started in series (instead of in parallel). Allowed values: false , true . 
The default value is false . interleave Changes the behavior of ordering constraints (between clones/masters) so that copies of the first clone can start or stop as soon as the copy on the same node of the second clone has started or stopped (rather than waiting until every instance of the second clone has started or stopped). Allowed values: false , true . The default value is false . 8.1.2. Clone Constraints In most cases, a clone will have a single copy on each active cluster node. You can, however, set clone-max for the resource clone to a value that is less than the total number of nodes in the cluster. If this is the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently to those for regular resources except that the clone's id must be used. The following command creates a location constraint for the cluster to preferentially assign resource clone webfarm-clone to node1 . Ordering constraints behave slightly differently for clones. In the example below, webfarm-stats will wait until all copies of webfarm-clone that need to be started have done so before starting itself. Only if no copies of webfarm-clone can be started then webfarm-stats will be prevented from being active. Additionally, webfarm-clone will wait for webfarm-stats to be stopped before stopping itself. Colocation of a regular (or group) resource with a clone means that the resource can run on any machine with an active copy of the clone. The cluster will choose a copy based on where the clone is running and the resource's own location preferences. Colocation between clones is also possible. In such cases, the set of allowed locations for the clone is limited to nodes on which the clone is (or will be) active. Allocation is then performed as normally. The following command creates a colocation constraint to ensure that the resource webfarm-stats runs on the same node as an active copy of webfarm-clone . 8.1.3. Clone Stickiness To achieve a stable allocation pattern, clones are slightly sticky by default. If no value for resource-stickiness is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving copies around the cluster.
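To connect the options in Table 8.1 with the command syntax shown for this chapter, a clone limited to two copies cluster-wide could be created roughly as follows; the resource name and option values are illustrative only:

    pcs resource create webfarm apache clone clone-max=2 clone-node-max=1

An explicit stickiness value can later be set on the clone, for example with pcs resource meta webfarm-clone resource-stickiness=1 , if the default described above is not wanted.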
[ "pcs resource create resource_id standard:provider:type | type [ resource options ] clone [meta clone_options ]", "pcs resource clone resource_id | group_name [ clone_options ]", "pcs resource create webfarm apache clone", "pcs resource unclone resource_id | group_name", "pcs constraint location webfarm-clone prefers node1", "pcs constraint order start webfarm-clone then webfarm-stats", "pcs constraint colocation add webfarm-stats with webfarm-clone" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-advancedresource-haar
Chapter 3. Types of instance storage
Chapter 3. Types of instance storage The virtual storage that is available to an instance is defined by the flavor used to launch the instance. The following virtual storage resources can be associated with an instance: Instance disk Ephemeral storage Swap storage Persistent block storage volumes Config drive 3.1. Instance disk The instance disk created to store instance data depends on the boot source that you use to create the instance. The instance disk of an instance that you boot from an image is controlled by the Compute service and deleted when the instance is deleted. The instance disk of an instance that you boot from a volume is a persistent volume provided by the Block Storage service. 3.2. Instance ephemeral storage You can specify that an ephemeral disk is created for the instance by choosing a flavor that configures an ephemeral disk. This ephemeral storage is an empty additional disk that is available to an instance. This storage value is defined by the instance flavor. The default value is 0, meaning that no secondary ephemeral storage is created. The ephemeral disk appears in the same way as a plugged-in hard drive or thumb drive. It is available as a block device, which you can check using the lsblk command. You can mount it and use it however you normally use a block device. You cannot preserve or reference that disk beyond the instance it is attached to. Note Ephemeral storage data is not included in instance snapshots, and is not available on instances that are shelved and then unshelved. 3.3. Instance swap storage You can specify that a swap disk is created for the instance by choosing a flavor that configures a swap disk. This swap storage is an additional disk that is available to the instance for use as swap space for the running operating system. 3.4. Instance block storage A block storage volume is persistent storage that is available to an instance regardless of the state of the running instance. You can attach multiple block devices to an instance, one of which can be a bootable volume. Note When you use a block storage volume for your instance disk data, the block storage volume persists for any instance rebuilds, even when an instance is rebuilt with a new image that requests that a new volume is created. 3.5. Config drive You can attach a config drive to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it. You can use the config drive as a source for cloud-init information. Config drives are useful when combined with cloud-init for server bootstrapping, and when you want to pass large files to your instances. For example, you can configure cloud-init to automatically mount the config drive and run the setup scripts during the initial instance boot. Config drives are created with the volume label of config-2 , and attached to the instance when it boots. The contents of any additional files passed to the config drive are added to the user_data file in the openstack/{version}/ directory of the config drive. cloud-init retrieves the user data from this file.
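As a rough illustration of how the flavor controls these disks, a flavor with a 2 GB ephemeral disk and 1 GB of swap could be defined as follows, and a config drive can be requested when the instance is launched. The flavor name, image name, and sizes are placeholders:

    openstack flavor create --vcpus 2 --ram 4096 --disk 20 --ephemeral 2 --swap 1024 m1.custom
    openstack server create --flavor m1.custom --image <image> --config-drive true myinstance

Inside the instance, lsblk then lists the root disk, the ephemeral disk, and the swap disk as separate block devices.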
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/con_types-of-instance-storage_osp
4.5. Securing DNS Traffic with DNSSEC
4.5. Securing DNS Traffic with DNSSEC 4.5.1. Introduction to DNSSEC DNSSEC is a set of Domain Name System Security Extensions ( DNSSEC ) that enables a DNS client to authenticate and check the integrity of responses from a DNS nameserver in order to verify their origin and to determine if they have been tampered with in transit. 4.5.2. Understanding DNSSEC For connecting over the Internet, a growing number of websites now offer the ability to connect securely using HTTPS . However, before connecting to an HTTPS webserver, a DNS lookup must be performed, unless you enter the IP address directly. These DNS lookups are done insecurely and are subject to man-in-the-middle attacks due to lack of authentication. In other words, a DNS client cannot have confidence that the replies that appear to come from a given DNS nameserver are authentic and have not been tampered with. More importantly, a recursive nameserver cannot be sure that the records it obtains from other nameservers are genuine. The DNS protocol did not provide a mechanism for the client to ensure it was not subject to a man-in-the-middle attack. DNSSEC was introduced to address the lack of authentication and integrity checks when resolving domain names using DNS . It does not address the problem of confidentiality. Publishing DNSSEC information involves digitally signing DNS resource records as well as distributing public keys in such a way as to enable DNS resolvers to build a hierarchical chain of trust. Digital signatures for all DNS resource records are generated and added to the zone as digital signature resource records ( RRSIG ). The public key of a zone is added as a DNSKEY resource record. To build the hierarchical chain, hashes of the DNSKEY are published in the parent zone as Delegation of Signing ( DS ) resource records. To facilitate proof of non-existence, the NextSECure ( NSEC ) and NSEC3 resource records are used. In a DNSSEC signed zone, each resource record set ( RRset ) has a corresponding RRSIG resource record. Note that records used for delegation to a child zone (NS and glue records) are not signed; these records appear in the child zone and are signed there. Processing DNSSEC information is done by resolvers that are configured with the root zone public key. Using this key, resolvers can verify the signatures used in the root zone. For example, the root zone has signed the DS record for .com . The root zone also serves NS and glue records for the .com name servers. The resolver follows this delegation and queries for the DNSKEY record of .com using these delegated name servers. The hash of the DNSKEY record obtained should match the DS record in the root zone. If so, the resolver will trust the obtained DNSKEY for .com . In the .com zone, the RRSIG records are created by the .com DNSKEY. This process is repeated similarly for delegations within .com , such as redhat.com . Using this method, a validating DNS resolver only needs to be configured with one root key while it collects many DNSKEYs from around the world during its normal operation. If a cryptographic check fails, the resolver will return SERVFAIL to the application. DNSSEC has been designed in such a way that it will be completely invisible to applications not supporting DNSSEC. If a non-DNSSEC application queries a DNSSEC capable resolver, it will receive the answer without any of these new resource record types such as RRSIG. 
However, the DNSSEC capable resolver will still perform all cryptographic checks, and will still return a SERVFAIL error to the application if it detects malicious DNS answers. DNSSEC protects the integrity of the data between DNS servers (authoritative and recursive), it does not provide security between the application and the resolver. Therefore, it is important that the applications are given a secure transport to their resolver. The easiest way to accomplish that is to run a DNSSEC capable resolver on localhost and use 127.0.0.1 in /etc/resolv.conf . Alternatively a VPN connection to a remote DNS server could be used. Understanding the Hotspot Problem Wi-Fi Hotspots or VPNs rely on " DNS lies " : Captive portals tend to hijack DNS in order to redirect users to a page where they are required to authenticate (or pay) for the Wi-Fi service. Users connecting to a VPN often need to use an " internal only " DNS server in order to locate resources that do not exist outside the corporate network. This requires additional handling by software. For example, dnssec-trigger can be used to detect if a Hotspot is hijacking the DNS queries and unbound can act as a proxy nameserver to handle the DNSSEC queries. Choosing a DNSSEC Capable Recursive Resolver To deploy a DNSSEC capable recursive resolver, either BIND or unbound can be used. Both enable DNSSEC by default and are configured with the DNSSEC root key. To enable DNSSEC on a server, either will work however the use of unbound is preferred on mobile devices, such as notebooks, as it allows the local user to dynamically reconfigure the DNSSEC overrides required for Hotspots when using dnssec-trigger , and for VPNs when using Libreswan . The unbound daemon further supports the deployment of DNSSEC exceptions listed in the etc/unbound/*.d/ directories which can be useful to both servers and mobile devices. 4.5.3. Understanding Dnssec-trigger Once unbound is installed and configured in /etc/resolv.conf , all DNS queries from applications are processed by unbound . dnssec-trigger only reconfigures the unbound resolver when triggered to do so. This mostly applies to roaming client machines, such as laptops, that connect to different Wi-Fi networks. The process is as follows: NetworkManager " triggers " dnssec-trigger when a new DNS server is obtained through DHCP . Dnssec-trigger then performs a number of tests against the server and decides whether or not it properly supports DNSSEC. If it does, then dnssec-trigger reconfigures unbound to use that DNS server as a forwarder for all queries. If the tests fail, dnssec-trigger will ignore the new DNS server and try a few available fall-back methods. If it determines that an unrestricted port 53 ( UDP and TCP ) is available, it will tell unbound to become a full recursive DNS server without using any forwarder. If this is not possible, for example because port 53 is blocked by a firewall for everything except reaching the network's DNS server itself, it will try to use DNS to port 80, or TLS encapsulated DNS to port 443. Servers running DNS on port 80 and 443 can be configured in /etc/dnssec-trigger/dnssec-trigger.conf . Commented out examples should be available in the default configuration file. If these fall-back methods also fail, dnssec-trigger offers to either operate insecurely, which would bypass DNSSEC completely, or run in " cache only " mode where it will not attempt new DNS queries but will answer for everything it already has in the cache. 
Wi-Fi Hotspots increasingly redirect users to a sign-on page before granting access to the Internet. During the probing sequence outlined above, if a redirection is detected, the user is prompted to ask if a login is required to gain Internet access. The dnssec-trigger daemon continues to probe for DNSSEC resolvers every ten seconds. See Section 4.5.8, "Using Dnssec-trigger" for information on using the dnssec-trigger graphical utility. 4.5.4. VPN Supplied Domains and Name Servers Some types of VPN connections can convey a domain and a list of nameservers to use for that domain as part of the VPN tunnel setup. On Red Hat Enterprise Linux , this is supported by NetworkManager . This means that the combination of unbound , dnssec-trigger , and NetworkManager can properly support domains and name servers provided by VPN software. Once the VPN tunnel comes up, the local unbound cache is flushed for all entries of the domain name received, so that queries for names within the domain name are fetched fresh from the internal name servers reached using the VPN. When the VPN tunnel is terminated, the unbound cache is flushed again to ensure any queries for the domain will return the public IP addresses, and not the previously obtained private IP addresses. See Section 4.5.11, "Configuring DNSSEC Validation for Connection Supplied Domains" . 4.5.5. Recommended Naming Practices Red Hat recommends that both static and transient names match the fully-qualified domain name ( FQDN ) used for the machine in DNS , such as host.example.com . The Internet Corporation for Assigned Names and Numbers (ICANN) sometimes adds previously unregistered Top-Level Domains (such as .yourcompany ) to the public register. Therefore, Red Hat strongly recommends that you do not use a domain name that is not delegated to you, even on a private network, as this can result in a domain name that resolves differently depending on network configuration. As a result, network resources can become unavailable. Using domain names that are not delegated to you also makes DNSSEC more difficult to deploy and maintain, as domain name collisions require manual configuration to enable DNSSEC validation. See the ICANN FAQ on domain name collision for more information on this issue. 4.5.6. Understanding Trust Anchors In a hierarchical cryptographic system, a trust anchor is an authoritative entity which is assumed to be trustworthy. For example, in X.509 architecture, a root certificate is a trust anchor from which a chain of trust is derived. The trust anchor must be put in the possession of the trusting party beforehand to make path validation possible. In the context of DNSSEC, a trust anchor consists of a DNS name and public key (or hash of the public key) associated with that name. It is expressed as a base 64 encoded key. It is similar to a certificate in that it is a means of exchanging information, including a public key, which can be used to verify and authenticate DNS records. RFC 4033 defines a trust anchor as a configured DNSKEY RR or DS RR hash of a DNSKEY RR. A validating security-aware resolver uses this public key or hash as a starting point for building the authentication chain to a signed DNS response. In general, a validating resolver will have to obtain the initial values of its trust anchors through some secure or trusted means outside the DNS protocol. Presence of a trust anchor also implies that the resolver should expect the zone to which the trust anchor points to be signed. 4.5.7. Installing DNSSEC 4.5.7.1. 
Installing unbound In order to validate DNS using DNSSEC locally on a machine, it is necessary to install the DNS resolver unbound (or bind ). It is only necessary to install dnssec-trigger on mobile devices. For servers, unbound should be sufficient although a forwarding configuration for the local domain might be required depending on where the server is located (LAN or Internet). dnssec-trigger will currently only help with the global public DNS zone. NetworkManager , dhclient , and VPN applications can often gather the domain list (and nameserver list as well) automatically, but not dnssec-trigger nor unbound . To install unbound enter the following command as the root user: 4.5.7.2. Checking if unbound is Running To determine whether the unbound daemon is running, enter the following command: The systemctl status command will report unbound as Active: inactive (dead) if the unbound service is not running. 4.5.7.3. Starting unbound To start the unbound daemon for the current session, enter the following command as the root user: Run the systemctl enable command to ensure that unbound starts up every time the system boots: The unbound daemon allows configuration of local data or overrides using the following directories: The /etc/unbound/conf.d directory is used to add configurations for a specific domain name. This is used to redirect queries for a domain name to a specific DNS server. This is often used for sub-domains that only exist within a corporate WAN. The /etc/unbound/keys.d directory is used to add trust anchors for a specific domain name. This is required when an internal-only name is DNSSEC signed, but there is no publicly existing DS record to build a path of trust. Another use case is when an internal version of a domain is signed using a different DNSKEY than the publicly available name outside the corporate WAN. The /etc/unbound/local.d directory is used to add specific DNS data as a local override. This can be used to build blacklists or create manual overrides. This data will be returned to clients by unbound , but it will not be marked as DNSSEC signed. NetworkManager , as well as some VPN software, may change the configuration dynamically. These configuration directories contain commented out example entries. For further information see the unbound.conf(5) man page. 4.5.7.4. Installing Dnssec-trigger The dnssec-trigger application runs as a daemon, dnssec-triggerd . To install dnssec-trigger enter the following command as the root user: 4.5.7.5. Checking if the Dnssec-trigger Daemon is Running To determine whether dnssec-triggerd is running, enter the following command: The systemctl status command will report dnssec-triggerd as Active: inactive (dead) if the dnssec-triggerd daemon is not running. To start it for the current session enter the following command as the root user: Run the systemctl enable command to ensure that dnssec-triggerd starts up every time the system boots: 4.5.8. Using Dnssec-trigger The dnssec-trigger application has a GNOME panel utility for displaying DNSSEC probe results and for performing DNSSEC probe requests on demand. To start the utility, press the Super key to enter the Activities Overview, type DNSSEC and then press Enter . An icon resembling a ships anchor is added to the message tray at the bottom of the screen. Press the round blue notification icon in the bottom right of the screen to reveal it. Right click the anchor icon to display a pop-up menu. 
In normal operations unbound is used locally as the name server, and resolv.conf points to 127.0.0.1 . When you click OK on the Hotspot Sign-On panel this is changed. The DNS servers are queried from NetworkManager and put in resolv.conf . Now you can authenticate on the Hotspot's sign-on page. The anchor icon shows a big red exclamation mark to warn you that DNS queries are being made insecurely. When authenticated, dnssec-trigger should automatically detect this and switch back to secure mode, although in some cases it cannot and the user has to do this manually by selecting Reprobe . Dnssec-trigger does not normally require any user interaction. Once started, it works in the background and if a problem is encountered it notifies the user by means of a pop-up text box. It also informs unbound about changes to the resolv.conf file. 4.5.9. Using dig With DNSSEC To see whether DNSSEC is working, one can use various command line tools. The best tool to use is the dig command from the bind-utils package. Other tools that are useful are drill from the ldns package and unbound-host from the unbound package. The old DNS utilities nslookup and host are obsolete and should not be used. To send a query requesting DNSSEC data using dig , the option +dnssec is added to the command, for example: In addition to the A record, an RRSIG record is returned which contains the DNSSEC signature, as well as the inception time and expiration time of the signature. The unbound server indicated that the data was DNSSEC authenticated by returning the ad bit in the flags: section at the top. If DNSSEC validation fails, the dig command would return a SERVFAIL error: To request more information about the failure, DNSSEC checking can be disabled by specifying the +cd option to the dig command: Often, DNSSEC mistakes manifest themselves by bad inception or expiration time, although in this example, the people at www.dnssec-tools.org have mangled this RRSIG signature on purpose, which we would not be able to detect by looking at this output manually. The error will show in the output of systemctl status unbound and the unbound daemon logs these errors to syslog as follows: Aug 22 22:04:52 laptop unbound: [3065:0] info: validation failure badsign-a.test.dnssec-tools.org. A IN An example using unbound-host : 4.5.10. Setting up Hotspot Detection Infrastructure for Dnssec-trigger When connecting to a network, dnssec-trigger attempts to detect a Hotspot. A Hotspot is generally a device that forces user interaction with a web page before they can use the network resources. The detection is done by attempting to download a specific fixed web page with known content. If there is a Hotspot, then the content received will not be as expected. To set up a fixed web page with known content that can be used by dnssec-trigger to detect a Hotspot, proceed as follows: Set up a web server on some machine that is publicly reachable on the Internet. See the Web Servers chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide. Once you have the server running, publish a static page with known content on it. The page does not need to be a valid HTML page. For example, you could use a plain-text file named hotspot.txt that contains only the string OK . Assuming your server is located at example.com and you published your hotspot.txt file in the web server document_root /static/ sub-directory, then the address to your static web page would be example.com/static/hotspot.txt . 
See the DocumentRoot directive in the Web Servers chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide. Add the following line to the /etc/dnssec-trigger/dnssec-trigger.conf file: url: "http://example.com/static/hotspot.txt OK" This command adds a URL that is probed using HTTP (port 80). The first part is the URL that will be resolved and the page that will be downloaded. The second part of the command is the text string that the downloaded webpage is expected to contain. For more information on the configuration options see the man page dnssec-trigger.conf(8) . 4.5.11. Configuring DNSSEC Validation for Connection Supplied Domains By default, forward zones with proper nameservers are automatically added into unbound by dnssec-trigger for every domain provided by any connection, except Wi-Fi connections through NetworkManager . By default, all forward zones added into unbound are DNSSEC validated. The default behavior for validating forward zones can be altered, so that all forward zones will not be DNSSEC validated by default. To do this, change the validate_connection_provided_zones variable in the dnssec-trigger configuration file /etc/dnssec.conf . As root user, open and edit the line as follows: validate_connection_provided_zones=no The change is not done for any existing forward zones, but only for future forward zones. Therefore if you want to disable DNSSEC for the current provided domain, you need to reconnect. 4.5.11.1. Configuring DNSSEC Validation for Wi-Fi Supplied Domains Adding forward zones for Wi-Fi provided zones can be enabled. To do this, change the add_wifi_provided_zones variable in the dnssec-trigger configuration file, /etc/dnssec.conf . As root user, open and edit the line as follows: add_wifi_provided_zones=yes The change is not done for any existing forward zones, but only for future forward zones. Therefore, if you want to enable DNSSEC for the current Wi-Fi provided domain, you need to reconnect (restart) the Wi-Fi connection. Warning Turning on the addition of Wi-Fi provided domains as forward zones into unbound may have security implications such as: A Wi-Fi access point can intentionally provide you a domain through DHCP for which it does not have authority and route all your DNS queries to its DNS servers. If you have the DNSSEC validation of forward zones turned off , the Wi-Fi provided DNS servers can spoof the IP address for domain names from the provided domain without you knowing it. 4.5.12. Additional Resources The following are resources which explain more about DNSSEC. 4.5.12.1. Installed Documentation dnssec-trigger(8) man page - Describes command options for dnssec-triggerd , dnssec-trigger-control and dnssec-trigger-panel . dnssec-trigger.conf(8) man page - Describes the configuration options for dnssec-triggerd . unbound(8) man page - Describes the command options for unbound , the DNS validating resolver. unbound.conf(5) man page - Contains information on how to configure unbound . resolv.conf(5) man page - Contains information that is read by the resolver routines. 4.5.12.2. Online Documentation http://tools.ietf.org/html/rfc4033 RFC 4033 DNS Security Introduction and Requirements. http://www.dnssec.net/ A website with links to many DNSSEC resources. http://www.dnssec-deployment.org/ The DNSSEC Deployment Initiative, sponsored by the Department for Homeland Security, contains a lot of DNSSEC information and has a mailing list to discuss DNSSEC deployment issues. 
http://www.internetsociety.org/deploy360/dnssec/community/ - The Internet Society's "Deploy 360" initiative to stimulate and coordinate DNSSEC deployment is a good resource for finding communities and DNSSEC activities worldwide.

http://www.unbound.net/ - This website contains general information about the unbound DNS service.

http://www.nlnetlabs.nl/projects/dnssec-trigger/ - This website contains general information about dnssec-trigger.
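As noted in the sections above, changes to the forward zone validation settings apply only to forward zones added in the future, and the hotspot probe URL in /etc/dnssec-trigger/dnssec-trigger.conf is read when dnssec-triggerd starts. The following is a minimal sketch of applying such changes right away; it assumes the connection is managed by NetworkManager and uses a hypothetical connection name of "hotel-wifi" (check nmcli(1) and dnssec-trigger(8) for the exact syntax supported by your version):

~]# systemctl restart dnssec-triggerd.service
~]$ nmcli connection show --active
~]# nmcli connection down "hotel-wifi" && nmcli connection up "hotel-wifi"

Restarting the daemon should cause it to re-read its configuration and probe again; reconnecting the network connection causes the connection-provided forward zones to be added again under the new validation settings.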
[ "~]# yum install unbound", "~]USD systemctl status unbound unbound.service - Unbound recursive Domain Name Server Loaded: loaded (/usr/lib/systemd/system/unbound.service; disabled) Active: active (running) since Wed 2013-03-13 01:19:30 CET; 6h ago", "~]# systemctl start unbound", "~]# systemctl enable unbound", "~]# yum install dnssec-trigger", "~]USD systemctl status dnssec-triggerd systemctl status dnssec-triggerd.service dnssec-triggerd.service - Reconfigure local DNS(SEC) resolver on network change Loaded: loaded (/usr/lib/systemd/system/dnssec-triggerd.service; enabled) Active: active (running) since Wed 2013-03-13 06:10:44 CET; 1h 41min ago", "~]# systemctl start dnssec-triggerd", "~]# systemctl enable dnssec-triggerd", "~]USD dig +dnssec whitehouse.gov ; <<>> DiG 9.9.3-rl.13207.22-P2-RedHat-9.9.3-4.P2.el7 <<>> +dnssec whitehouse.gov ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21388 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;whitehouse.gov. IN A ;; ANSWER SECTION: whitehouse.gov. 20 IN A 72.246.36.110 whitehouse.gov. 20 IN RRSIG A 7 2 20 20130825124016 20130822114016 8399 whitehouse.gov. BB8VHWEkIaKpaLprt3hq1GkjDROvkmjYTBxiGhuki/BJn3PoIGyrftxR HH0377I0Lsybj/uZv5hL4UwWd/lw6Gn8GPikqhztAkgMxddMQ2IARP6p wbMOKbSUuV6NGUT1WWwpbi+LelFMqQcAq3Se66iyH0Jem7HtgPEUE1Zc 3oI= ;; Query time: 227 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Aug 22 22:01:52 EDT 2013 ;; MSG SIZE rcvd: 233", "~]USD dig badsign-a.test.dnssec-tools.org ; <<>> DiG 9.9.3-rl.156.01-P1-RedHat-9.9.3-3.P1.el7 <<>> badsign-a.test.dnssec-tools.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 1010 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;badsign-a.test.dnssec-tools.org. IN A ;; Query time: 1284 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Aug 22 22:04:52 EDT 2013 ;; MSG SIZE rcvd: 60]", "~]USD dig +cd +dnssec badsign-a.test.dnssec-tools.org ; <<>> DiG 9.9.3-rl.156.01-P1-RedHat-9.9.3-3.P1.el7 <<>> +cd +dnssec badsign-a.test.dnssec-tools.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26065 ;; flags: qr rd ra cd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; QUESTION SECTION: ;badsign-a.test.dnssec-tools.org. IN A ;; ANSWER SECTION: badsign-a.test.dnssec-tools.org. 49 IN A 75.119.216.33 badsign-a.test.dnssec-tools.org. 49 IN RRSIG A 5 4 86400 20130919183720 20130820173720 19442 test.dnssec-tools.org. E572dLKMvYB4cgTRyAHIKKEvdOP7tockQb7hXFNZKVbfXbZJOIDREJrr zCgAfJ2hykfY0yJHAlnuQvM0s6xOnNBSvc2xLIybJdfTaN6kSR0YFdYZ n2NpPctn2kUBn5UR1BJRin3Gqy20LZlZx2KD7cZBtieMsU/IunyhCSc0 kYw= ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Aug 22 22:06:31 EDT 2013 ;; MSG SIZE rcvd: 257", "~]USD unbound-host -C /etc/unbound/unbound.conf -v whitehouse.gov whitehouse.gov has address 184.25.196.110 (secure) whitehouse.gov has IPv6 address 2600:1417:11:2:8800::fc4 (secure) whitehouse.gov has IPv6 address 2600:1417:11:2:8000::fc4 (secure) whitehouse.gov mail is handled by 105 mail1.eop.gov. (secure) whitehouse.gov mail is handled by 110 mail5.eop.gov. (secure) whitehouse.gov mail is handled by 105 mail4.eop.gov. (secure) whitehouse.gov mail is handled by 110 mail6.eop.gov. 
(secure) whitehouse.gov mail is handled by 105 mail2.eop.gov. (secure) whitehouse.gov mail is handled by 105 mail3.eop.gov. (secure)" ]
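As an alternative to dig, the drill utility from the ldns package, mentioned in the section on using dig, can send a similar DNSSEC query. A minimal sketch, assuming drill is installed and that the local unbound resolver at 127.0.0.1 is queried; the -D option requests DNSSEC records (see drill(1) for the options supported by your version):

~]$ drill -D whitehouse.gov @127.0.0.1

As with dig, look for the ad flag in the header and an RRSIG record in the answer section to confirm that the response was validated.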
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Securing_DNS_Traffic_with_DNSSEC