title | content | commands | url |
---|---|---|---|
Chapter 17. Bean | Chapter 17. Bean Only producer is supported The Bean component binds beans to Camel message exchanges. 17.1. Dependencies When using bean with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-starter</artifactId> </dependency> 17.2. URI format Where beanID can be any string which is used to look up the bean in the Registry 17.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 17.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 17.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 17.4. Component Options The Bean component supports 4 options, which are listed below. Name Description Default Type cache (producer) Deprecated Use singleton option instead. true Boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean scope (producer) Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). 
This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. Enum values: Singleton Request Prototype Singleton BeanScope autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 17.5. Endpoint Options The Bean endpoint is configured using URI syntax: with the following path and query parameters: 17.5.1. Path Parameters (1 parameters) Name Description Default Type beanName (common) Required Sets the name of the bean to invoke. String 17.5.2. Query Parameters (5 parameters) Name Description Default Type cache (common) Deprecated Use scope option instead. Boolean method (common) Sets the name of the method to invoke on the bean. String scope (common) Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. Enum values: Singleton Request Prototype Singleton BeanScope lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean parameters (advanced) Used for configuring additional properties on the bean. Map 17.6. Examples The object instance that is used to consume messages must be explicitly registered with the Registry. For example, if you are using Spring you must define the bean in the Spring configuration XML file. You can also register beans manually via Camel's Registry with the bind method. 
Once an endpoint has been registered, you can build Camel routes that use it to process exchanges. A bean: endpoint cannot be defined as the input to the route; i.e. you cannot consume from it, you can only route from some inbound message Endpoint to the bean endpoint as output. So consider using a direct: or queue: endpoint as the input. You can use the createProxy() methods on ProxyHelper to create a proxy that will generate exchanges and send them to any endpoint: And the same route using XML DSL: <route> <from uri="direct:hello"/> <to uri="bean:bye"/> </route> 17.7. Bean as endpoint Camel also supports invoking Bean as an Endpoint. What happens is that when the exchange is routed to the myBean Camel will use the Bean Binding to invoke the bean. The source for the bean is just a plain POJO. Camel will use Bean Binding to invoke the sayHello method, by converting the Exchange's In body to the String type and storing the output of the method on the Exchange Out body. 17.8. Java DSL bean syntax Java DSL comes with syntactic sugar for the component. Instead of specifying the bean explicitly as the endpoint (i.e. to("bean:beanName") ) you can use the following syntax: // Send message to the bean endpoint // and invoke method resolved using Bean Binding. from("direct:start").bean("beanName"); // Send message to the bean endpoint // and invoke given method. from("direct:start").bean("beanName", "methodName"); Instead of passing name of the reference to the bean (so that Camel will lookup for it in the registry), you can specify the bean itself: // Send message to the given bean instance. from("direct:start").bean(new ExampleBean()); // Explicit selection of bean method to be invoked. from("direct:start").bean(new ExampleBean(), "methodName"); // Camel will create the instance of bean and cache it for you. from("direct:start").bean(ExampleBean.class); 17.9. Bean Binding How bean methods to be invoked are chosen (if they are not specified explicitly through the method parameter) and how parameter values are constructed from the Message are all defined by the Bean Binding mechanism which is used throughout all of the various Bean Integration mechanisms in Camel. 17.10. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.bean.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.bean.enabled Whether to enable auto configuration of the bean component. This is enabled by default. Boolean camel.component.bean.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.bean.scope Scope of bean. 
When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. BeanScope camel.component.class.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.class.enabled Whether to enable auto configuration of the class component. This is enabled by default. Boolean camel.component.class.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.class.scope Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. BeanScope camel.language.bean.enabled Whether to enable auto configuration of the bean language. This is enabled by default. Boolean camel.language.bean.scope Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). 
This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. So when using prototype scope then this depends on the bean registry implementation. Singleton String camel.language.bean.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.component.bean.cache Deprecated Use singleton option instead. true Boolean camel.component.class.cache Deprecated Use singleton option instead. true Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-starter</artifactId> </dependency>",
"bean:beanName[?options]",
"bean:beanName",
"<route> <from uri=\"direct:hello\"/> <to uri=\"bean:bye\"/> </route>",
"// Send message to the bean endpoint // and invoke method resolved using Bean Binding. from(\"direct:start\").bean(\"beanName\"); // Send message to the bean endpoint // and invoke given method. from(\"direct:start\").bean(\"beanName\", \"methodName\");",
"// Send message to the given bean instance. from(\"direct:start\").bean(new ExampleBean()); // Explicit selection of bean method to be invoked. from(\"direct:start\").bean(new ExampleBean(), \"methodName\"); // Camel will create the instance of bean and cache it for you. from(\"direct:start\").bean(ExampleBean.class);"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-bean-component-starter |
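The following standalone Java sketch ties the Bean examples above together: it registers a bean manually through the registry's bind method, routes from a direct: endpoint to the producer-only bean: endpoint, and selects the method to invoke with the method option. The HelloBean and BeanRouteSketch class names are invented for this illustration, not taken from the guide.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class BeanRouteSketch {

    // A plain POJO; Bean Binding converts the exchange body to String
    // and uses the return value as the new message body.
    public static class HelloBean {
        public String sayHello(String name) {
            return "Hello " + name;
        }
    }

    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        // Register the bean manually with Camel's registry under the name "bye".
        context.getRegistry().bind("bye", new HelloBean());

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // bean: endpoints are producer-only, so consume from direct:
                // and route to the bean endpoint as output.
                from("direct:hello").to("bean:bye?method=sayHello");
            }
        });

        context.start();
        String reply = context.createProducerTemplate()
                .requestBody("direct:hello", "Camel", String.class);
        System.out.println(reply); // Hello Camel
        context.stop();
    }
}
```

On Spring Boot you would more typically declare the POJO as a Spring bean and let the registry pick it up; the explicit bind call is used here only to keep the sketch self-contained.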
Chapter 2. Admin REST API | Chapter 2. Admin REST API Red Hat build of Keycloak comes with a fully functional Admin REST API with all features provided by the Admin Console. To invoke the API, you need to obtain an access token with the appropriate permissions. The required permissions are described in the Server Administration Guide. You can obtain a token by enabling authentication for your application using Red Hat build of Keycloak; see the Securing Applications and Services Guide. You can also use a direct access grant to obtain an access token. 2.1. Examples of using curl 2.1.1. Authenticating with a username and password Note The following example assumes that you created the user admin with the password password in the master realm as shown in the Getting Started Guide tutorial. Procedure Obtain an access token for the user in the realm master with username admin and password password : curl \ -d "client_id=admin-cli" \ -d "username=admin" \ -d "password=password" \ -d "grant_type=password" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" Note By default, this token expires in 1 minute. The result is a JSON document. Extract the value of the access_token property from it, then invoke the API by including that value in the Authorization header of your requests. The following example shows how to get the details of the master realm: curl \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ "http://localhost:8080/admin/realms/master" 2.1.2. Authenticating with a service account To authenticate against the Admin REST API using a client_id and a client_secret , perform this procedure. Procedure Make sure the client is configured as follows: client_id is a confidential client that belongs to the realm master client_id has the Service Accounts Enabled option enabled client_id has a custom "Audience" mapper with Included Client Audience set to security-admin-console Check that client_id has the role 'admin' assigned in the "Service Account Roles" tab. curl \ -d "client_id=<YOUR_CLIENT_ID>" \ -d "client_secret=<YOUR_CLIENT_SECRET>" \ -d "grant_type=client_credentials" \ "http://localhost:8080/realms/master/protocol/openid-connect/token" 2.2. Additional resources Server Administration Guide Securing Applications and Services Guide API Documentation | [
"curl -d \"client_id=admin-cli\" -d \"username=admin\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"",
"curl -H \"Authorization: bearer eyJhbGciOiJSUz...\" \"http://localhost:8080/admin/realms/master\"",
"curl -d \"client_id=<YOUR_CLIENT_ID>\" -d \"client_secret=<YOUR_CLIENT_SECRET>\" -d \"grant_type=client_credentials\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_developer_guide/admin_rest_api |
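For convenience, the two curl calls above can be chained in a small shell sketch. It assumes the admin-cli example from this chapter and that the jq utility is available for extracting the access_token property; adjust the host and credentials for your environment.

```bash
# Obtain a token and capture the access_token field from the JSON response.
TOKEN=$(curl -s \
  -d "client_id=admin-cli" \
  -d "username=admin" \
  -d "password=password" \
  -d "grant_type=password" \
  "http://localhost:8080/realms/master/protocol/openid-connect/token" \
  | jq -r '.access_token')

# The token is short-lived (about 1 minute by default), so use it immediately.
curl -s \
  -H "Authorization: bearer ${TOKEN}" \
  "http://localhost:8080/admin/realms/master" | jq .
```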
Chapter 2. Installing security updates | Chapter 2. Installing security updates In RHEL, you can install a specific security advisory and all available security updates. You can also configure the system to download and install security updates automatically. 2.1. Installing all available security updates To keep the security of your system up to date, you can install all currently available security updates by using the dnf utility. Prerequisites A Red Hat subscription is attached to the host. Procedure Install security updates by using the dnf utility: Without the --security parameter, dnf update installs all updates, including bug fixes and enhancements. Confirm and start the installation by pressing y : Optional: List processes that require a manual restart of the system after installing the updated packages: The command lists only processes that require a restart, and not services. That is, you cannot restart processes listed using the systemctl utility. For example, the bash process in the output is terminated when the user that owns this process logs out. 2.2. Installing a security update provided by a specific advisory In certain situations, you might want to install only specific updates. For example, if a specific service can be updated without scheduling downtime, you can install security updates for only this service, and install the remaining security updates later. Prerequisites A Red Hat subscription is attached to the host. You know the ID of the security advisory that you want to apply. For more information, see the Identifying the security advisory updates section. Procedure Install a specific advisory, for example: Alternatively, apply a specific advisory with minimal version changes by using the dnf upgrade-minimal command, for example: Confirm and start the installation by pressing y : Optional: List the processes that require a manual restart of the system after installing the updated packages: The command lists only processes that require a restart, and not services. This means that you cannot restart all processes listed by using the systemctl utility. For example, the bash process in the output is terminated when the user that owns this process logs out. 2.3. Installing security updates automatically You can configure your system so that it automatically downloads and installs all security updates. Prerequisites A Red Hat subscription is attached to the host. The dnf-automatic package is installed. Procedure In the /etc/dnf/automatic.conf file, in the [commands] section, make sure the upgrade_type option is set to either default or security : Enable and start the systemd timer unit: Verification Verify that the timer is enabled: Additional resources dnf-automatic(8) man page on your system | [
"dnf update --security",
"... Transaction Summary =========================================== Upgrade ... Packages Total download size: ... M Is this ok [y/d/N]: y",
"dnf needs-restarting 1107 : /usr/sbin/rsyslogd -n 1199 : -bash",
"dnf update --advisory=RHSA-2019:0997",
"dnf upgrade-minimal --advisory=RHSA-2019:0997",
"... Transaction Summary =========================================== Upgrade ... Packages Total download size: ... M Is this ok [y/d/N]: y",
"dnf needs-restarting 1107 : /usr/sbin/rsyslogd -n 1199 : -bash",
"What kind of upgrade to perform: default = all available upgrades security = only the security upgrades upgrade_type = security",
"systemctl enable --now dnf-automatic-install.timer",
"systemctl status dnf-automatic-install.timer"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_and_monitoring_security_updates/installing-security-updates_managing-and-monitoring-security-updates |
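The following shell sketch recaps the procedures above end to end: it lists the available security advisories, applies one with minimal version changes, and then checks which processes need a manual restart. The advisory ID RHSA-2019:0997 is the example used in this chapter; substitute the advisory you identified.

```bash
# List available security advisories for this host.
dnf updateinfo list --security

# Apply a single advisory with minimal version changes.
dnf upgrade-minimal --advisory=RHSA-2019:0997

# Check which running processes still use the old binaries or libraries.
dnf needs-restarting
```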
18.5. Configuring Maps | 18.5. Configuring Maps Configuring maps not only creates the maps, it associates mount points through the keys and it assigns mount options that should be used when the directory is accessed. IdM supports both direct and indirect maps. Note Different clients can use different map sets. Map sets use a tree structure, so maps cannot be shared between locations. Important Identity Management does not set up or configure autofs. That must be done separately. Identity Management works with an existing autofs deployment. 18.5.1. Configuring Direct Maps Direct maps define exact locations, meaning absolute paths, to the file mount. In the location entry, a direct map is identified by the preceding forward slash: 18.5.1.1. Configuring Direct Maps from the Web UI Click the Policy tab. Click the Automount subtab. Click name of the automount location to which to add the map. In the Automount Maps tab, click the + Add link to create a new map. In pop-up window, select the Direct radio button and enter the name of the new map. In the Automount Keys tab, click the + Add link to create a new key for the map. Enter the mount point. The key defines the actual mount point in the key name. The Info field sets the network location of the directory, as well as any mount options to use. Click the Add button to save the new key. 18.5.1.2. Configuring Direct Maps from the Command Line The key defines the actual mount point (in the key name) and any options. A map is a direct or indirect map based on the format of its key. Each location is created with an auto.direct item. The simplest configuration is to define a direct mapping by adding an automount key the existing direct map entry. It is also possible to create different direct map entries. Add the key for the direct map to the location's auto.direct file. The --key option identifies the mount point, and --info gives the network location of the directory, as well as any mount options to use. For example: Mount options are described in the mount manpage, http://linux.die.net/man/8/mount . On Solaris, add the direct map and key using the ldapclient command to add the LDAP entry directly: 18.5.2. Configuring Indirect Maps An indirect map essentially specifies a relative path for maps. A parent entry sets the base directory for all of the indirect maps. The indirect map key sets a sub directory; whenever the indirect map location is loaded, the key is appended to that base directory. For example, if the base directory is /docs and the key is man , then the map is /docs/man . 18.5.2.1. Configuring Indirect Maps from the Web UI Click the Policy tab. Click the Automount subtab. Click name of the automount location to which to add the map. In the Automount Maps tab, click the + Add link to create a new map. In pop-up window, select the Indirect radio button and enter the required information for the indirect map: The name of the new map The mount point. The Mount field sets the base directory to use for all the indirect map keys. Optionally, a parent map. The default parent is auto.master , but if another map exists which should be used, that can be specified in the Parent Map field. Click the Add button to save the new key. 18.5.2.2. Configuring Indirect Maps from the Command Line The primary difference between a direct map and an indirect map is that there is no forward slash in front of an indirect key. Create an indirect map to set the base entry using the automountmap-add-indirect command. 
The --mount option sets the base directory to use for all the indirect map keys. The default parent entry is auto.master , but if another map exists which should be used, that can be specified using the --parentmap option. For example: Add the indirect key for the mount location: To verify the configuration, check the location file list using automountlocation-tofiles : On Solaris, add the indirect map using the ldapclient command to add the LDAP entry directly: 18.5.3. Importing Automount Maps If there are existing automount maps, these can be imported into the IdM automount configuration. The only required information is the IdM automount location and the full path and name of the map file. The --continuous option tells the automountlocation-import command to continue through the map file, even if the command encounters errors. For example: | [
"--------------------------- /etc/auto.direct: /shared/man server.example.com:/shared/man",
"ipa automountkey-add raleigh auto.direct --key=/share --info=\"ro,soft,ipaserver.example.com:/home/share\" Key: /share Mount information: ro,soft,ipaserver.example.com:/home/share",
"ldapclient -a serviceSearchDescriptor=auto_direct:automountMapName=auto.direct,cn= location ,cn=automount,dc=example,dc=com?one",
"--------------------------- /etc/auto.share: man ipa.example.com:/docs/man ---------------------------",
"ipa automountmap-add-indirect location mapName --mount= directory [--parentmap= mapName ]",
"ipa automountmap-add-indirect raleigh auto.share --mount=/share -------------------------------- Added automount map \"auto.share\" --------------------------------",
"ipa automountkey-add raleigh auto.share --key=docs --info=\"ipa.example.com:/export/docs\" ------------------------- Added automount key \"docs\" ------------------------- Key: docs Mount information: ipa.example.com:/export/docs",
"ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct /share /etc/auto.share --------------------------- /etc/auto.direct: --------------------------- /etc/auto.share: man ipa.example.com:/export/docs",
"ldapclient -a serviceSearchDescriptor=auto_share:automountMapName=auto.share,cn= location ,cn=automount,dc=example,dc=com?one",
"ipa automountlocation-import location map_file [--continuous]",
"ipa automountlocation-import raleigh /etc/custom.map"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/configuring-maps |
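Pulling the indirect-map commands above together, the following sketch sets up the raleigh example from scratch. The automountlocation-add step is needed only if the location does not already exist, and the host names and paths are the same placeholders used throughout this section.

```bash
# Create the automount location (skip if it already exists).
ipa automountlocation-add raleigh

# Add an indirect map mounted under /share, then a key inside it.
ipa automountmap-add-indirect raleigh auto.share --mount=/share
ipa automountkey-add raleigh auto.share --key=docs \
    --info="ipa.example.com:/export/docs"

# Review the autofs files that the location now represents.
ipa automountlocation-tofiles raleigh
```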
Chapter 29. Schedules | Chapter 29. Schedules From the navigation panel, click Views Schedules to access your configured schedules. The schedules list can be sorted by any of the attributes from each column using the directional arrows. You can also search by name, date, or the name of the month in which a schedule runs. Each schedule has a corresponding Actions column with options to enable or disable that schedule using the On or Off toggle next to the schedule name. Click the Edit icon to edit a schedule. If you are setting up a template, a project, or an inventory source, click the Schedules tab to configure schedules for these resources. When you create schedules, they are listed with the following details: Name Click the schedule name to open its details. Type This identifies whether the schedule is associated with a source control update or a system-managed job schedule. Next run The next scheduled run of this task. 29.1. Adding a new schedule You can only create schedules from a template, project, or inventory source, and not directly on the main Schedules screen. To create a new schedule: Procedure Click the Schedules tab of the resource that you are configuring. This can be a template, project, or inventory source. Click Add . This opens the Create New Schedule window. Enter the appropriate details into the following fields: Name : Enter the name. Optional: Description : Enter a description. Start date/time : Enter the date and time to start the schedule. Local time zone : The start time that you enter must be in this time zone. Repeat frequency : Appropriate scheduling options are displayed depending on the frequency you select. The Schedule Details are displayed when you establish a schedule, enabling you to review the schedule settings and a list of the scheduled occurrences in the selected Local Time Zone . Important Jobs are scheduled in UTC. Repeating jobs that run at a specific time of day can move relative to a local time zone when Daylight Saving Time shifts occur. The system resolves the local time zone-based time to UTC when the schedule is saved. To ensure your schedules are created correctly, set your schedules in UTC time. Click Save . Use the On or Off toggle to stop an active schedule or activate a stopped schedule. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/controller-schedules |
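Schedules can also be created through the automation controller REST API rather than the web UI. The following curl sketch is hypothetical: it assumes a job template with ID 42, basic authentication, and the standard /api/v2/ endpoints of your controller host, and it pins the start time in UTC (the trailing Z in DTSTART) in line with the Important note above.

```bash
# Hypothetical sketch: create a daily schedule on job template 42.
curl -k -X POST \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Nightly run",
        "rrule": "DTSTART:20240101T020000Z RRULE:FREQ=DAILY;INTERVAL=1"
      }' \
  "https://controller.example.com/api/v2/job_templates/42/schedules/"
```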
Chapter 6. Using SOAP 1.1 Messages | Chapter 6. Using SOAP 1.1 Messages Abstract Apache CXF provides a tool to generate a SOAP 1.1 binding which does not use any SOAP headers. However, you can add SOAP headers to your binding using any text or XML editor. 6.1. Adding a SOAP 1.1 Binding Using wsdl2soap To generate a SOAP 1.1 binding using wsdl2soap, use the following command: wsdl2soap -i port-type-name -b binding-name -d output-directory -o output-file -n soap-body-namespace -style (document/rpc) -use (literal/encoded) -v -verbose -quiet wsdlurl Note To use wsdl2soap, you need to download the Apache CXF distribution. The command has the following options: Option Interpretation -i port-type-name Specifies the portType element for which a binding is generated. wsdlurl The path and name of the WSDL file containing the portType element definition. The tool has the following optional arguments: Option Interpretation -b binding-name Specifies the name of the generated SOAP binding. -d output-directory Specifies the directory to place the generated WSDL file. -o output-file Specifies the name of the generated WSDL file. -n soap-body-namespace Specifies the SOAP body namespace when the style is RPC. -style (document/rpc) Specifies the encoding style (document or RPC) to use in the SOAP binding. The default is document. -use (literal/encoded) Specifies the binding use (encoded or literal) to use in the SOAP binding. The default is literal. -v Displays the version number for the tool. -verbose Displays comments during the code generation process. -quiet Suppresses comments during the code generation process. The -i port-type-name and wsdlurl arguments are required. If the -style rpc argument is specified, the -n soap-body-namespace argument is also required. All other arguments are optional and may be listed in any order. Important wsdl2soap does not support the generation of document/encoded SOAP bindings. Example If your system has an interface that takes orders and offers a single operation to process the orders, it is defined in a WSDL fragment similar to the one shown in Example 6.1, "Ordering System Interface" . Example 6.1. Ordering System Interface The SOAP binding generated for orderWidgets is shown in Example 6.2, "SOAP 1.1 Binding for orderWidgets " . Example 6.2. SOAP 1.1 Binding for orderWidgets This binding specifies that messages are sent using the document/literal message style. 6.2. Adding SOAP Headers to a SOAP 1.1 Binding Overview SOAP headers are defined by adding soap:header elements to your default SOAP 1.1 binding. The soap:header element is an optional child of the input , output , and fault elements of the binding. The SOAP header becomes part of the parent message. A SOAP header is defined by specifying a message and a message part. Each SOAP header can only contain one message part, but you can insert as many SOAP headers as needed. Syntax The syntax for defining a SOAP header is shown in Example 6.3, "SOAP Header Syntax" . The message attribute of soap:header is the qualified name of the message from which the part being inserted into the header is taken. The part attribute is the name of the message part inserted into the SOAP header. Because SOAP headers are always document style, the WSDL message part inserted into the SOAP header must be defined using an element. Together, the message and the part attributes fully describe the data to insert into the SOAP header. Example 6.3.
SOAP Header Syntax As well as the mandatory message and part attributes, soap:header also supports the namespace , the use , and the encodingStyle attributes. These attributes function the same for soap:header as they do for soap:body . Splitting messages between body and header The message part inserted into the SOAP header can be any valid message part from the contract. It can even be a part from the parent message which is being used as the SOAP body. Because it is unlikely that you would want to send information twice in the same message, the SOAP binding provides a means for specifying the message parts that are inserted into the SOAP body. The soap:body element has an optional attribute, parts , that takes a space delimited list of part names. When parts is defined, only the message parts listed are inserted into the SOAP body. You can then insert the remaining parts into the SOAP header. Note When you define a SOAP header using parts of the parent message, Apache CXF automatically fills in the SOAP headers for you. Example Example 6.4, "SOAP 1.1 Binding with a SOAP Header" shows a modified version of the orderWidgets service shown in Example 6.1, "Ordering System Interface" . This version has been modified so that each order has an xsd:base64binary value placed in the SOAP header of the request and response. The SOAP header is defined as being the keyVal part from the widgetKey message. In this case you are responsible for adding the SOAP header to your application logic because it is not part of the input or output message. Example 6.4. SOAP 1.1 Binding with a SOAP Header You can also modify Example 6.4, "SOAP 1.1 Binding with a SOAP Header" so that the header value is a part of the input and output messages. | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"widgetOrderForm.wsdl\" targetNamespace=\"http://widgetVendor.com/widgetOrderForm\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:tns=\"http://widgetVendor.com/widgetOrderForm\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://widgetVendor.com/types/widgetTypes\" xmlns:SOAP-ENC=\"http://schemas.xmlsoap.org/soap/encoding/\"> <message name=\"widgetOrder\"> <part name=\"numOrdered\" type=\"xsd:int\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"price\" type=\"xsd:float\"/> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\"/> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> <fault message=\"tns:badSize\" name=\"sizeFault\"/> </operation> </portType> </definitions>",
"<binding name=\"orderWidgetsBinding\" type=\"tns:orderWidgets\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"placeWidgetOrder\"> <soap:operation soapAction=\"\" style=\"document\"/> <input name=\"order\"> <soap:body use=\"literal\"/> </input> <output name=\"bill\"> <soap:body use=\"literal\"/> </output> <fault name=\"sizeFault\"> <soap:body use=\"literal\"/> </fault> </operation> </binding>",
"<binding name=\"headwig\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"weave\"> <soap:operation soapAction=\"\" style=\"document\"/> <input name=\"grain\"> <soap:body ... /> <soap:header message=\" QName \" part=\" partName \"/> </input> </binding>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"widgetOrderForm.wsdl\" targetNamespace=\"http://widgetVendor.com/widgetOrderForm\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:tns=\"http://widgetVendor.com/widgetOrderForm\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://widgetVendor.com/types/widgetTypes\" xmlns:SOAP-ENC=\"http://schemas.xmlsoap.org/soap/encoding/\"> <types> <schema targetNamespace=\"http://widgetVendor.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\"> <element name=\"keyElem\" type=\"xsd:base64Binary\"/> </schema> </types> <message name=\"widgetOrder\"> <part name=\"numOrdered\" type=\"xsd:int\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"price\" type=\"xsd:float\"/> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\"/> </message> <message name=\"widgetKey\"> <part name=\"keyVal\" element=\"xsd1:keyElem\"/> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> <fault message=\"tns:badSize\" name=\"sizeFault\"/> </operation> </portType> <binding name=\"orderWidgetsBinding\" type=\"tns:orderWidgets\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"placeWidgetOrder\"> <soap:operation soapAction=\"\" style=\"document\"/> <input name=\"order\"> <soap:body use=\"literal\"/> <soap:header message=\"tns:widgetKey\" part=\"keyVal\"/> </input> <output name=\"bill\"> <soap:body use=\"literal\"/> <soap:header message=\"tns:widgetKey\" part=\"keyVal\"/> </output> <fault name=\"sizeFault\"> <soap:body use=\"literal\"/> </fault> </operation> </binding> </definitions>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/fusecxfsoap11 |
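A concrete invocation of the wsdl2soap synopsis above, using the names from this chapter: it assumes the orderWidgets portType from Example 6.1 is saved as widgetOrderForm.wsdl in the current directory and produces a document/literal binding like the one in Example 6.2. The output directory and file name are arbitrary choices for the example.

```bash
# Generate a SOAP 1.1 binding for the orderWidgets portType.
wsdl2soap -i orderWidgets \
          -b orderWidgetsBinding \
          -d generated \
          -o widgetOrderFormBinding.wsdl \
          widgetOrderForm.wsdl
```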
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/tuning_performance_in_identity_management/proc_providing-feedback-on-red-hat-documentation_tuning-performance-in-idm |
24.3. Virtual Hosts Settings | 24.3. Virtual Hosts Settings Virtual hosts allow you to run different servers for different IP addresses, different host names, or different ports on the same machine. For example, you can run the website for http://www.example.com and http://www.anotherexample.com on the same Web server using virtual hosts. This option corresponds to the <VirtualHost> directive for the default virtual host and IP based virtual hosts. It corresponds to the <NameVirtualHost> directive for a name based virtual host. The directives set for a virtual host only apply to that particular virtual host. If a directive is set server-wide using the Edit Default Settings button and not defined within the virtual host settings, the default setting is used. For example, you can define a Webmaster email address in the Main tab and not define individual email addresses for each virtual host. The HTTP Configuration Tool includes a default virtual host as shown in Figure 24.8, "Virtual Hosts" . Figure 24.8. Virtual Hosts http://httpd.apache.org/docs-2.0/vhosts/ and the Apache HTTP Server documentation on your machine provide more information about virtual hosts. 24.3.1. Adding and Editing a Virtual Host To add a virtual host, click the Virtual Hosts tab and then click the Add button. You can also edit a virtual host by selecting it and clicking the Edit button. 24.3.1.1. General Options The General Options settings only apply to the virtual host that you are configuring. Set the name of the virtual host in the Virtual Host Name text area. This name is used by HTTP Configuration Tool to distinguish between virtual hosts. Set the Document Root Directory value to the directory that contains the root document (such as index.html) for the virtual host. This option corresponds to the DocumentRoot directive within the < VirtualHost > directive. The default DocumentRoot is /var/www/html . The Webmaster email address corresponds to the ServerAdmin directive within the VirtualHost directive. This email address is used in the footer of error pages if you choose to show a footer with an email address on the error pages. In the Host Information section, choose Default Virtual Host , IP based Virtual Host , or Name based Virtual Host . Default Virtual Host You should only configure one default virtual host (remember that there is one setup by default). The default virtual host settings are used when the requested IP address is not explicitly listed in another virtual host. If there is no default virtual host defined, the main server settings are used. IP based Virtual Host If you choose IP based Virtual Host , a window appears to configure the <VirtualHost> directive based on the IP address of the server. Specify this IP address in the IP address field. To specify multiple IP addresses, separate each IP address with spaces. To specify a port, use the syntax IP Address:Port . Use "colon, asterisk" ( :* ) to configure all ports for the IP address. Specify the host name for the virtual host in the Server Host Name field. Name based Virtual Host If you choose Name based Virtual Host , a window appears to configure the NameVirtualHost directive based on the host name of the server. Specify the IP address in the IP address field. To specify multiple IP addresses, separate each IP address with spaces. To specify a port, use the syntax IP Address:Port . Use "colon, asterisk" ( :* ) to configure all ports for the IP address. Specify the host name for the virtual host in the Server Host Name field. 
In the Aliases section, click Add to add a host name alias. Adding an alias here adds a ServerAlias directive within the NameVirtualHost directive. 24.3.1.2. SSL Note You cannot use name based virtual hosts with SSL because the SSL handshake (when the browser accepts the secure Web server's certificate) occurs before the HTTP request, which identifies the appropriate name based virtual host. If you plan to use name-based virtual hosts, remember that they only work with your non-secure Web server. Figure 24.9. SSL Support If an Apache HTTP Server is not configured with SSL support, communications between an Apache HTTP Server and its clients are not encrypted. This is appropriate for websites without personal or confidential information. For example, an open source website that distributes open source software and documentation has no need for secure communications. However, an ecommerce website that requires credit card information should use the Apache SSL support to encrypt its communications. Enabling Apache SSL support enables the use of the mod_ssl security module. To enable it through the HTTP Configuration Tool , you must allow access through port 443 under the Main tab => Available Addresses . Refer to Section 24.1, "Basic Settings" for details. Then, select the virtual host name in the Virtual Hosts tab, click the Edit button, choose SSL from the left-hand menu, and check the Enable SSL Support option as shown in Figure 24.9, "SSL Support" . The SSL Configuration section is pre-configured with the dummy digital certificate. The digital certificate provides authentication for your secure Web server and identifies the secure server to client Web browsers. You must purchase your own digital certificate. Do not use the dummy one provided for your website. For details on purchasing a CA-approved digital certificate, refer to the Chapter 25, Apache HTTP Secure Server Configuration . 24.3.1.3. Additional Virtual Host Options The Site Configuration , Environment Variables , and Directories options for the virtual hosts are the same directives that you set when you clicked the Edit Default Settings button, except the options set here are for the individual virtual hosts that you are configuring. Refer to Section 24.2, "Default Settings" for details on these options. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/HTTPD_Configuration-Virtual_Hosts_Settings |
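For reference, the settings described above map onto ordinary httpd.conf directives. The following is a minimal sketch of two name-based virtual hosts on the same address; the IP address, host names, email addresses, and document roots are placeholders, and the Apache 2.0 NameVirtualHost directive shown here is what the tool manages for you.

```apache
NameVirtualHost 192.168.1.10:80

<VirtualHost 192.168.1.10:80>
    ServerName www.example.com
    ServerAlias example.com
    ServerAdmin webmaster@example.com
    DocumentRoot /var/www/html/example
</VirtualHost>

<VirtualHost 192.168.1.10:80>
    ServerName www.anotherexample.com
    ServerAdmin webmaster@anotherexample.com
    DocumentRoot /var/www/html/anotherexample
</VirtualHost>
```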
Chapter 26. Storage [operator.openshift.io/v1] | Chapter 26. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 26.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 26.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides vsphereStorageDriver string VSphereStorageDriver indicates the storage driver to use on VSphere clusters. Once this field is set to CSIWithMigrationDriver, it can not be changed. If this is empty, the platform will choose a good default, which may change over time without notice. The current default is LegacyDeprecatedInTreeDriver. DEPRECATED: This field will be removed in a future release. 26.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. 
generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 26.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 26.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 26.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 26.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 26.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/storages DELETE : delete collection of Storage GET : list objects of kind Storage POST : create a Storage /apis/operator.openshift.io/v1/storages/{name} DELETE : delete a Storage GET : read the specified Storage PATCH : partially update the specified Storage PUT : replace the specified Storage /apis/operator.openshift.io/v1/storages/{name}/status GET : read status of the specified Storage PATCH : partially update status of the specified Storage PUT : replace status of the specified Storage 26.2.1. /apis/operator.openshift.io/v1/storages Table 26.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Storage Table 26.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 26.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Storage Table 26.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 26.5. HTTP responses HTTP code Reponse body 200 - OK StorageList schema 401 - Unauthorized Empty HTTP method POST Description create a Storage Table 26.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.7. Body parameters Parameter Type Description body Storage schema Table 26.8. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 202 - Accepted Storage schema 401 - Unauthorized Empty 26.2.2. /apis/operator.openshift.io/v1/storages/{name} Table 26.9. Global path parameters Parameter Type Description name string name of the Storage Table 26.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Storage Table 26.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. 
Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 26.12. Body parameters Parameter Type Description body DeleteOptions schema Table 26.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Storage Table 26.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 26.15. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Storage Table 26.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.17. Body parameters Parameter Type Description body Patch schema Table 26.18. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Storage Table 26.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.20. Body parameters Parameter Type Description body Storage schema Table 26.21. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty 26.2.3. /apis/operator.openshift.io/v1/storages/{name}/status Table 26.22. Global path parameters Parameter Type Description name string name of the Storage Table 26.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Storage Table 26.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 26.25. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Storage Table 26.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.27. Body parameters Parameter Type Description body Patch schema Table 26.28. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Storage Table 26.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.30. Body parameters Parameter Type Description body Storage schema Table 26.31. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/storage-operator-openshift-io-v1 |
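The endpoints above can be exercised with standard cluster tooling. The following is only a sketch using the oc client; it assumes cluster-admin credentials and that the Storage resource in your cluster uses the conventional singleton name cluster, so confirm the actual name with the list call before relying on it:

# List objects of kind Storage through the raw API path documented in this chapter
oc get --raw /apis/operator.openshift.io/v1/storages

# Read the specified Storage and inspect the status fields described above
oc get storages.operator.openshift.io cluster -o jsonpath='{.status.conditions}'

# Partially update (PATCH) the specified Storage; the managementState field is used
# here purely as an illustrative payload
oc patch storages.operator.openshift.io cluster --type=merge -p '{"spec":{"managementState":"Managed"}}'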
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.432_release_notes/openjdk8-temurin-support-policy |
Chapter 5. Creating Cross-forest Trusts with Active Directory and Identity Management | Chapter 5. Creating Cross-forest Trusts with Active Directory and Identity Management This chapter describes creating cross-forest trusts between Active Directory and Identity Management. Of the two methods for indirectly integrating Identity Management and Active Directory (AD) environments, a cross-forest trust is the recommended one. The other method is synchronization. If you are unsure which method to choose for your environment, read Section 1.3, "Indirect Integration". Kerberos implements the concept of a trust. In a trust, a principal from one Kerberos realm can request a ticket to a service in another Kerberos realm. Using this ticket, the principal can authenticate against resources on machines belonging to the other realm. Kerberos also has the ability to create a relationship between two otherwise separate Kerberos realms: a cross-realm trust. Realms that are part of a trust use a shared pair of a ticket and key; a member of one realm then counts as a member of both realms. Red Hat Identity Management supports configuring a cross-forest trust between an IdM domain and an Active Directory domain. 5.1. Introduction to Cross-forest Trusts A Kerberos realm only concerns authentication. Other services and protocols are involved in complementing identity and authorization for resources running on the machines in the Kerberos realm. As such, establishing a Kerberos cross-realm trust is not enough to allow users from one realm to access resources in the other realm; support is required at other levels of communication as well. 5.1.1. The Architecture of a Trust Relationship Both Active Directory and Identity Management manage a variety of core services such as Kerberos, LDAP, DNS, or certificate services. To transparently integrate these two diverse environments, all core services must interact seamlessly with one another. Active Directory Trusts, Forests, and Cross-forest Trusts Kerberos cross-realm trust plays an important role in authentication between Active Directory environments. All activities to resolve user and group names in a trusted AD domain require authentication, regardless of how access is performed: using the LDAP protocol or as part of the Distributed Computing Environment/Remote Procedure Calls (DCE/RPC) on top of the Server Message Block (SMB) protocol. Because there are more protocols involved in organizing access between two different Active Directory domains, the trust relationship has a more generic name, an Active Directory trust. Multiple AD domains can be organized together into an Active Directory forest. The root domain of the forest is the first domain created in the forest. An Identity Management domain cannot be part of an existing AD forest; thus, it is always seen as a separate forest. When a trust relationship is established between two separate forest root domains, allowing users and services from different AD forests to communicate, the trust is called an Active Directory cross-forest trust. Trust Flow and One-way Trusts A trust establishes an access relationship between two domains. Active Directory environments can be complex, so there are different possible types and arrangements for Active Directory trusts, between child domains, root domains, or forests. A trust is a path from one domain to another. The way that identities and information move between the domains is called a trust flow. The trusted domain contains users, and the trusting domain allows access to resources.
In a one-way trust, trust flows only in one direction: users can access the trusting domain's resources but users in the trusting domain cannot access resources in the trusted domain. In Figure 5.1, "One-way Trust" , Domain A is trusted by Domain B, but Domain B is not trusted by Domain A. Figure 5.1. One-way Trust IdM allows the administrator to configure both one-way and two-way trusts. For details, see Section 5.1.4, "One-Way and Two-Way Trusts" . Transitive and Non-transitive Trusts Trusts can be transitive so that a domain trusts another domain and any other domain trusted by that second domain. Figure 5.2. Transitive Trusts Trusts can also be non-transitive which means the trust is limited only to the explicitly included domains. Cross-forest Trust in Active Directory and Identity Management Within an Active Directory forest, trust relationships between domains are normally two-way and transitive by default. Because trust between two AD forests is a trust between two forest root domains, it can also be two-way or one-way. The transitivity of the cross-forest trust is explicit: any domain trust within an AD forest that leads to the root domain of the forest is transitive over the cross-forest trust. However, separate cross-forest trusts are not transitive. An explicit cross-forest trust must be established between each AD forest root domain to another AD forest root domain. From the perspective of AD, Identity Management represents a separate AD forest with a single AD domain. When cross-forest trust between an AD forest root domain and an IdM domain is established, users from the AD forest domains can interact with Linux machines and services from the IdM domain. Figure 5.3. Trust Direction 5.1.2. Active Directory Security Objects and Trust Active Directory Global Catalog The global catalog contains information about objects of an Active Directory. It stores a full copy of objects within its own domain. From objects of other domains in the Active Directory forest, only a partial copy of the commonly most searched attributes is stored in the global catalog. Additionally, some types of groups are only valid within a specific scope and might not be part of the global catalog. Note that the cross-forest trust context is wider than a single domain. Therefore, some of these server-local or domain-local security group memberships from a trusted forest might not be visible to IdM servers. Global Catalog and POSIX Attributes Active Directory does not replicate POSIX attributes with its default settings. If it is required to use POSIX attributes that are defined in AD Red Hat strongly recommends to replicate them to the global catalog service. 5.1.3. Trust Architecture in IdM On the Identity Management side, the IdM server has to be able to recognize Active Directory identities and appropriately process their group membership for access controls. The Microsoft PAC (MS-PAC, Privilege Account Certificate) contains the required information about the user; their security ID, domain user name, and group memberships. Identity Management has two components to analyze data in the PAC on the Kerberos ticket: SSSD, to perform identity lookups on Active Directory and to retrieve user and group security identifiers (SIDs) for authorization. SSSD also caches user, group, and ticket information for users and maps Kerberos and DNS domains, Identity Management (Linux domain management), to associate the Active Directory user with an IdM group for IdM policies and access. 
Note Access control rules and policies for Linux domain administration, such as SELinux, sudo, and host-based access controls, are defined and applied through Identity Management. Any access control rules set on the Active Directory side are not evaluated or used by IdM; the only Active Directory configuration which is relevant is group membership. Trusts with Different Active Directory Forests IdM can also be part of trust relationships with different AD forests. Once a trust is established, additional trusts with other forests can be added later, following the same commands and procedures. IdM can trust multiple entirely unrelated forests at the same time, allowing users from such unrelated AD forests access to resources in the same shared IdM domain. 5.1.3.1. Active Directory PACs and IdM Tickets Group information in Active Directory is stored in a list of identifiers in the Privilege Attribute Certificate (MS-PAC or PAC) data set. The PAC contains various authorization information, such as group membership or additional credentials information. It also includes security identifiers (SIDs) of users and groups in the Active Directory domain. SIDs are identifiers assigned to Active Directory users and groups when they are created. In trust environments, group members are identified by SIDs, rather than by names or DNs. A PAC is embedded in the Kerberos service request ticket for Active Directory users as a way of identifying the entity to other Windows clients and servers in the Windows domain. IdM maps the group information in the PAC to the Active Directory groups and then to the corresponding IdM groups to determine access. When an Active Directory user requests a ticket for a service on IdM resources, the process goes as follows: The request for a service contains the PAC of the user. The IdM Kerberos Distribution Centre (KDC) analyzes the PAC by comparing the list of Active Directory groups to memberships in IdM groups. For SIDs of the Kerberos principal defined in the MS-PAC, the IdM KDC evaluates external group memberships defined in the IdM LDAP. If additional mappings are available for an SID, the MS-PAC record is extended with other SIDs of the IdM groups to which the SID belongs. The resulting MS-PAC is signed by the IdM KDC. The service ticket is returned to the user with the updated PAC signed by the IdM KDC. Users belonging to AD groups known to the IdM domain can now be recognized by SSSD running on the IdM clients based on the MS-PAC content of the service ticket. This allows to reduce identity traffic to discover group memberships by the IdM clients. When the IdM client evaluates the service ticket, the process includes the following steps: The Kerberos client libraries used in the evaluation process send the PAC data to the SSSD PAC responder. The PAC responder verifies the group SIDs in the PAC and adds the user to the corresponding groups in the SSSD cache. SSSD stores multiple TGTs and tickets for each user as new services are accessed. Users belonging to the verified groups can now access the required services on the IdM side. 5.1.3.2. Active Directory Users and Identity Management Groups When managing Active Directory users and groups, you can add individual AD users and whole AD groups to Identity Management groups. For a description of how to configure IdM groups for AD users, see Section 5.3.3, "Creating IdM Groups for Active Directory Users" . 
Non-POSIX External Groups and SID Mapping Group membership in the IdM LDAP is expressed by specifying a distinguished name (DN) of an LDAP object that is a member of a group. AD entries are not synchronized or copied over to IdM, which means that AD users and groups have no LDAP objects in the IdM LDAP. Therefore, they cannot be directly used to express group membership in the IdM LDAP. For this reason, IdM creates non-POSIX external groups : proxy LDAP objects that contain references to SIDs of AD users and groups as strings. Non-POSIX external groups are then referenced as normal IdM LDAP objects to signify group membership for AD users and groups in IdM. SIDs of non-POSIX external groups are processed by SSSD; SSSD maps SIDs of groups to which an AD user belongs to POSIX groups in IdM. The SIDs on the AD side are associated with user names. When the user name is used to access IdM resources, SSSD in IdM resolves that user name to its SID, and then looks up the information for that SID within the AD domain, as described in Section 5.1.3.1, "Active Directory PACs and IdM Tickets" . ID Ranges When a user is created in Linux, it is assigned a user ID number. In addition, a private group is created for the user. The private group ID number is the same as the user ID number. In Linux environment, this does not create a conflict. On Windows, however, the security ID number must be unique for every object in the domain. Trusted AD users require a UID and GID number on a Linux system. This UID and GID number can be generated by IdM, but if the AD entry already has UID and GID numbers assigned, assigning different numbers creates a conflict. To avoid such conflicts, it is possible to use the AD-defined POSIX attributes, including the UID and GID number and preferred login shell. Note AD stores a subset of information for all objects within the forest in a global catalog . The global catalog includes every entry for every domain in the forest. If you want to use AD-defined POSIX attributes, Red Hat strongly recommends that you first replicate the attributes to the global catalog. When a trust is created, IdM automatically detects what kind of ID range to use and creates a unique ID range for the AD domain added to the trust. You can also choose this manually by passing one of the following options to the ipa trust-add command: ipa-ad-trust This range option is used for IDs algorithmically generated by IdM based on the SID. If IdM generates the SIDs using SID-to-POSIX ID mapping, the ID ranges for AD and IdM users and groups must have unique, non-overlapping ID ranges available. ipa-ad-trust-posix This range option is used for IDs defined in POSIX attributes in the AD entry. IdM obtains the POSIX attributes, including uidNumber and gidNumber , from the global catalog in AD or from the directory controller. If the AD domain is managed correctly and without ID conflicts, the ID numbers generated in this way are unique. In this case, no ID validation or ID range is required. For example: Recreating a trust with the other ID range If the ID range of the created trust does not suit your deployment, you can re-create the trust using the other --range-type option: View all the ID ranges that are currently in use: In the list, identify the name of the ID range that was created by the ipa trust-add command. The first part of the name of the ID range is the name of the trust: name_of_the_trust _id_range, for example ad.example.com . 
(Optional) If you do not know which --range-type option, ipa-ad-trust or ipa-ad-trust-posix , was used when the trust was created, identify the option: Make note of the type so that you choose the opposite type for the new trust in Step 5. Remove the range that was created by the ipa trust-add command: Remove the trust: Create a new trust with the correct --range-type option. For example: 5.1.3.3. Active Directory Users and IdM Policies and Configuration Several IdM policy definitions, such as SELinux, host-based access control, sudo, and netgroups, rely on user groups to identify how the policies are applied. Figure 5.4. Active Directory Users and IdM Groups and Policies Active Directory users are external to the IdM domain, but they can still be added as group members to IdM groups, as long as those groups are configured as external groups described in Section 5.1.3.2, "Active Directory Users and Identity Management Groups" . In such cases, the sudo, host-based access controls, and other policies are applied to the external POSIX group and, ultimately, to the AD user when accessing IdM domain resources. The user SID in the PAC in the ticket is resolved to the AD identity. This means that Active Directory users can be added as group members using their fully-qualified user name or their SID. 5.1.4. One-Way and Two-Way Trusts IdM supports two types of trust agreements, depending on whether the entities that can establish connection to services in IdM are limited to only AD or can include IdM entities as well. One-way trust One-way trust enables AD users and groups to access resources in IdM, but not the other way around. The IdM domain trusts the AD forest, but the AD forest does not trust the IdM domain. One-way trust is the default mode for creating a trust. Two-way trust Two-way trust enables AD users and groups to access resources in IdM. You must configure a two-way trust for solutions such as Microsoft SQL Server that expect the S4U2Self and S4U2Proxy Microsoft extensions to the Kerberos protocol to work over a trust boundary. An application on a RHEL IdM host might request S4U2Self or S4U2Proxy information from an Active Directory domain controller about an AD user, and a two-way trust provides this feature. Note that this two-way trust functionality does not allow IdM users to login to Windows systems, and the two-way trust in IdM does not give the users any additional rights compared to the one-way trust solution in AD. For more general information on one-way and two-way trusts, see Section 5.1.1, "The Architecture of a Trust Relationship" . After a trust is established, it is not possible to modify its type. If you require a different type of trust, run the ipa trust-add command again; by doing this, you can delete the existing trust and establish a new one. 5.1.5. External Trusts to Active Directory An external trust is a trust relationship between domains that are in a different forests. While forest trusts always require to establish the trust between the root domains of Active Directory forests, you can establish an external trust to any domain within the forest. External trusts are non-transitive. For this reason, users and groups from other Active Directory domains have no access to IdM resources. For further information, see the section called "Transitive and Non-transitive Trusts" . 5.1.6. 
Trust Controllers and Trust Agents IdM provides the following types of IdM servers that support trust to Active Directory: Trust controllers IdM servers that can control the trust and perform identity lookups against Active Directory domain controllers (DC). Active Directory domain controllers contact trust controllers when establishing and verifying the trust to Active Directory. The first trust controller is created when you configure the trust. For details about configuring an IdM server as a trust controller, see Section 5.2.2, "Creating Trusts" . Trust controllers run an increased amount of network-facing services compared to trust agents, and thus present a greater attack surface for potential intruders. Trust agents IdM servers that can perform identity lookups against Active Directory domain controllers. For details about configuring an IdM server as a trust agent, see Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . In addition to trust controllers and agents, the IdM domain can also include replicas without any role. However, these servers do not communicate with Active Directory. Therefore, clients that communicate with these servers cannot resolve Active Directory users and groups or authenticate and authorize Active Directory users. Table 5.1. A comparison of the capabilities provided by trust controllers and trust agents Capability Trust controllers Trust agents Resolve Active Directory users and groups Yes Yes Enroll IdM clients that run services accessible by users from trusted Active Directory forests Yes Yes Manage the trust (for example, add trust agreements) Yes No When planning the deployment of trust controllers and trust agents, consider these guidelines: Configure at least two trust controllers per Identity Management deployment. Configure at least two trust controllers in each data center. If you ever want to create additional trust controllers or if an existing trust controller fails, create a new trust controller by promoting a trust agent or a replica. To do this, use the ipa-adtrust-install utility on the IdM server as described in Section 5.2.2.1.1, "Preparing the IdM Server for Trust" . Important You cannot downgrade an existing trust controller to a trust agent. The trust controller server role, once installed, cannot be removed from the topology. | [
"ipa trust-add name_of_the_trust --range-type=ipa-ad-trust-posix",
"ipa idrange-find",
"ipa idrange-show name_of_the_trust_id_range",
"ipa idrange-del name_of_the_trust_id_range",
"ipa trust-del name_of_the_trust",
"ipa trust-add name_of_the_trust --range-type=ipa-ad-trust"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/active-directory-trust |
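For convenience, the procedure for re-creating a trust with the other ID range type, described above, can be run as one short shell session. This is only a sketch: it assumes the trust is named ad.example.com, that you hold a valid Kerberos ticket for an IdM administrator, and that the ID range created for the trust follows the usual name_of_the_trust_id_range pattern. Confirm the exact range name with ipa idrange-find before deleting anything, and supply the AD administrator credentials in whatever way your environment requires when re-establishing the trust.

kinit admin
ipa idrange-find                                # locate the range that belongs to the trust
ipa idrange-show ad.example.com_id_range        # note the current range type
ipa idrange-del ad.example.com_id_range
ipa trust-del ad.example.com
ipa trust-add ad.example.com --range-type=ipa-ad-trust-posix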
5.4. Network Tuning Techniques | 5.4. Network Tuning Techniques This section describes techniques for tuning network performance in virtualized environments. Important The following features are supported on Red Hat Enterprise Linux 7 hypervisors and virtual machines, but also on virtual machines running Red Hat Enterprise Linux 6.6 and later. 5.4.1. Bridge Zero Copy Transmit Zero copy transmit mode is effective on large packet sizes. It typically reduces the host CPU overhead by up to 15% when transmitting large packets between a guest network and an external network, without affecting throughput. It does not affect performance for guest-to-guest, guest-to-host, or small packet workloads. Bridge zero copy transmit is fully supported on Red Hat Enterprise Linux 7 virtual machines, but disabled by default. To enable zero copy transmit mode, set the experimental_zcopytx kernel module parameter for the vhost_net module to 1. For detailed instructions, see the Virtualization Deployment and Administration Guide . Note An additional data copy is normally created during transmit as a threat mitigation technique against denial of service and information leak attacks. Enabling zero copy transmit disables this threat mitigation technique. If performance regression is observed, or if host CPU utilization is not a concern, zero copy transmit mode can be disabled by setting experimental_zcopytx to 0. 5.4.2. Multi-Queue virtio-net Multi-queue virtio-net provides an approach that scales the network performance as the number of vCPUs increases, by allowing them to transfer packets through more than one virtqueue pair at a time. Today's high-end servers have more processors, and guests running on them often have an increasing number of vCPUs. In single queue virtio-net, the scale of the protocol stack in a guest is restricted, as the network performance does not scale as the number of vCPUs increases. Guests cannot transmit or retrieve packets in parallel, as virtio-net has only one TX and RX queue. Multi-queue support removes these bottlenecks by allowing paralleled packet processing. Multi-queue virtio-net provides the greatest performance benefit when: Traffic packets are relatively large. The guest is active on many connections at the same time, with traffic running between guests, guest to host, or guest to an external system. The number of queues is equal to the number of vCPUs. This is because multi-queue support optimizes RX interrupt affinity and TX queue selection in order to make a specific queue private to a specific vCPU. Note Currently, setting up a multi-queue virtio-net connection can have a negative effect on the performance of outgoing traffic. Specifically, this may occur when sending packets under 1,500 bytes over the Transmission Control Protocol (TCP) stream. For more information, see the Red Hat Knowledgebase . 5.4.2.1. Configuring Multi-Queue virtio-net To use multi-queue virtio-net, enable support in the guest by adding the following to the guest XML configuration (where the value of N is from 1 to 256, as the kernel supports up to 256 queues for a multi-queue tap device): When running a virtual machine with N virtio-net queues in the guest, enable the multi-queue support with the following command (where the value of M is from 1 to N ): | [
"<interface type='network'> <source network='default'/> <model type='virtio'/> <driver name='vhost' queues='N'/> </interface>",
"ethtool -L eth0 combined M"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-Networking-Techniques |
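The experimental_zcopytx setting described in Section 5.4.1 can also be made persistent with a modprobe.d drop-in. The following is a sketch of one common approach; the file name vhost-net.conf is an assumption, the module reload only works while no guests are using vhost_net, and the Virtualization Deployment and Administration Guide remains the authoritative procedure.

# Persist the module option across reboots (assumed file name)
echo "options vhost_net experimental_zcopytx=1" > /etc/modprobe.d/vhost-net.conf

# Reload the module so the new value takes effect, then verify it
modprobe -r vhost_net
modprobe vhost_net
cat /sys/module/vhost_net/parameters/experimental_zcopytx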
5.6. Performing Bulk Issuance | 5.6. Performing Bulk Issuance There can be instances when an administrator needs to submit and generate a large number of certificates simultaneously. A combination of tools supplied with Certificate System can be used to post a file containing certificate requests to the CA. This example procedure uses the PKCS10Client command to generate the requests and the sslget command to send the requests to the CA. Since this process is scripted, multiple variables need to be set to identify the CA (host, port) and the items used for authentication (the agent certificate and certificate database and password). For example, set these variables for the session by exporting them in the terminal: export d=/var/tmp/testDir export p=password export f=/var/tmp/server.csr.txt export nick="CA agent cert" export cahost=1.2.3.4 export caport=8443 Note The local system must have a valid security database with an agent's certificate in it. To set up the databases: Export or download the agent user certificate and keys from the browser and save them to a file, such as agent.p12. If necessary, create a new directory for the security databases. If necessary, create new security databases. Stop the Certificate System instance. Use pk12util to import the certificates. If the procedure is successful, the command prints the following output: Start the Certificate System instance. Two additional variables must be set: a variable that identifies the CA profile to be used to process the requests, and a variable that is used to send a post statement to supply the information for the profile form. export post="cert_request_type=pkcs10&xmlOutput=true&profileId=caAgentServerCert&cert_request=" export url="/ca/ee/ca/profileSubmitSSLClient" Note This example submits the certificate requests to the caAgentServerCert profile (identified in the profileId element of the post statement). Any certificate profile can be used, including custom profiles. Test the variable configuration. echo ${d} ${p} ${f} ${nick} ${cahost} ${caport} ${post} ${url} Generate the certificate requests using (for this example) PKCS10Client: time for i in {1..10}; do /usr/bin/PKCS10Client -d ${d} -p ${p} -o ${f}.${i} -s "cn=testms${i}.example.com"; cat ${f}.${i} >> ${f}; done perl -pi -e 's/\r\n//;s/\+/%2B/g;s/\//%2F/g' ${f} wc -l ${f} Submit the bulk certificate request file created in step 4 to the CA profile interface using sslget. For example: cat ${f} | while read thisreq; do /usr/bin/sslget -n "${nick}" -p ${p} -d ${d} -e ${post}${thisreq} -v -r ${url} ${cahost}:${caport}; done | [
"export d=/var/tmp/testDir export p=password export f=/var/tmp/server.csr.txt export nick=\"CA agent cert\" export cahost=1.2.3.4 export caport=8443",
"mkdir USD{d}",
"certutil -N -d USD{d}",
"pki-server stop instance_name",
"pk12util -i /tmp/agent.p12 -d USD{d} -W p12filepassword",
"pk12util: PKCS12 IMPORT SUCCESSFUL",
"pki-server start instance_name",
"export post=\"cert_request_type=pkcs10&xmlOutput=true&profileId=caAgentServerCert&cert_request=\" export url=\"/ca/ee/ca/profileSubmitSSLClient\"",
"echo USD{d} USD{p} USD{f} USD{nick} USD{cahost} USD{caport} USD{post} USD{url}",
"time for i in {1..10}; do /usr/bin/PKCS10Client -d USD{d} -p USD{p} -o USD{f}.USD{i} -s \"cn=testmsUSD{i}.example.com\"; cat USD{f}.USD{i} >> USD{f}; done perl -pi -e 's/\\r\\n//;s/\\+/%2B/g;s/\\//%2F/g' USD{f} wc -l USD{f}",
"cat USD{f} | while read thisreq; do /usr/bin/sslget -n \"USD{nick}\" -p USD{p} -d USD{d} -e USD{post}USD{thisreq} -v -r USD{url} USD{cahost}:USD{caport}; done"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/bulk-issuance |
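Once the security databases are prepared, the request generation and submission steps in this section can be collected into a single script. The sketch below only re-assembles the example values and commands shown above; saving each response under /tmp/bulk-response.* is an added convenience rather than part of the documented procedure, so adjust the paths, profile, and request count for your deployment.

#!/bin/bash
d=/var/tmp/testDir; p=password; f=/var/tmp/server.csr.txt
nick="CA agent cert"; cahost=1.2.3.4; caport=8443
post="cert_request_type=pkcs10&xmlOutput=true&profileId=caAgentServerCert&cert_request="
url="/ca/ee/ca/profileSubmitSSLClient"

# Generate the certificate requests and append them to a single file
for i in {1..10}; do
    /usr/bin/PKCS10Client -d ${d} -p ${p} -o ${f}.${i} -s "cn=testms${i}.example.com"
    cat ${f}.${i} >> ${f}
done

# Strip line endings and URL-encode the requests in place
perl -pi -e 's/\r\n//;s/\+/%2B/g;s/\//%2F/g' ${f}

# Submit each request to the CA profile interface and keep the responses
n=0
while read thisreq; do
    /usr/bin/sslget -n "${nick}" -p ${p} -d ${d} -e ${post}${thisreq} -v -r ${url} ${cahost}:${caport} > /tmp/bulk-response.${n} 2>&1
    n=$((n+1))
done < ${f}
echo "Submitted ${n} requests; responses are in /tmp/bulk-response.*"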
4.6. Red Hat Virtualization 4.3 Batch Update 5 (ovirt-4.3.8) | 4.6. Red Hat Virtualization 4.3 Batch Update 5 (ovirt-4.3.8) 4.6.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1754979 Previously, upgrading RHV Manager from 4.2 to 4.3 ovirt-fast-forward-upgrade sometimes failed with a yum dependency error if engine-setup was not executed following the yum update of the 4.2. That was causing an inconsistent state of the system, preventing it from upgrading to 4.3. The current version fixes this issue. BZ# 1757423 Normally, when the "UserSessionTimeOutInterval" is set to a negative value such as "-1", the user remains logged into the VM Portal indefinitely. However, in RHV version 4.5.3.6, a negative value automatically logged the user out immediately. The current release fixes this issue, such that, with the value set to -1, the user is never automatically logged out, as expected per the config value definition in ovirt-engine engine-config.properties: "A negative value indicates that sessions never expire.": https://github.com/oVirt/ovirt-engine/blob/e0940bd9b768cb52c78f9e0d0c97afd6de7ac8a5/packaging/etc/engine-config/engine-config.properties#L218-L220 BZ# 1765912 Previously, VDSM v4.30.26 added support for 4K block size with file-based storage domains to Red Hat Virtualization 4.3. However, there were a few instances where 512B block size remained hard-coded. These instances impacted the support of 4K block size under certain conditions. This release fixes these issues. BZ# 1773580 Previously, when you used the VM Portal to create a Windows virtual machine, it failed with the following error "CREATE_VM failed [Cannot add VM. Invalid time zone for given OS type., Attribute: vmStatic]." The Administration Portal did not have this issue. The current release fixes this issue. BZ# 1779664 Previously, when you deleted a snapshot of a VM with a LUN disk, its image ID parsed incorrectly and used "mapper" as its value, which caused a null pointer exception. The current release fixes this issue by avoiding disks whose image ID parses as 'mapper' so deleting the VM snapshot is successful. BZ# 1780290 When sharding was enabled and a file's path had been unlinked, but its descriptor was still available, attempting to perform a read or write operation against the file resulted in the host becoming non-operational. File paths are now unlinked later, avoiding this issue. BZ# 1781380 Previously, after using the REST API to create an affinity group, the resulting group did not have the required labels, even though they were defined in the request body. The current release fixes this issue so the affinity group has the labels that were defined in the request body. 4.6.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1739106 This release adds a new feature, rhv-image-discrepancies, which reports discrepancies between images in the engine database and storage. The report lists images that are present in one location but missing from the other. Also, for images that are present in both locations, the report lists discrepancies in the values of attributes such as status, parent_id, and type. BZ# 1767333 This release adds a new 'status' column to the affinity group table that shows whether all of an affinity group's rules are satisfied (status = ok) or not (status = broken). The "Enforcing" option does not affect this status. 
BZ# 1779160 To avoid overloading the journal log, in this release, oVirt Guest Agent no longer attempts to query the list of containers on machines without Docker. BZ# 1782412 In the current release, Metrics Store adds support for a flat DNS environment without subdomains. This capability helps you satisfy security policies that mandate having a "flat" DNS environment with no submains. To enable this capability, you add a suffix to the master0 virtual machine when you configure networking for Metrics Store virtual machines. For example, if you set 'openshift_ovirt_machine_suffix' to 'prod' and 'public_hosted_zone' is 'example.com', then the metrics store virtual machine will be called 'master-prod0.example.com'. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/release_notes/red_hat_virtualization_4_3_batch_update_5_ovirt_4_3_8 |
Chapter 7. Available BPF Features | Chapter 7. Available BPF Features This chapter provides the complete list of Berkeley Packet Filter ( BPF ) features available in the kernel of this minor version of Red Hat Enterprise Linux 8. The tables include the lists of: System configuration and other options Available program types and supported helpers Available map types This chapter contains automatically generated output of the bpftool feature command. Table 7.1. System configuration and other options Option Value unprivileged_bpf_disabled 1 (bpf() syscall restricted to privileged users, without recovery) JIT compiler 1 (enabled) JIT compiler hardening 1 (enabled for unprivileged users) JIT compiler kallsyms exports 1 (enabled for root) Memory limit for JIT for unprivileged users 264241152 CONFIG_BPF y CONFIG_BPF_SYSCALL y CONFIG_HAVE_EBPF_JIT y CONFIG_BPF_JIT y CONFIG_BPF_JIT_ALWAYS_ON y CONFIG_DEBUG_INFO_BTF y CONFIG_DEBUG_INFO_BTF_MODULES n CONFIG_CGROUPS y CONFIG_CGROUP_BPF y CONFIG_CGROUP_NET_CLASSID y CONFIG_SOCK_CGROUP_DATA y CONFIG_BPF_EVENTS y CONFIG_KPROBE_EVENTS y CONFIG_UPROBE_EVENTS y CONFIG_TRACING y CONFIG_FTRACE_SYSCALLS y CONFIG_FUNCTION_ERROR_INJECTION y CONFIG_BPF_KPROBE_OVERRIDE y CONFIG_NET y CONFIG_XDP_SOCKETS y CONFIG_LWTUNNEL_BPF y CONFIG_NET_ACT_BPF m CONFIG_NET_CLS_BPF m CONFIG_NET_CLS_ACT y CONFIG_NET_SCH_INGRESS m CONFIG_XFRM y CONFIG_IP_ROUTE_CLASSID y CONFIG_IPV6_SEG6_BPF n CONFIG_BPF_LIRC_MODE2 n CONFIG_BPF_STREAM_PARSER y CONFIG_NETFILTER_XT_MATCH_BPF m CONFIG_BPFILTER n CONFIG_BPFILTER_UMH n CONFIG_TEST_BPF m CONFIG_HZ 1000 bpf() syscall available Large program size limit available Table 7.2. Available program types and supported helpers Program type Available helpers socket_filter bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf kprobe bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_override_return, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, 
bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sched_cls bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf sched_act bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_skb_adjust_room, bpf_skb_get_xfrm_state, bpf_skb_load_bytes_relative, bpf_fib_lookup, bpf_skb_cgroup_id, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, 
bpf_snprintf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf xdp bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_redirect, bpf_perf_event_output, bpf_csum_diff, bpf_get_current_task, bpf_get_numa_node_id, bpf_xdp_adjust_head, bpf_redirect_map, bpf_xdp_adjust_meta, bpf_xdp_adjust_tail, bpf_fib_lookup, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_tcp_gen_syncookie, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf perf_event bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, 
bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_load_bytes_relative, bpf_skb_cgroup_id, bpf_get_local_storage, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sock bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_in bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_out bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, 
bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_xmit bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_cgroup_classid, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_lwt_push_encap, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sock_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_sock_map_update, bpf_getsockopt, bpf_sock_ops_cb_flags_set, bpf_sock_hash_update, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sk_skb bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_tail_call, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_current_task, bpf_skb_change_tail, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_skb_change_head, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_skb_adjust_room, bpf_sk_redirect_map, bpf_sk_redirect_hash, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_device 
bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf sk_msg bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_msg_redirect_hash, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_spin_lock, bpf_spin_unlock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf raw_tracepoint bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sock_addr bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_setsockopt, bpf_getsockopt, bpf_bind, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_skc_lookup_tcp, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, 
bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lwt_seg6local bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_cgroup_classid, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_csum_diff, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_skb_pull_data, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf lirc_mode2 not supported sk_reuseport bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_socket_cookie, bpf_skb_load_bytes_relative, bpf_sk_select_reuseport, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf flow_dissector bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_skb_load_bytes, bpf_get_current_task, bpf_get_numa_node_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sysctl bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, 
bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf raw_tracepoint_writable bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_perf_event_read, bpf_perf_event_output, bpf_get_stackid, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_get_numa_node_id, bpf_probe_read_str, bpf_perf_event_read_value, bpf_get_stack, bpf_get_current_cgroup_id, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_send_signal, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_send_signal_thread, bpf_jiffies64, bpf_get_ns_current_pid_tgid, bpf_get_current_ancestor_cgroup_id, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_get_task_stack, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_get_current_task_btf, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf cgroup_sockopt bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_get_current_uid_gid, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_tcp_sock, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf tracing not supported struct_ops bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_probe_read, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_skb_store_bytes, bpf_l3_csum_replace, bpf_l4_csum_replace, bpf_tail_call, bpf_clone_redirect, bpf_get_current_pid_tgid, bpf_get_current_uid_gid, bpf_get_current_comm, bpf_get_cgroup_classid, bpf_skb_vlan_push, bpf_skb_vlan_pop, bpf_skb_get_tunnel_key, bpf_skb_set_tunnel_key, bpf_perf_event_read, bpf_redirect, bpf_get_route_realm, bpf_perf_event_output, bpf_skb_load_bytes, bpf_get_stackid, bpf_csum_diff, bpf_skb_get_tunnel_opt, bpf_skb_set_tunnel_opt, bpf_skb_change_proto, bpf_skb_change_type, bpf_skb_under_cgroup, bpf_get_hash_recalc, bpf_get_current_task, bpf_current_task_under_cgroup, bpf_skb_change_tail, bpf_skb_pull_data, bpf_csum_update, bpf_set_hash_invalid, bpf_get_numa_node_id, bpf_skb_change_head, bpf_xdp_adjust_head, bpf_probe_read_str, bpf_get_socket_cookie, bpf_get_socket_uid, bpf_set_hash, bpf_setsockopt, bpf_skb_adjust_room, bpf_redirect_map, bpf_sk_redirect_map, bpf_sock_map_update, bpf_xdp_adjust_meta, bpf_perf_event_read_value, bpf_perf_prog_read_value, bpf_getsockopt, bpf_override_return, bpf_sock_ops_cb_flags_set, bpf_msg_redirect_map, bpf_msg_apply_bytes, bpf_msg_cork_bytes, bpf_msg_pull_data, bpf_bind, bpf_xdp_adjust_tail, bpf_skb_get_xfrm_state, bpf_get_stack, bpf_skb_load_bytes_relative, bpf_fib_lookup, 
bpf_sock_hash_update, bpf_msg_redirect_hash, bpf_sk_redirect_hash, bpf_lwt_push_encap, bpf_lwt_seg6_store_bytes, bpf_lwt_seg6_adjust_srh, bpf_lwt_seg6_action, bpf_rc_repeat, bpf_rc_keydown, bpf_skb_cgroup_id, bpf_get_current_cgroup_id, bpf_get_local_storage, bpf_sk_select_reuseport, bpf_skb_ancestor_cgroup_id, bpf_sk_lookup_tcp, bpf_sk_lookup_udp, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_msg_push_data, bpf_msg_pop_data, bpf_rc_pointer_rel, bpf_spin_lock, bpf_spin_unlock, bpf_sk_fullsock, bpf_tcp_sock, bpf_skb_ecn_set_ce, bpf_get_listener_sock, bpf_skc_lookup_tcp, bpf_tcp_check_syncookie, bpf_sysctl_get_name, bpf_sysctl_get_current_value, bpf_sysctl_get_new_value, bpf_sysctl_set_new_value, bpf_strtol, bpf_strtoul, bpf_sk_storage_get, bpf_sk_storage_delete, bpf_send_signal, bpf_tcp_gen_syncookie, bpf_skb_output, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_tcp_send_ack, bpf_send_signal_thread, bpf_jiffies64, bpf_read_branch_records, bpf_get_ns_current_pid_tgid, bpf_xdp_output, bpf_get_netns_cookie, bpf_get_current_ancestor_cgroup_id, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_seq_printf, bpf_seq_write, bpf_sk_cgroup_id, bpf_sk_ancestor_cgroup_id, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_csum_level, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_get_task_stack, bpf_load_hdr_opt, bpf_store_hdr_opt, bpf_reserve_hdr_opt, bpf_inode_storage_get, bpf_inode_storage_delete, bpf_d_path, bpf_copy_from_user, bpf_snprintf_btf, bpf_seq_printf_btf, bpf_skb_cgroup_classid, bpf_redirect_neigh, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_redirect_peer, bpf_task_storage_get, bpf_task_storage_delete, bpf_get_current_task_btf, bpf_bprm_opts_set, bpf_ktime_get_coarse_ns, bpf_ima_inode_hash, bpf_sock_from_file, bpf_check_mtu, bpf_for_each_map_elem, bpf_snprintf, bpf_sys_bpf, bpf_btf_find_by_name_kind, bpf_sys_close ext not supported lsm not supported sk_lookup bpf_map_lookup_elem, bpf_map_update_elem, bpf_map_delete_elem, bpf_ktime_get_ns, bpf_get_prandom_u32, bpf_get_smp_processor_id, bpf_tail_call, bpf_perf_event_output, bpf_get_current_task, bpf_get_numa_node_id, bpf_sk_release, bpf_map_push_elem, bpf_map_pop_elem, bpf_map_peek_elem, bpf_spin_lock, bpf_spin_unlock, bpf_probe_read_user, bpf_probe_read_kernel, bpf_probe_read_user_str, bpf_probe_read_kernel_str, bpf_jiffies64, bpf_sk_assign, bpf_ktime_get_boot_ns, bpf_ringbuf_output, bpf_ringbuf_reserve, bpf_ringbuf_submit, bpf_ringbuf_discard, bpf_ringbuf_query, bpf_skc_to_tcp6_sock, bpf_skc_to_tcp_sock, bpf_skc_to_tcp_timewait_sock, bpf_skc_to_tcp_request_sock, bpf_skc_to_udp6_sock, bpf_snprintf_btf, bpf_per_cpu_ptr, bpf_this_cpu_ptr, bpf_ktime_get_coarse_ns, bpf_for_each_map_elem, bpf_snprintf Table 7.3. Available map types Map type Available hash yes array yes prog_array yes perf_event_array yes percpu_hash yes percpu_array yes stack_trace yes cgroup_array yes lru_hash yes lru_percpu_hash yes lpm_trie yes array_of_maps yes hash_of_maps yes devmap yes sockmap yes cpumap yes xskmap yes sockhash yes cgroup_storage yes reuseport_sockarray yes percpu_cgroup_storage yes queue yes stack yes sk_storage yes devmap_hash yes struct_ops no ringbuf yes inode_storage yes task_storage no | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.7_release_notes/available_bpf_features |
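The tables above reproduce the automatically generated output of the bpftool feature command mentioned at the start of this chapter. As an illustrative sketch only (package availability and output depend on the installed kernel and bpftool version), a similar report for the running system can typically be produced with: # yum install bpftool # bpftool feature probe kernel Appending the macros keyword ( bpftool feature probe kernel macros ) emits the same probe results as C preprocessor defines, which can be convenient when conditionally compiling BPF programs.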
Chapter 47. Managing host groups using Ansible playbooks | Chapter 47. Managing host groups using Ansible playbooks To learn more about host groups in Identity Management (IdM) and using Ansible to perform operations involving host groups in Identity Management (IdM), see the following: Host groups in IdM Ensuring the presence of IdM host groups Ensuring the presence of hosts in IdM host groups Nesting IdM host groups Ensuring the presence of member managers in IdM host groups Ensuring the absence of hosts from IdM host groups Ensuring the absence of nested host groups from IdM host groups Ensuring the absence of member managers from IdM host groups 47.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 47.2. Ensuring the presence of IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are created in IdM using the ipa hostgroup-add command. The result of adding a host group to IdM is the state of the host group being present in IdM. Because of the Ansible reliance on idempotence, to add a host group to IdM using Ansible, you must create a playbook in which you define the state of the host group as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. For example, to ensure the presence of a host group named databases , specify name: databases in the - ipahostgroup task. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-present.yml file. In the playbook, state: present signifies a request to add the host group to IdM unless it already exists there. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose presence in IdM you wanted to ensure: The databases host group exists in IdM. 47.3. 
Ensuring the presence of hosts in IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of hosts in host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file have been added to IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host with the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This playbook adds the db.idm.example.com host to the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself. Instead, only an attempt is made to add db.idm.example.com to databases . Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about a host group to see which hosts are present in it: The db.idm.example.com host is present as a member of the databases host group. 47.4. Nesting IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of nested host groups in Identity Management (IdM) host groups using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. 
To ensure that a nested host group A exists in a host group B : in the Ansible playbook, specify, among the - ipahostgroup variables, the name of the host group B using the name variable. Specify the name of the nested host group A with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This Ansible playbook ensures the presence of the mysql-server and oracle-server host groups in the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself to IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group in which nested host groups are present: The mysql-server and oracle-server host groups exist in the databases host group. 47.5. Ensuring the presence of member managers in IdM host groups using Ansible playbooks The following procedure describes ensuring the presence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are adding as member managers and the name of the host group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify whether the group_name host group contains example_member and project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. 47.6. Ensuring the absence of hosts from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of hosts from host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks .
Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host and host group information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host whose absence from the host group you want to ensure using the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook ensures the absence of the db.idm.example.com host from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to remove the databases group itself. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group and the hosts it contains: The db.idm.example.com host does not exist in the databases host group. 47.7. Ensuring the absence of nested host groups from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of nested host groups from outer host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. Specify, among the - ipahostgroup variables, the name of the outer host group using the name variable. Specify the name of the nested hostgroup with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook makes sure that the mysql-server and oracle-server host groups are absent from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to ensure the databases group itself is deleted from IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group from which nested host groups should be absent: The output confirms that the mysql-server and oracle-server nested host groups are absent from the outer databases host group. 47.8. Ensuring the absence of IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are removed from IdM using the ipa hostgroup-del command. 
The result of removing a host group from IdM is the state of the host group being absent from IdM. Because of the Ansible reliance on idempotence, to remove a host group from IdM using Ansible, you must create a playbook in which you define the state of the host group as absent: state: absent . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-absent.yml file. This playbook ensures the absence of the databases host group from IdM. The state: absent signifies a request to delete the host group from IdM unless it is already deleted. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose absence you ensured: The databases host group does not exist in IdM. 47.9. Ensuring the absence of member managers from IdM host groups using Ansible playbooks The following procedure describes ensuring the absence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are removing as member managers and the name of the host group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify whether the group_name host group does not contain example_member or project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. | [
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases ipa: ERROR: databases: host group not found",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-host-groups-using-ansible-playbooks_managing-users-groups-hosts |
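The procedures in this chapter assume that a secret.yml Ansible vault stores the ipaadmin_password variable and that the vault password is kept in a file passed through the --vault-password-file option. As a minimal sketch, not part of the original procedures and with file names taken from the examples above, such a vault could be created as follows: ansible-vault create --vault-password-file=password_file ~/MyPlaybooks/secret.yml with a single variable as its content: ipaadmin_password: <IdM_admin_password> The playbooks then load the vault through their vars_files directive, as shown in the examples above.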
Service Mesh | Service Mesh OpenShift Container Platform 4.11 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/service_mesh/index |
Chapter 49. Storage | Chapter 49. Storage Multi-queue I/O scheduling for SCSI Red Hat Enterprise Linux 7 includes a new multiple-queue I/O scheduling mechanism for block devices known as blk-mq. The scsi-mq package allows the Small Computer System Interface (SCSI) subsystem to make use of this new queuing mechanism. This functionality is provided as a Technology Preview and is not enabled by default. To enable it, add scsi_mod.use_blk_mq=Y to the kernel command line. Although blk-mq is intended to offer improved performance, particularly for low-latency devices, it is not guaranteed to always provide better performance. In particular, in some cases, enabling scsi-mq can result in significantly worse performance, especially on systems with many CPUs. (BZ#1109348) Targetd plug-in from the libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage array independent API, has been fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. The Targetd plug-in is not fully supported and remains a Technology Preview. (BZ#1119909) Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is a new addition to the SCSI Standard. It is fully supported in Red Hat Enterprise Linux 7 for the HBAs and storage arrays specified in the Features chapter, but it remains in Technology Preview for all other HBAs and storage arrays. DIF/DIX increases the size of the commonly used 512 byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receipt, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. (BZ#1072107) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_storage |
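As an illustrative sketch only (not part of the original release note), the scsi-mq Technology Preview described above can typically be enabled by appending the parameter to the kernel command line with the grubby utility and rebooting: # grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=Y" # reboot Removing the parameter again with grubby --update-kernel=ALL --remove-args="scsi_mod.use_blk_mq" reverts the system to the default single-queue SCSI code path.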
Appendix F. Notable Changes in IdM | Appendix F. Notable Changes in IdM Certain IdM versions introduce new commands or replace existing ones. Additionally, sometimes configuration or installation procedures change extensively. This appendix describes the most important changes. For a more detailed list of changes, see the Red Hat Enterprise Linux (RHEL) 7 Release Notes for the individual versions. IdM 4.6 running on RHEL 7.7 The ipa-cert-fix utility has been added to renew system certificates when IdM is offline. For details, see Section 26.2.3, "Renewing Expired System Certificates When IdM is Offline" . IdM now supports IP addresses in the SAN extension of certificates: in certain situations, administrators need to issue certificates with an IP address in the Subject Alternative Name (SAN) extension. Starting with this release, administrators can set an IP address in the SAN extension if the address is managed in the IdM DNS service and associated with the subject host or service principal. IdM now prevents using single-label domain names, for example .company. The IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com. For further changes in this release, see the following sections in the Red Hat Enterprise Linux 7.7 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.6 running on RHEL 7.6 For changes in this release, see the following sections in the Red Hat Enterprise Linux 7.6 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.5 running on RHEL 7.5 For changes in this release, see the following sections in the Red Hat Enterprise Linux 7.5 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.5 running on RHEL 7.4 This version changed the SSL back end for client HTTPS connections from Network Security Services (NSS) to OpenSSL. As a consequence, the Registration Authority (RA) now stores its certificate in the /var/lib/ipa/ directory instead of an NSS database. For further changes in this release, see the following sections in the Red Hat Enterprise Linux 7.4 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.4 running on RHEL 7.3 The new ipa-replica-manage clean-dangling-ruv command enables administrators to remove all relative update vectors (RUV) from an uninstalled replica. The new ipa server-del command enables administrators to uninstall an IdM server. The following commands introduced in this version enable administrators to manage IdM Certificate Authorities (CA): ipa ca-add ipa ca-del ipa ca-enable ipa ca-disable ipa ca-find ipa ca-mod ipa ca-show The following commands introduced in this version replace the ipa-replica-manage command to manage replication agreements: ipa topology-configure ipa topologysegment-mod ipa topologysegment-del ipa topologysuffix-add ipa topologysuffix-show ipa topologysuffix-verify The following commands introduced in this version enable administrators to display a list of IdM servers stored in the cn=masters,cn=ipa,cn=etc, domain_suffix entry: ipa server-find ipa server-show The certmonger helper scripts have been moved from the /usr/lib64/ipa/certmonger/ to the /usr/libexec/ipa/certmonger/ directory.
This version introduced domain levels and the following commands to display and set the domain level: ipa domainlevel-set ipa domainlevel-show For further changes in this release, see the following sections in the Red Hat Enterprise Linux 7.3 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.2 running on RHEL 7.2 Support for multiple certificate profiles and user certificates: Identity Management now supports multiple profiles for issuing server and other certificates instead of only supporting a single server certificate profile. The profiles are stored in the Directory Server and shared between IdM replicas. In addition, the administrator can now issue certificates to individual users. Previously, it was only possible to issue certificates to hosts and services. For further changes in this release, see the New Features - Authentication and Interoperability section in the Red Hat Enterprise Linux 7.2 Release Notes . IdM 4.1 running on RHEL 7.1 The following commands introduced in this version replace the ipa-getkeytab -r command to retrieve keytabs and set retrieval permissions: ipa host-allow-retrieve-keytab ipa host-disallow-retrieve-keytab ipa host-allow-create-keytab ipa host-disallow-create-keytab ipa service-allow-retrieve-keytab ipa service-disallow-retrieve-keytab ipa service-allow-create-keytab ipa service-disallow-create-keytab For further changes in this release, see the New Features - Authentication and Interoperability section in the Red Hat Enterprise Linux 7.1 Release Notes . IdM 3.3 running on RHEL 7.0 For changes in this release, see the New Features - Authentication and Interoperability section in the Red Hat Enterprise Linux 7.0 Release Notes . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/notable_changes_in_idm
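As a brief, hedged illustration of the domain level commands listed for IdM 4.4 above (exact output varies by deployment), the current level can be checked and raised as follows: $ kinit admin $ ipa domainlevel-show $ ipa domainlevel-set 1 Raising the domain level is a one-way operation and requires that every IdM server in the topology supports the target level.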
5.257. python-configshell | 5.257. python-configshell 5.257.1. RHBA-2012:0856 - python-configshell bug fix and enhancement update Updated python-configshell packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The python-configshell packages provide a library for implementing configuration command line interfaces for the Python programming environment. The python-configshell package has been upgraded to version 1.1.fb4, which provides a number of bug fixes and enhancements over the previous version, and adds support for the configuration shell functionality of the fcoe-target-utils packages. (BZ# 765977 ) All users of python-configshell are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/python-configshell
7.76. hwdata | 7.76. hwdata 7.76.1. RHEA-2015:1349 - hwdata enhancement update An updated hwdata package that adds one enhancement is now available for Red Hat Enterprise Linux 6. The hwdata package contains tools for accessing and displaying hardware identification and configuration data. Enhancement BZ# 1170975 The PCI, USB, and vendor ID files have been updated with information about recently released hardware. Hardware utility tools that use these ID files are now able to correctly identify recently released hardware. Users of hwdata are advised to upgrade to this updated package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-hwdata |
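As with the advisory above, the hwdata enhancement update is applied with the standard package tooling; a minimal sketch, assuming an entitled Red Hat Enterprise Linux 6 system.
# Check the installed version, then pull in the updated ID files
rpm -q hwdata
yum update hwdata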
Chapter 6. Creating alerts in Datadog | Chapter 6. Creating alerts in Datadog Administrators can create monitors that track the metrics of the Red Hat Ceph Storage cluster and generate alerts. For example, if an OSD is down, Datadog can alert an administrator that one or more OSDs are down. Prerequisites Root-level access to the Ceph Monitor node. Appropriate Ceph key providing access to the Red Hat Ceph Storage cluster. Internet access. Procedure Click Monitors to see an overview of the Datadog monitors. To create a monitor, select Monitors->New Monitor . Select the detection method. For example, "Threshold Alert." Define the metric. To create an advanced alert, click on the Advanced... link. Then, select a metric from the combo box. For example, select the ceph.num_in_osds Ceph metric. Click Add Query+ to add another query. Select another metric from the combo box. For example, select the ceph.num_up_osds Ceph metric. In the Express these queries as: field, enter a-b , where a is the value of ceph.num_in_osds and b is the value of ceph.num_up_osds . When the difference is 1 or greater, there is at least one OSD down. Set the alert conditions. For example, set the trigger to be above or equal to , the threshold to in total and the time elapsed to 1 minute . Set the Alert threshold field to 1 . When at least one OSD is in the cluster and it is not up and running, the monitor will alert the user. Give the monitor a title in the input field below Preview and Edit . This is required to save the monitor. Enter a description of the alert in the text field. Note The text field supports metric variables and Markdown syntax. Add the recipients of the alert. This will add an email address to the text field. When the alert gets triggered, the recipients will receive the alert. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/monitoring_ceph_with_datadog_guide/procedure-module-name-with-dashes_datadog |
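The procedure above creates the monitor through the Datadog web UI. The same monitor can usually be created programmatically as well; the following curl sketch assumes the standard Datadog Monitors API endpoint (/api/v1/monitor) and valid DD_API_KEY and DD_APP_KEY environment variables, and the query mirrors the a-b expression from the procedure, so adjust the query, thresholds, and notification message to your environment.
curl -X POST "https://api.datadoghq.com/api/v1/monitor" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d '{
        "name": "Ceph OSD down",
        "type": "metric alert",
        "query": "avg(last_1m):sum:ceph.num_in_osds{*} - sum:ceph.num_up_osds{*} >= 1",
        "message": "At least one Ceph OSD is in the cluster but not up and running.",
        "options": { "thresholds": { "critical": 1 } }
      }'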
13.7. Configuring a System to Authenticate Using OpenLDAP | 13.7. Configuring a System to Authenticate Using OpenLDAP This section provides a brief overview of how to configure OpenLDAP user authentication. Unless you are an OpenLDAP expert, you will need more documentation than is provided here. Refer to the references provided in Section 13.9, "Additional Resources" for more information. Install the Necessary LDAP Packages First, make sure that the appropriate packages are installed on both the LDAP server and the LDAP client machines. The LDAP server needs the openldap-servers package. The openldap , openldap-clients , and nss_ldap packages need to be installed on all LDAP client machines. Edit the Configuration Files On the LDAP server, edit the /etc/openldap/slapd.conf file to make sure it matches the specifics of the organization. Refer to Section 13.6.1, "Editing /etc/openldap/slapd.conf " for instructions about editing slapd.conf . On the client machines, both /etc/ldap.conf and /etc/openldap/ldap.conf need to contain the proper server and search base information for the organization. To do this, run the graphical Authentication Configuration Tool ( system-config-authentication ) and select Enable LDAP Support under the User Information tab. It is also possible to edit these files by hand. On the client machines, the /etc/nsswitch.conf file must be edited to use LDAP. To do this, run the Authentication Configuration Tool ( system-config-authentication ) and select Enable LDAP Support under the User Information tab. If editing /etc/nsswitch.conf by hand, add ldap to the appropriate lines. For example: 13.7.1. PAM and LDAP To have standard PAM-enabled applications use LDAP for authentication, run the Authentication Configuration Tool ( system-config-authentication ) and select Enable LDAP Support under the Authentication tab. For more about configuring PAM, refer to Chapter 16, Pluggable Authentication Modules (PAM) and the PAM man pages. | [
"passwd: files ldap
shadow: files ldap
group: files ldap"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ldap-pam |
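As a command-line alternative to the graphical Authentication Configuration Tool described in the section above, the same client-side changes can be scripted with authconfig. This is a sketch only: the server name and base DN are placeholders that you must replace with your own values, and on some older releases the non-interactive apply flag is --kickstart rather than --update.
# Enable LDAP for user information and for authentication;
# this updates /etc/ldap.conf and /etc/nsswitch.conf in one step
authconfig --enableldap --enableldapauth \
    --ldapserver=ldap.example.com \
    --ldapbasedn="dc=example,dc=com" \
    --update
# Verify that LDAP users are now resolved through NSS
getent passwd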
Chapter 2. Builds | Chapter 2. Builds 2.1. Understanding image builds 2.1.1. Builds A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform uses Kubernetes by creating containers from build images and pushing them to a container image registry. Build objects share common characteristics including inputs for a build, the requirement to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time. The OpenShift Container Platform build system provides extensible support for build strategies that are based on selectable types specified in the build API. There are three primary build strategies available: Docker build Source-to-image (S2I) build Custom build By default, docker builds and S2I builds are supported. The resulting object of a build depends on the builder used to create it. For docker and S2I builds, the resulting objects are runnable images. For custom builds, the resulting objects are whatever the builder image author has specified. Additionally, the pipeline build strategy can be used to implement sophisticated workflows: Continuous integration Continuous deployment 2.1.1.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 2.1.1.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 2.1.1.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 2.1.1.4. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plug-in. 
The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 2.2. Understanding build configurations The following sections define the concept of a build, build configuration, and outline the primary build strategies available. 2.2.1. BuildConfigs A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig , which is a REST object that can be used in a POST to the API server to create a new instance. A build configuration, or BuildConfig , is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input. Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later. The following example BuildConfig results in a new build every time a container image tag or the source code changes: BuildConfig object definition kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test" 1 This specification creates a new BuildConfig named ruby-sample-build . 2 The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial , which means new builds run sequentially, not simultaneously. 3 You can specify a list of triggers, which cause a new build to be created. 4 The source section defines the source of the build. The source type determines the primary source of input, and can be either Git , to point to a code repository location, Dockerfile , to build from an inline Dockerfile, or Binary , to accept binary payloads. It is possible to have multiple sources at once. For more information about each source type, see "Creating build inputs". 5 The strategy section describes the build strategy used to execute the build. You can specify a Source , Docker , or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build. 6 After the container image is successfully built, it is pushed into the repository described in the output section. 7 The postCommit section defines an optional build hook. 2.3. Creating build inputs Use the following sections for an overview of build inputs, instructions on how to use inputs to provide source content for builds to operate on, and how to use build environments and create secrets. 2.3.1. Build inputs A build input provides source content for builds to operate on. 
You can use the following build inputs to provide sources in OpenShift Container Platform, listed in order of precedence: Inline Dockerfile definitions Content extracted from existing images Git repositories Binary (Local) inputs Input secrets External artifacts You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs. You can use input secrets when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or want to consume a value that is defined in a secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types. When you run a build: A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path. The build process changes directories into the contextDir , if one is defined. The inline Dockerfile, if any, is written to the current directory. The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir is ignored by the build. The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type. source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: "master" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: "app/dir" 3 dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4 1 The repository to be cloned into the working directory for the build. 2 /usr/lib/somefile.jar from myinputimage is stored in <workingdir>/app/dir/injected/dir . 3 The working directory for the build becomes <original_workingdir>/app/dir . 4 A Dockerfile with this content is created in <original_workingdir>/app/dir , overwriting any existing file with that name. 2.3.2. Dockerfile source When you supply a dockerfile value, the content of this field is written to disk as a file named dockerfile . This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it is overwritten with this content. The source definition is part of the spec section in the BuildConfig : source: dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1 1 The dockerfile field contains an inline Dockerfile that is built. Additional resources The typical use for this field is to provide a Dockerfile to a docker strategy build. 2.3.3. Image source You can add additional files to the build process with images. Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy the image and the destination to place them in the build context. 
The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image is loaded and the indicated files and directories are copied into the context directory of the build process. This is the same directory into which the source repository content is cloned. If the source path ends in /. then the content of the directory is copied, but the directory itself is not created at the destination. Image inputs are specified in the source definition of the BuildConfig : source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: "master" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar 1 An array of one or more input images and files. 2 A reference to the image containing the files to be copied. 3 An array of source/destination paths. 4 The directory relative to the build root where the build process can access the file. 5 The location of the file to be copied out of the referenced image. 6 An optional secret provided if credentials are needed to access the input image. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Optionally, if an input image requires a pull secret, you can link the pull secret to the service account used by the build. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the input image. To link a pull secret to the service account used by the build, run: USD oc secrets link builder dockerhub Note This feature is not supported for builds using the custom strategy. 2.3.4. Git source When specified, source code is fetched from the supplied location. If you supply an inline Dockerfile, it overwrites the Dockerfile in the contextDir of the Git repository. The source definition is part of the spec section in the BuildConfig : source: git: 1 uri: "https://github.com/openshift/ruby-hello-world" ref: "master" contextDir: "app/dir" 2 dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3 1 The git field contains the URI to the remote Git repository of the source code. Optionally, specify the ref field to check out a specific Git reference. A valid ref can be a SHA1 tag or a branch name. 2 The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field. 3 If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository. If the ref field denotes a pull request, the system uses a git fetch operation and then checkout FETCH_HEAD . When no ref value is provided, OpenShift Container Platform performs a shallow clone ( --depth=1 ). In this case, only the files associated with the most recent commit on the default branch (typically master ) are downloaded. 
This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example master ). Warning Git clone operations that go through a proxy that is performing man in the middle (MITM) TLS hijacking or reencrypting of the proxied connection do not work. 2.3.4.1. Using a proxy If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the build configuration. You can configure both an HTTP and HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified in the NoProxy field. Note Your source URI must use the HTTP or HTTPS protocol for this to work. source: git: uri: "https://github.com/openshift/ruby-hello-world" ref: "master" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com Note For Pipeline strategy builds, given the current restrictions with the Git plug-in for Jenkins, any Git operations through the Git plug-in do not leverage the HTTP or HTTPS proxy defined in the BuildConfig . The Git plug-in only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs. Additional resources You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy . 2.3.4.2. Source Clone Secrets Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates. The following source clone secret configurations are supported: .gitconfig File Basic Authentication SSH Key Authentication Trusted Certificate Authorities Note You can also use combinations of these configurations to meet your specific needs. 2.3.4.2.1. Automatically adding a source clone secret to a build configuration When a BuildConfig is created, OpenShift Container Platform can automatically populate its source clone secret reference. This behavior allows the resulting builds to automatically use the credentials stored in the referenced secret to authenticate to a remote Git repository, without requiring further configuration. To use this functionality, a secret containing the Git repository credentials must exist in the namespace in which the BuildConfig is later created. This secrets must include one or more annotations prefixed with build.openshift.io/source-secret-match-uri- . The value of each of these annotations is a Uniform Resource Identifier (URI) pattern, which is defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a secret annotation, OpenShift Container Platform automatically inserts a reference to that secret in the BuildConfig . Prerequisites A URI pattern must consist of: A valid scheme: *:// , git:// , http:// , https:// or ssh:// A host: *` or a valid hostname or IP address optionally preceded by *. A path: /* or / followed by any characters optionally including * characters In all of the above, a * character is interpreted as a wildcard. Important URI patterns must match Git source URIs which are conformant to RFC3986 . Do not include a username (or password) component in a URI pattern. 
For example, if you use ssh://[email protected]:7999/ATLASSIAN jira.git for a git repository URL, the source secret must be specified as ssh://bitbucket.atlassian.com:7999/* (and not ssh://[email protected]:7999/* ). USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*' Procedure If multiple secrets match the Git URI of a particular BuildConfig , OpenShift Container Platform selects the secret with the longest match. This allows for basic overriding, as in the following example. The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com : kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: ... --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data: ... Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using: USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*' 2.3.4.2.2. Manually adding a source clone secret Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created. In this example, it is the basicsecret . apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" source: git: uri: "https://github.com/user/app.git" sourceSecret: name: "basicsecret" strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "python-33-centos7:latest" Procedure You can also use the oc set build-secret command to set the source clone secret on an existing build configuration. To set the source clone secret on an existing build configuration, enter the following command: USD oc set build-secret --source bc/sample-build basicsecret 2.3.4.2.3. Creating a secret from a .gitconfig file If the cloning of your application is dependent on a .gitconfig file, then you can create a secret that contains it. Add it to the builder service account and then your BuildConfig . Procedure To create a secret from a .gitconfig file: USD oc create secret generic <secret_name> --from-file=<path/to/.gitconfig> Note SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file: [http] sslVerify=false 2.3.4.2.4. Creating a secret from a .gitconfig file for secured Git If your Git server is secured with two-way SSL and user name with password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Prerequisites You must have Git credentials. Procedure Add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Add the client.crt , cacert.crt , and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code. 
In the .gitconfig file for the server, add the [http] section shown in the following example: # cat .gitconfig Example output [user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt Create the secret: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ 1 --from-literal=password=<password> \ 2 --from-file=.gitconfig=.gitconfig \ --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \ --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \ --from-file=client.key=/var/run/secrets/openshift.io/source/client.key 1 The user's Git user name. 2 The password for this user. Important To avoid having to enter your password again, be sure to specify the source-to-image (S2I) image in your builds. However, if you cannot clone the repository, you must still specify your user name and password to promote the build. Additional resources /var/run/secrets/openshift.io/source/ folder in the application source code. 2.3.4.2.5. Creating a secret from source code basic authentication Basic authentication requires either a combination of --username and --password , or a token to authenticate against the software configuration management (SCM) server. Prerequisites User name and password to access the private repository. Procedure Create the secret first before using the --username and --password to access the private repository: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --type=kubernetes.io/basic-auth Create a basic authentication secret with a token: USD oc create secret generic <secret_name> \ --from-literal=password=<token> \ --type=kubernetes.io/basic-auth 2.3.4.2.6. Creating a secret from source code SSH key authentication SSH key based authentication requires a private SSH key. The repository keys are usually located in the USDHOME/.ssh/ directory, and are named id_dsa.pub , id_ecdsa.pub , id_ed25519.pub , or id_rsa.pub by default. Procedure Generate SSH key credentials: USD ssh-keygen -t ed25519 -C "[email protected]" Note Creating a passphrase for the SSH key prevents OpenShift Container Platform from building. When prompted for a passphrase, leave it blank. Two files are created: the public key and a corresponding private key (one of id_dsa , id_ecdsa , id_ed25519 , or id_rsa ). With both of these in place, consult your source control management (SCM) system's manual on how to upload the public key. The private key is used to access your private repository. Before using the SSH key to access the private repository, create the secret: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/known_hosts> \ 1 --type=kubernetes.io/ssh-auth 1 Optional: Adding this field enables strict server host key check. Warning Skipping the known_hosts file while creating the secret makes the build vulnerable to a potential man-in-the-middle (MITM) attack. Note Ensure that the known_hosts file includes an entry for the host of your source code. 2.3.4.2.7. Creating a secret from source code trusted certificate authorities The set of Transport Layer Security (TLS) certificate authorities (CA) that are trusted during a Git clone operation are built into the OpenShift Container Platform infrastructure images. 
If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification. If you create a secret for the CA certificate, OpenShift Container Platform uses it to access your Git server during the Git clone operation. Using this method is significantly more secure than disabling Git SSL verification, which accepts any TLS certificate that is presented. Procedure Create a secret with a CA certificate file. If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Enter the following command: USD cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt Create the secret: USD oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1 1 You must use the key name ca.crt . 2.3.4.2.8. Source secret combinations You can combine the different methods for creating source clone secrets for your specific needs. 2.3.4.2.8.1. Creating a SSH-based authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a SSH-based authentication secret with a .gitconfig file. Prerequisites SSH authentication .gitconfig file Procedure To create a SSH-based authentication secret with a .gitconfig file, run: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/.gitconfig> \ --type=kubernetes.io/ssh-auth 2.3.4.2.8.2. Creating a secret that combines a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a .gitconfig file and certificate authority (CA) certificate. Prerequisites .gitconfig file CA certificate Procedure To create a secret that combines a .gitconfig file and CA certificate, run: USD oc create secret generic <secret_name> \ --from-file=ca.crt=<path/to/certificate> \ --from-file=<path/to/.gitconfig> 2.3.4.2.8.3. Creating a basic authentication secret with a CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and certificate authority (CA) certificate. Prerequisites Basic authentication credentials CA certificate Procedure Create a basic authentication secret with a CA certificate, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 2.3.4.2.8.4. Creating a basic authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and .gitconfig file. Prerequisites Basic authentication credentials .gitconfig file Procedure To create a basic authentication secret with a .gitconfig file, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --type=kubernetes.io/basic-auth 2.3.4.2.8.5. Creating a basic authentication secret with a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication, .gitconfig file, and certificate authority (CA) certificate. 
Prerequisites Basic authentication credentials .gitconfig file CA certificate Procedure To create a basic authentication secret with a .gitconfig file and CA certificate, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 2.3.5. Binary (local) source Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for these builds. This source type is unique in that it is leveraged solely based on your use of the oc start-build . Note Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build, like an image change trigger, is not possible. This is because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console. To utilize binary builds, invoke oc start-build with one of these options: --from-file : The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context. --from-dir and --from-repo : The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir , you can also specify a URL to an archive, which is extracted. --from-archive : The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir ; an archive is created on your host first, whenever the argument to these options is a directory. In each of the previously listed cases: If your BuildConfig already has a Binary source type defined, it is effectively ignored and replaced by what the client sends. If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence. Instead of a file name, you can pass a URL with HTTP or HTTPS schema to --from-file and --from-archive . When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or the last component of the URL path if the header is not present. No form of authentication is supported and it is not possible to use custom TLS certificate or disable certificate validation. When using oc new-build --binary=true , the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig has a source type of Binary , meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data. The Dockerfile and contextDir source options have special meaning with binary builds. Dockerfile can be used with any binary build source. If Dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile to any Dockerfile in the archive. If Dockerfile is used with the --from-file argument, and the file argument is named Dockerfile, the value from Dockerfile replaces the value from the binary stream. 
In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build. 2.3.6. Input secrets and config maps In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose. For example, when building a Java application with Maven, you can set up a private mirror of Maven Central or JCenter that is accessed by private keys. To download libraries from that private mirror, you have to supply the following: A settings.xml file configured with the mirror's URL and connection settings. A private key referenced in the settings file, such as ~/.ssh/id_rsa . For security reasons, you do not want to expose your credentials in the application image. This example describes a Java application, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more. 2.3.6.1. What is a secret? The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plug-in or the system can use secrets to perform actions on behalf of a pod. YAML Secret Object Definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: dmFsdWUtMQ0K 3 password: dmFsdWUtMg0KDQo= stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary. 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry are then moved to the data map automatically. This field is write-only. The value is only be returned by the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. 2.3.6.1.1. Properties of secrets Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 2.3.6.1.2. Types of Secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/service-account-token . Uses a service account token. kubernetes.io/dockercfg . Uses the .dockercfg file for required Docker credentials. kubernetes.io/dockerconfigjson . Uses the .docker/config.json file for required Docker credentials. kubernetes.io/basic-auth . Use with basic authentication. kubernetes.io/ssh-auth . Use with SSH key authentication. kubernetes.io/tls . Use with TLS certificate authorities. 
Specify type= Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. 2.3.6.1.3. Updates to secrets When you modify the value of a secret, the value used by an already running pod does not dynamically change. To change a secret, you must delete the original pod and create a new pod, in some cases with an identical PodSpec . Updating a secret follows the same workflow as deploying a new container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, then the version of the secret is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.3.6.2. Creating secrets You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file using a secret volume. Procedure Use the create command to create a secret object from a JSON or YAML file: USD oc create -f <filename> For example, you can create a secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This command generates a JSON specification of the secret named dockerhub and creates the object. YAML Opaque Secret Object Definition apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: dXNlci1uYW1l password: cGFzc3dvcmQ= 1 Specifies an opaque secret. Docker Configuration JSON File Secret Object Definition apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a docker configuration JSON file. 2 The output of a base64-encoded the docker configuration JSON file 2.3.6.3. Using secrets After creating secrets, you can create a pod to reference your secret, get logs, and delete the pod. Procedure Create the pod to reference your secret: USD oc create -f <your_yaml_file>.yaml Get the logs: USD oc logs secret-example-pod Delete the pod: USD oc delete pod secret-example-pod Additional resources Example YAML files with secret data: YAML Secret That Will Create Four Files apiVersion: v1 kind: Secret metadata: name: test-secret data: username: dmFsdWUtMQ0K 1 password: dmFsdWUtMQ0KDQo= 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB 1 File contains decoded values. 2 File contains decoded values. 3 File contains the provided string. 4 File contains the provided data. 
YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never YAML of a Build Config Populating Environment Variables with Secret Data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username 2.3.6.4. Adding input secrets and config maps In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose. Procedure To add an input secret, config maps, or both to an existing BuildConfig object: Create the ConfigMap object, if it does not exist: USD oc create configmap settings-mvn \ --from-file=settings.xml=<path/to/settings.xml> This creates a new config map named settings-mvn , which contains the plain text content of the settings.xml file. Create the Secret object, if it does not exist: USD oc create secret generic secret-mvn \ --from-file=id_rsa=<path/to/.ssh/id_rsa> This creates a new secret named secret-mvn , which contains the base64 encoded content of the id_rsa private key. Add the config map and secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn To include the secret and config map in a new BuildConfig object, run the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn" \ --build-config-map "settings-mvn" During the build, the settings.xml and id_rsa files are copied into the directory where the source code is located. In OpenShift Container Platform S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile . If you want to specify another directory, add a destinationDir to the definition: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: ".m2" secrets: - secret: name: secret-mvn destinationDir: ".ssh" You can also specify the destination directory when creating a new BuildConfig object: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn:.ssh" \ --build-config-map "settings-mvn:.m2" In both cases, the settings.xml file is added to the ./.m2 directory of the build environment, and the id_rsa key is added to the ./.ssh directory. 2.3.6.5. 
Source-to-image strategy When using a Source strategy, all defined input secrets are copied to their respective destinationDir . If you left destinationDir empty, then the secrets are placed in the working directory of the builder image. The same rule is used when a destinationDir is a relative path. The secrets are placed in the paths that are relative to the working directory of the image. The final directory in the destinationDir path is created if it does not exist in the builder image. All preceding directories in the destinationDir must exist, or an error will occur. Note Input secrets are added as world-writable, have 0666 permissions, and are truncated to size zero after executing the assemble script. This means that the secret files exist in the resulting image, but they are empty for security reasons. Input config maps are not truncated after the assemble script completes. 2.3.6.6. Docker strategy When using a docker strategy, you can add all defined input secrets into your container image using the ADD and COPY instructions in your Dockerfile. If you do not specify the destinationDir for a secret, then the files are copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir , then the secrets are copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build. Example of a Dockerfile referencing secret and config map data Note Users normally remove their input secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets still exist in the image itself in the layer where they were added. This removal is part of the Dockerfile itself. 2.3.6.7. Custom strategy When using a Custom strategy, all the defined input secrets and config maps are available in the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image must use these secrets and config maps appropriately. With the Custom strategy, you can define secrets as described in Custom strategy options. There is no technical difference between existing strategy secrets and the input secrets. However, your builder image can distinguish between them and use them differently, based on your build use case. The input secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the USDBUILD environment variable, which includes the full build object. Important If a pull secret for the registry exists in both the namespace and the node, builds default to using the pull secret in the namespace. 2.3.7. External artifacts It is not recommended to store binary files in a source repository. Therefore, you must define a build which pulls additional files, such as Java .jar dependencies, during the build process. How this is done depends on the build strategy you are using. 
For a Source build strategy, you must put appropriate shell commands into the assemble script: .s2i/bin/assemble File #!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar .s2i/bin/run File #!/bin/sh exec java -jar app.jar For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction : Excerpt of Dockerfile FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ "java", "-jar", "app.jar" ] In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig , rather than updating the Dockerfile or assemble script. You can choose between different methods of defining environment variables: Using the .s2i/environment file] (only for a Source build strategy) Setting in BuildConfig Providing explicitly using oc start-build --env (only for builds that are triggered manually) 2.3.8. Using docker credentials for private registries You can supply builds with a . docker/config.json file with valid credentials for private container registries. This allows you to push the output image into a private container image registry or pull a builder image from the private container image registry that requires authentication. Note For the OpenShift Container Platform container image registry, this is not required because secrets are generated automatically for you by OpenShift Container Platform. The .docker/config.json file is found in your home directory by default and has the following format: auths: https://index.docker.io/v1/: 1 auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2 email: "[email protected]" 3 1 URL of the registry. 2 Encrypted password. 3 Email address for the login. You can define multiple container image registry entries in this file. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist. Kubernetes provides Secret objects, which can be used to store configuration and passwords. Prerequisites You must have a .docker/config.json file. Procedure Create the secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This generates a JSON specification of the secret named dockerhub and creates the object. Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the example is dockerhub : spec: output: to: kind: "DockerImage" name: "private.registry.com/org/private-image:latest" pushSecret: name: "dockerhub" You can use the oc set build-secret command to set the push secret on the build configuration: USD oc set build-secret --push bc/sample-build dockerhub You can also link the push secret to the service account used by the build instead of specifying the pushSecret field. By default, builds use the builder service account. The push secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's output image. 
USD oc secrets link builder dockerhub Pull the builder container image from a private container image registry by specifying the pullSecret field, which is part of the build strategy definition: strategy: sourceStrategy: from: kind: "DockerImage" name: "docker.io/user/private_repository" pullSecret: name: "dockerhub" You can use the oc set build-secret command to set the pull secret on the build configuration: USD oc set build-secret --pull bc/sample-build dockerhub Note This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds. You can also link the pull secret to the service account used by the build instead of specifying the pullSecret field. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's input image. To link the pull secret to the service account used by the build instead of specifying the pullSecret field, run: USD oc secrets link builder dockerhub Note You must specify a from image in the BuildConfig spec to take advantage of this feature. Docker strategy builds generated by oc new-build or oc new-app may not do this in some situations. 2.3.9. Build environments As with pod environment variables, build environment variables can be defined in terms of references to other resources or variables using the Downward API. There are some exceptions, which are noted. You can also manage environment variables defined in the BuildConfig with the oc set env command. Note Referencing container resources using valueFrom in build environment variables is not supported as the references are resolved before the container is created. 2.3.9.1. Using build fields as environment variables You can inject information about the build object by setting the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value. Note Jenkins Pipeline strategy does not support valueFrom syntax for environment variables. Procedure Set the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value: env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name 2.3.9.2. Using secrets as environment variables You can make key values from secrets available as environment variables using the valueFrom syntax. Important This method shows the secrets as plain text in the output of the build pod console. To avoid this, use input secrets and config maps instead. Procedure To use a secret as an environment variable, set the valueFrom syntax: apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret Additional resources Input secrets and config maps 2.3.10. Service serving certificate secrets Service serving certificate secrets are intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Procedure To secure communication to your service, have the cluster generate a signed serving certificate/key pair into a secret in your namespace. Set the service.beta.openshift.io/serving-cert-secret-name annotation on your service with the value set to the name you want to use for your secret. Then, your PodSpec can mount that secret. 
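For illustration, the annotation and the generated secret can be handled entirely from the CLI. The sketch below assumes a service named my-service in the current project and uses my-service-tls as an arbitrary secret name of your choosing.
# Ask the service CA to generate a serving certificate/key pair for the service
oc annotate service my-service \
    service.beta.openshift.io/serving-cert-secret-name=my-service-tls
# After a short delay, the secret exists and holds tls.crt and tls.key;
# the expiry annotation described below is visible in its metadata
oc get secret my-service-tls -o yaml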
When it is available, your pod runs. The certificate is good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. Other pods can trust cluster-created certificates, which are only signed for internal DNS names, by using the certificate authority (CA) bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 2.3.11. Secrets restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. imagePullSecrets use service accounts for the automatic injection of the secret into all pods in a namespaces. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret . Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that would exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.4. Managing build output Use the following sections for an overview of and instructions for managing build output. 2.4.1. Build output Builds that use the docker or source-to-image (S2I) strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification. If the output kind is ImageStreamTag , then the image will be pushed to the integrated OpenShift Container Platform registry and tagged in the specified imagestream. If the output is of type DockerImage , then the name of the output reference will be used as a docker push specification. The specification may contain a registry or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build. Output to an ImageStreamTag spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" Output to a docker Push Specification spec: output: to: kind: "DockerImage" name: "my-registry.mycompany.com:5000/myimages/myimage:tag" 2.4.2. 
Output image environment variables docker and source-to-image (S2I) strategy builds set the following environment variables on output images: Variable Description OPENSHIFT_BUILD_NAME Name of the build OPENSHIFT_BUILD_NAMESPACE Namespace of the build OPENSHIFT_BUILD_SOURCE The source URL of the build OPENSHIFT_BUILD_REFERENCE The Git reference used in the build OPENSHIFT_BUILD_COMMIT Source commit used in the build Additionally, any user-defined environment variable, for example those configured with S2I] or docker strategy options, will also be part of the output image environment variable list. 2.4.3. Output image labels docker and source-to-image (S2I)` builds set the following labels on output images: Label Description io.openshift.build.commit.author Author of the source commit used in the build io.openshift.build.commit.date Date of the source commit used in the build io.openshift.build.commit.id Hash of the source commit used in the build io.openshift.build.commit.message Message of the source commit used in the build io.openshift.build.commit.ref Branch or reference specified in the source io.openshift.build.source-location Source URL for the build You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the build configuration. Custom Labels to be Applied to Built Images spec: output: to: kind: "ImageStreamTag" name: "my-image:latest" imageLabels: - name: "vendor" value: "MyCompany" - name: "authoritative-source-url" value: "registry.mycompany.com" 2.5. Using build strategies The following sections define the primary supported build strategies, and how to use them. 2.5.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 2.5.1.1. Replacing Dockerfile FROM image You can replace the FROM instruction of the Dockerfile with the from of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced. Procedure To replace the FROM instruction of the Dockerfile with the from of the BuildConfig . strategy: dockerStrategy: from: kind: "ImageStreamTag" name: "debian:latest" 2.5.1.2. Using Dockerfile path By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field. The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile , or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile . Procedure To use the dockerfilePath field for the build to use a different path to locate your Dockerfile, set: strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile 2.5.1.3. Using docker environment variables To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration. The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that it can be referenced later on within the Dockerfile. 
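For illustration, if the build configuration defines an HTTP_PROXY variable under dockerStrategy, as in the procedure that follows, the Dockerfile that is actually built behaves roughly as if it contained the following. This is a conceptual sketch, not output captured from a real build:

FROM <original-base-image>
ENV HTTP_PROXY="http://myproxy.net:5187/"
# ...the remaining instructions from your Dockerfile...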
Procedure

The variables are defined during the build and stay in the output image, so they will also be present in any container that runs that image. For example, to define a custom HTTP proxy to be used during build and runtime:

dockerStrategy:
  ...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"

You can also manage environment variables defined in the build configuration with the oc set env command.

2.5.1.4. Adding docker build arguments

You can set docker build arguments using the buildArgs array. The build arguments are passed to docker when a build is started.

Tip
See Understand how ARG and FROM interact in the Dockerfile reference documentation.

Procedure

To set docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example:

dockerStrategy:
  ...
  buildArgs:
    - name: "foo"
      value: "bar"

Note
Only the name and value fields are supported. Any settings on the valueFrom field are ignored.

2.5.1.5. Squash layers with docker builds

Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image.

Procedure

Set the imageOptimizationPolicy to SkipLayers:

strategy:
  dockerStrategy:
    imageOptimizationPolicy: SkipLayers

2.5.2. Source-to-image build

Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source, and is ready to use with the buildah run command. S2I supports incremental builds, which reuse previously downloaded dependencies, previously built artifacts, and so on.

2.5.2.1. Performing source-to-image incremental builds

Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously built images.

Procedure

To create an incremental build, create a build configuration with the following modification to the strategy definition:

strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "incremental-image:latest" 1
    incremental: true 2

1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior.
2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script.

Additional resources
See S2I Requirements for information on how to create a builder image supporting incremental builds.

2.5.2.2. Overriding source-to-image builder image scripts

You can override the assemble, run, and save-artifacts source-to-image (S2I) scripts provided by the builder image.

Procedure

To override the assemble, run, and save-artifacts S2I scripts provided by the builder image, either:

Provide an assemble, run, or save-artifacts script in the .s2i/bin directory of your application source repository (see the example layout after this procedure).
Provide a URL of a directory containing the scripts as part of the strategy definition. For example:

strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest"
    scripts: "http://somehost.com/scripts_directory" 1

1 This path will have run, assemble, and save-artifacts appended to it.
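A minimal repository layout for the first option might look like the following sketch. The file names are the standard S2I script names; only include the scripts you actually want to override, and make sure each one is executable:

.s2i/bin/assemble
.s2i/bin/run
.s2i/bin/save-artifacts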
If any or all scripts are found they will be used in place of the same named scripts provided in the image. Note Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. 2.5.2.3. Source-to-image environment variables There are two ways to make environment variables available to the source build process and resulting image. Environment files and BuildConfig environment values. Variables provided will be present during the build process and in the output image. 2.5.2.3.1. Using source-to-image environment files Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables. Procedure For example, to disable assets compilation for your Rails application during the build: Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file. In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production : Add RAILS_ENV=development to the .s2i/environment file. The complete list of supported environment variables is available in the using images section for each image. 2.5.2.3.2. Using source-to-image build configuration environment You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code. Procedure For example, to disable assets compilation for your Rails application: sourceStrategy: ... env: - name: "DISABLE_ASSET_COMPILATION" value: "true" Additional resources The build environment section provides more advanced instructions. You can also manage environment variables defined in the build configuration with the oc set env command. 2.5.2.4. Ignoring source-to-image source files Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script. 2.5.2.5. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 2.5.2.5.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 2.5.2.5.2. 
How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 2.1. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 2.5.3. 
Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 2.5.3.1. Using FROM image for custom builds You can use the customStrategy.from section to indicate the image to use for the custom build Procedure Set the customStrategy.from section: strategy: customStrategy: from: kind: "DockerImage" name: "openshift/sti-image-builder" 2.5.3.2. Using secrets in custom builds In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod. Procedure To mount each secret at a specific location, edit the secretSource and mountPath fields of the strategy YAML file: strategy: customStrategy: secrets: - secretSource: 1 name: "secret1" mountPath: "/tmp/secret1" 2 - secretSource: name: "secret2" mountPath: "/tmp/secret2" 1 secretSource is a reference to a secret in the same namespace as the build. 2 mountPath is the path inside the custom builder where the secret should be mounted. 2.5.3.3. Using environment variables for custom builds To make environment variables available to the custom build process, you can add environment variables to the customStrategy definition of the build configuration. The environment variables defined there are passed to the pod that runs the custom build. Procedure Define a custom HTTP proxy to be used during build: customStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" To manage environment variables defined in the build configuration, enter the following command: USD oc set env <enter_variables> 2.5.3.4. Using custom builder images OpenShift Container Platform's custom build strategy enables you to define a specific builder image responsible for the entire build process. When you need a build to produce individual artifacts such as packages, JARs, WARs, installable ZIPs, or base images, use a custom builder image using the custom build strategy. A custom builder image is a plain container image embedded with build process logic, which is used for building artifacts such as RPMs or base container images. Additionally, the custom builder allows implementing any extended build process, such as a CI/CD flow that runs unit or integration tests. 2.5.3.4.1. Custom builder image Upon invocation, a custom builder image receives the following environment variables with the information needed to proceed with the build: Table 2.2. Custom Builder Environment Variables Variable Name Description BUILD The entire serialized JSON of the Build object definition. If you must use a specific API version for serialization, you can set the buildAPIVersion parameter in the custom strategy specification of the build configuration. SOURCE_REPOSITORY The URL of a Git repository with source to be built. SOURCE_URI Uses the same value as SOURCE_REPOSITORY . Either can be used. SOURCE_CONTEXT_DIR Specifies the subdirectory of the Git repository to be used when building. Only present if defined. SOURCE_REF The Git reference to be built. 
ORIGIN_VERSION The version of the OpenShift Container Platform master that created this build object. OUTPUT_REGISTRY The container image registry to push the image to. OUTPUT_IMAGE The container image tag name for the image being built. PUSH_DOCKERCFG_PATH The path to the container registry credentials for running a podman push operation. 2.5.3.4.2. Custom builder workflow Although custom builder image authors have flexibility in defining the build process, your builder image must adhere to the following required steps necessary for running a build inside of OpenShift Container Platform: The Build object definition contains all the necessary information about input parameters for the build. Run the build process. If your build produces an image, push it to the output location of the build if it is defined. Other output locations can be passed with environment variables. 2.5.4. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plug-in. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 2.5.4.1. Understanding OpenShift Container Platform pipelines Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. Pipelines give you control over building, deploying, and promoting your applications on OpenShift Container Platform. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles , and the OpenShift Container Platform Domain Specific Language (DSL) provided by the Jenkins Client Plug-in, you can create advanced build, test, deploy, and promote pipelines for any scenario. OpenShift Container Platform Jenkins Sync Plugin The OpenShift Container Platform Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following: Dynamic job and run creation in Jenkins. Dynamic creation of agent pod templates from image streams, image stream tags, or config maps. Injection of environment variables. Pipeline visualization in the OpenShift Container Platform web console. Integration with the Jenkins Git plug-in, which passes commit information from OpenShift Container Platform builds to the Jenkins Git plug-in. Synchronization of secrets into Jenkins credential entries. OpenShift Container Platform Jenkins Client Plugin The OpenShift Container Platform Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Container Platform API Server. 
The plugin uses the OpenShift Container Platform command line tool, oc , which must be available on the nodes executing the script. The Jenkins Client Plug-in must be installed on your Jenkins master so the OpenShift Container Platform DSL will be available to use within the jenkinsfile for your application. This plug-in is installed and enabled by default when using the OpenShift Container Platform Jenkins image. For OpenShift Container Platform Pipelines within your project, you will must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options: An inline jenkinsfile field within your build configuration. A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir . Note The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 2.5.4.2. Providing the Jenkins file for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The jenkinsfile uses the standard groovy language syntax to allow fine grained control over the configuration, build, and deployment of your application. You can supply the jenkinsfile in one of the following ways: A file located within your source code repository. Embedded as part of your build configuration using the jenkinsfile field. When using the first option, the jenkinsfile must be included in your applications source code repository at one of the following locations: A file named jenkinsfile at the root of your repository. A file named jenkinsfile at the root of the source contextDir of your repository. A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository. The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Container Platform client binaries available if you intend to use the OpenShift Container Platform DSL. Procedure To provide the Jenkins file, you can either: Embed the Jenkins file in the build configuration. Include in the build configuration a reference to the Git repository that contains the Jenkins file. Embedded Definition kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') } Reference to Git Repository kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: source: git: uri: "https://github.com/openshift/ruby-hello-world" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1 1 The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . 
If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 2.5.4.3. Using environment variables for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration. Procedure To define environment variables to be used during build, edit the YAML file: jenkinsPipelineStrategy: ... env: - name: "FOO" value: "BAR" You can also manage environment variables defined in the build configuration with the oc set env command. 2.5.4.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameters definitions, where the default values for the Jenkins job parameters definitions are the current values of the associated environment variables. After the Jenkins job's initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs. How you start builds for the Jenkins job dictates how the parameters are set. If you start with oc start-build , the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence. If you start with oc start-build -e , the values for the environment variables specified in the -e option take precedence. If you specify an environment variable not listed in the build configuration, they will be added as a Jenkins job parameter definitions. Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e takes precedence. If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job. Note It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing. 2.5.4.4. Pipeline build tutorial Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. 
This example demonstrates how to create an OpenShift Container Platform Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template. Procedure Create the Jenkins master: USD oc project <project_name> Select the project that you want to use or create a new project with oc new-project <project_name> . USD oc new-app jenkins-ephemeral 1 If you want to use persistent storage, use jenkins-persistent instead. Create a file named nodejs-sample-pipeline.yaml with the following content: Note This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application. kind: "BuildConfig" apiVersion: "v1" metadata: name: "nodejs-sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline Once you create a BuildConfig object with a jenkinsPipelineStrategy , tell the pipeline what to do by using an inline jenkinsfile : Note This example does not set up a Git repository for the application. The following jenkinsfile content is written in Groovy using the OpenShift Container Platform DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method. def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo "Using project: USD{openshift.project()}" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector("all", [ template : templateName ]).delete() 5 if (openshift.selector("secrets", templateName).exists()) { 6 openshift.selector("secrets", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector("bc", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == "Complete") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector("dc", templateName).rollout() timeout(5) { 9 openshift.selector("dc", templateName).related('pods').untilEach(1) { return (it.object().status.phase == "Running") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag("USD{templateName}:latest", "USD{templateName}-staging:latest") 10 } } } } } } } 1 Path of the template to use. 1 2 Name of the template that will be created. 3 Spin up a node.js agent pod on which to run this build. 4 Set a timeout of 20 minutes for this pipeline. 5 Delete everything with this template label. 6 Delete any secrets with this template label. 7 Create a new application from the templatePath . 8 Wait up to five minutes for the build to complete. 9 Wait up to five minutes for the deployment to complete. 10 If everything else succeeded, tag the USD {templateName}:latest image as USD {templateName}-staging:latest . 
A pipeline build configuration for the staging environment can watch for the USD {templateName}-staging:latest image to change and then deploy it to the staging environment. Note The example was written using the declarative pipeline style, but the older scripted pipeline style is also supported. Create the Pipeline BuildConfig in your OpenShift Container Platform cluster: USD oc create -f nodejs-sample-pipeline.yaml If you do not want to create your own file, you can use the sample from the Origin repository by running: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml Start the Pipeline: USD oc start-build nodejs-sample-pipeline Note Alternatively, you can start your pipeline with the OpenShift Container Platform web console by navigating to the Builds Pipeline section and clicking Start Pipeline , or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now . Once the pipeline is started, you should see the following actions performed within your project: A job instance is created on the Jenkins server. An agent pod is launched, if your pipeline requires one. The pipeline runs on the agent pod, or the master if no agent is required. Any previously created resources with the template=nodejs-mongodb-example label will be deleted. A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template. A build will be started using the nodejs-mongodb-example BuildConfig . The pipeline will wait until the build has completed to trigger the stage. A deployment will be started using the nodejs-mongodb-example deployment configuration. The pipeline will wait until the deployment has completed to trigger the stage. If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example:stage . The agent pod is deleted, if one was required for the pipeline. Note The best way to visualize the pipeline execution is by viewing it in the OpenShift Container Platform web console. You can view your pipelines by logging in to the web console and navigating to Builds Pipelines. 2.5.5. Adding secrets with web console You can add a secret to your build configuration so that it can access a private repository. Procedure To add a secret to your build configuration so that it can access a private repository from the OpenShift Container Platform web console: Create a new OpenShift Container Platform project. Create a secret that contains credentials for accessing a private source code repository. Create a build configuration. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret . Click Save . 2.5.6. Enabling pulling and pushing You can enable pulling to a private registry by setting the pull secret and pushing by setting the push secret in the build configuration. Procedure To enable pulling to a private registry: Set the pull secret in the build configuration. To enable pushing: Set the push secret in the build configuration. 2.6. Custom image builds with Buildah With OpenShift Container Platform 4.7, a docker socket will not be present on the host nodes. This means the mount docker socket option of a custom build is not guaranteed to provide an accessible docker socket for use within a custom build image. 
If you require this capability in order to build and push images, add the Buildah tool to your custom build image and use it to build and push the image within your custom build logic. The following is an example of how to run custom builds with Buildah.

Note
Using the custom build strategy requires permissions that normal users do not have by default because it allows the user to execute arbitrary code inside a privileged container running on the cluster. This level of access can be used to compromise the cluster and therefore should be granted only to users who are trusted with administrative privileges on the cluster.

2.6.1. Prerequisites

Review how to grant custom build permissions.

2.6.2. Creating custom build artifacts

You must create the image you want to use as your custom build image.

Procedure

Starting with an empty directory, create a file named Dockerfile with the following content:

FROM registry.redhat.io/rhel8/buildah

# In this example, `/tmp/build` contains the inputs that build when this
# custom builder image is run. Normally the custom builder image fetches
# this content from some location at build time, by using git clone as an example.
ADD dockerfile.sample /tmp/input/Dockerfile
ADD build.sh /usr/bin
RUN chmod a+x /usr/bin/build.sh

# /usr/bin/build.sh contains the actual custom build logic that will be run when
# this custom builder image is run.
ENTRYPOINT ["/usr/bin/build.sh"]

In the same directory, create a file named dockerfile.sample. This file is included in the custom build image and defines the image that is produced by the custom build:

FROM registry.access.redhat.com/ubi8/ubi
RUN touch /tmp/build

In the same directory, create a file named build.sh. This file contains the logic that is run when the custom build runs:

#!/bin/sh
# Note that in this case the build inputs are part of the custom builder image, but normally this
# is retrieved from an external source.
cd /tmp/input

# OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom
# build framework
TAG="${OUTPUT_REGISTRY}/${OUTPUT_IMAGE}"

# performs the build of the new image defined by dockerfile.sample
buildah --storage-driver vfs bud --isolation chroot -t ${TAG} .

# buildah requires a slight modification to the push secret provided by the service
# account to use it for pushing the image
cp /var/run/secrets/openshift.io/push/.dockercfg /tmp
(echo "{ \"auths\": " ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo "}") > /tmp/.dockercfg

# push the new image to the target for the build
buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg ${TAG}

2.6.3. Build custom builder image

You can use OpenShift Container Platform to build and push custom builder images to use in a custom strategy.

Prerequisites

Define all the inputs that will go into creating your new custom builder image.

Procedure

Define a BuildConfig object that will build your custom builder image:

$ oc new-build --binary --strategy=docker --name custom-builder-image

From the directory in which you created your custom build image, run the build:

$ oc start-build custom-builder-image --from-dir . -F

After the build completes, your new custom builder image is available in your project in an image stream tag that is named custom-builder-image:latest.

2.6.4. Use custom builder image

You can define a BuildConfig object that uses the custom strategy in conjunction with your custom builder image to execute your custom build logic.
Prerequisites Define all the required inputs for new custom builder image. Build your custom builder image. Procedure Create a file named buildconfig.yaml . This file defines the BuildConfig object that is created in your project and executed: kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest 1 Specify your project name. Create the BuildConfig : USD oc create -f buildconfig.yaml Create a file named imagestream.yaml . This file defines the image stream to which the build will push the image: kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {} Create the imagestream: USD oc create -f imagestream.yaml Run your custom build: USD oc start-build sample-custom-build -F When the build runs, it launches a pod running the custom builder image that was built earlier. The pod runs the build.sh logic that is defined as the entrypoint for the custom builder image. The build.sh logic invokes Buildah to build the dockerfile.sample that was embedded in the custom builder image, and then uses Buildah to push the new image to the sample-custom image stream . 2.7. Performing basic builds The following sections provide instructions for basic build operations including starting and canceling builds, deleting BuildConfigs, viewing build details, and accessing build logs. 2.7.1. Starting a build You can manually start a new build from an existing build configuration in your current project. Procedure To manually start a build, enter the following command: USD oc start-build <buildconfig_name> 2.7.1.1. Re-running a build You can manually re-run a build using the --from-build flag. Procedure To manually re-run a build, enter the following command: USD oc start-build --from-build=<build_name> 2.7.1.2. Streaming build logs You can specify the --follow flag to stream the build's logs in stdout . Procedure To manually stream a build's logs in stdout , enter the following command: USD oc start-build <buildconfig_name> --follow 2.7.1.3. Setting environment variables when starting a build You can specify the --env flag to set any desired environment variable for the build. Procedure To specify a desired environment variable, enter the following command: USD oc start-build <buildconfig_name> --env=<key>=<value> 2.7.1.4. Starting a build with source Rather than relying on a Git source pull or a Dockerfile for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of pre-built binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command: Option Description --from-dir=<directory> Specifies a directory that will be archived and used as a binary input for the build. --from-file=<file> Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided. --from-repo=<local_source_repo> Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build. 
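For example, the following commands start binary builds from a local directory and from a single file, respectively. The build configuration name and the paths are placeholders, not values from this document:

$ oc start-build <buildconfig_name> --from-dir=./build-artifacts

$ oc start-build <buildconfig_name> --from-file=./app.jar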
When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings.

Note
Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration.

Procedure

Start a build from a source using the following command to send the contents of a local Git repository as an archive from the tag v2:

$ oc start-build hello-world --from-repo=../hello-world --commit=v2

2.7.2. Canceling a build

You can cancel a build using the web console, or with the following CLI command.

Procedure

To manually cancel a build, enter the following command:

$ oc cancel-build <build_name>

2.7.2.1. Canceling multiple builds

You can cancel multiple builds with the following CLI command.

Procedure

To manually cancel multiple builds, enter the following command:

$ oc cancel-build <build1_name> <build2_name> <build3_name>

2.7.2.2. Canceling all builds

You can cancel all builds from the build configuration with the following CLI command.

Procedure

To cancel all builds, enter the following command:

$ oc cancel-build bc/<buildconfig_name>

2.7.2.3. Canceling all builds in a given state

You can cancel all builds in a given state, such as new or pending, while ignoring the builds in other states.

Procedure

To cancel all builds in a given state, enter the following command:

$ oc cancel-build bc/<buildconfig_name> --state=<state>

2.7.3. Deleting a BuildConfig

You can delete a BuildConfig using the following command.

Procedure

To delete a BuildConfig, enter the following command:

$ oc delete bc <BuildConfigName>

This also deletes all builds that were instantiated from this BuildConfig. To delete a BuildConfig and keep the builds instantiated from the BuildConfig, specify the --cascade=false flag when you enter the following command:

$ oc delete --cascade=false bc <BuildConfigName>

2.7.4. Viewing build details

You can view build details with the web console or by using the oc describe CLI command. This displays information including:

The build source.
The build strategy.
The output destination.
Digest of the image in the destination registry.
How the build was created.

If the build uses the Docker or Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message.

Procedure

To view build details, enter the following command:

$ oc describe build <build_name>

2.7.5. Accessing build logs

You can access build logs using the web console or the CLI.

Procedure

To stream the logs using the build directly, enter the following command:

$ oc logs -f build/<build_name>

2.7.5.1. Accessing BuildConfig logs

You can access BuildConfig logs using the web console or the CLI.

Procedure

To stream the logs of the latest build for a BuildConfig, enter the following command:

$ oc logs -f bc/<buildconfig_name>

2.7.5.2. Accessing BuildConfig logs for a given version build

You can access logs for a given version build for a BuildConfig using the web console or the CLI.

Procedure

To stream the logs for a given version build for a BuildConfig, enter the following command:

$ oc logs --version=<number> bc/<buildconfig_name>

2.7.5.3. Enabling log verbosity

You can enable a more verbose output by passing the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig.
Note An administrator can set the default build verbosity for the entire OpenShift Container Platform instance by configuring env/BUILD_LOGLEVEL . This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig . You can specify a higher priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build . Available log levels for source builds are as follows: Level 0 Produces output from containers running the assemble script and all encountered errors. This is the default. Level 1 Produces basic information about the executed process. Level 2 Produces very detailed information about the executed process. Level 3 Produces very detailed information about the executed process, and a listing of the archive contents. Level 4 Currently produces the same information as level 3. Level 5 Produces everything mentioned on levels and additionally provides docker push messages. Procedure To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig : sourceStrategy: ... env: - name: "BUILD_LOGLEVEL" value: "2" 1 1 Adjust this value to the desired log level. 2.8. Triggering and modifying builds The following sections outline how to trigger builds and modify builds using build hooks. 2.8.1. Build triggers When defining a BuildConfig , you can define triggers to control the circumstances in which the BuildConfig should be run. The following build triggers are available: Webhook Image change Configuration change 2.8.1.1. Webhook triggers Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Container Platform API endpoint. You can define these triggers using GitHub, GitLab, Bitbucket, or Generic webhooks. Currently, OpenShift Container Platform webhooks only support the analogous versions of the push event for each of the Git-based Source Code Management (SCM) systems. All other event types are ignored. When the push events are processed, the OpenShift Container Platform control plane host (also known as the master host) confirms if the branch reference inside the event matches the branch reference in the corresponding BuildConfig . If so, it then checks out the exact commit reference noted in the webhook event on the OpenShift Container Platform build. If they do not match, no build is triggered. Note oc new-app and oc new-build create GitHub and Generic webhook triggers automatically, but any other needed webhook triggers must be added manually. You can manually add triggers by setting triggers. For all webhooks, you must define a secret with a key named WebHookSecretKey and the value being the value to be supplied when invoking the webhook. The webhook definition must then reference the secret. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The value of the key is compared to the secret provided during the webhook invocation. For example here is a GitHub webhook with a reference to a secret named mysecret : type: "GitHub" github: secretReference: name: "mysecret" The secret is then defined as follows. Note that the value of the secret is base64 encoded as is required for any data field of a Secret object. - kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx 2.8.1.1.1. Using GitHub webhooks GitHub webhooks handle the call made by GitHub when a repository is updated. 
When defining the trigger, you must specify a secret, which is part of the URL you supply to GitHub when configuring the webhook. Example GitHub webhook definition: type: "GitHub" github: secretReference: name: "mysecret" Note The secret used in the webhook trigger configuration is not the same as secret field you encounter when configuring webhook in GitHub UI. The former is to make the webhook URL unique and hard to predict, the latter is an optional string field used to create HMAC hex digest of the body, which is sent as an X-Hub-Signature header. The payload URL is returned as the GitHub Webhook URL by the oc describe command (see Displaying Webhook URLs), and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Prerequisites Create a BuildConfig from a GitHub repository. Procedure To configure a GitHub Webhook: After creating a BuildConfig from a GitHub repository, run: USD oc describe bc/<name-of-your-BuildConfig> This generates a webhook GitHub URL that looks like: Example output <https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Cut and paste this URL into GitHub, from the GitHub web console. In your GitHub repository, select Add Webhook from Settings Webhooks . Paste the URL output into the Payload URL field. Change the Content Type from GitHub's default application/x-www-form-urlencoded to application/json . Click Add webhook . You should see a message from GitHub stating that your webhook was successfully configured. Now, when you push a change to your GitHub repository, a new build automatically starts, and upon a successful build a new deployment starts. Note Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig and trigger it by your Gogs server as well. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitHub-Event: push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github The -k argument is only necessary if your API server does not have a properly signed certificate. Additional resources Gogs 2.8.1.1.2. Using GitLab webhooks GitLab webhooks handle the call made by GitLab when a repository is updated. As with the GitHub triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "GitLab" gitlab: secretReference: name: "mysecret" The payload URL is returned as the GitLab Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab Procedure To configure a GitLab Webhook: Describe the BuildConfig to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the GitLab setup instructions to paste the webhook URL into your GitLab repository settings. 
Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitLab-Event: Push Hook" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab The -k argument is only necessary if your API server does not have a properly signed certificate. 2.8.1.1.3. Using Bitbucket webhooks Bitbucket webhooks handle the call made by Bitbucket when a repository is updated. Similar to the triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "Bitbucket" bitbucket: secretReference: name: "mysecret" The payload URL is returned as the Bitbucket Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket Procedure To configure a Bitbucket Webhook: Describe the 'BuildConfig' to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the Bitbucket setup instructions to paste the webhook URL into your Bitbucket repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-Event-Key: repo:push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket The -k argument is only necessary if your API server does not have a properly signed certificate. 2.8.1.1.4. Using generic webhooks Generic webhooks are invoked from any system capable of making a web request. As with the other webhooks, you must specify a secret, which is part of the URL that the caller must use to trigger the build. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig : type: "Generic" generic: secretReference: name: "mysecret" allowEnv: true 1 1 Set to true to allow a generic webhook to pass in environment variables. Procedure To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The caller must invoke the webhook as a POST operation. To invoke the webhook manually you can use curl : USD curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The HTTP verb must be set to POST . The insecure -k flag is specified to ignore certificate validation. This second flag is not necessary if your cluster has properly signed certificates. 
The endpoint can accept an optional payload with the following format: git: uri: "<url to git repository>" ref: "<optional git reference>" commit: "<commit hash identifying a specific git commit>" author: name: "<author name>" email: "<author e-mail>" committer: name: "<committer name>" email: "<committer e-mail>" message: "<commit message>" env: 1 - name: "<variable name>" value: "<variable value>" 1 Similar to the BuildConfig environment variables, the environment variables defined here are made available to your build. If these variables collide with the BuildConfig environment variables, these variables take precedence. By default, environment variables passed by webhook are ignored. Set the allowEnv field to true on the webhook definition to enable this behavior. To pass this payload using curl , define it in a file named payload_file.yaml and run: USD curl -H "Content-Type: application/yaml" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The arguments are the same as the example with the addition of a header and a payload. The -H argument sets the Content-Type header to application/yaml or application/json depending on your payload format. The --data-binary argument is used to send a binary payload with newlines intact with the POST request. Note OpenShift Container Platform permits builds to be triggered by the generic webhook even if an invalid request payload is presented, for example, invalid content type, unparsable or invalid content, and so on. This behavior is maintained for backwards compatibility. If an invalid request payload is presented, OpenShift Container Platform returns a warning in JSON format as part of its HTTP 200 OK response. 2.8.1.1.5. Displaying webhook URLs You can use the following command to display webhook URLs associated with a build configuration. If the command does not display any webhook URLs, then no webhook trigger is defined for that build configuration. Procedure To display any webhook URLs associated with a BuildConfig , run: USD oc describe bc <name> 2.8.1.2. Using image change triggers Image change triggers allow your build to be automatically invoked when a new version of an upstream image is available. For example, if a build is based on top of a RHEL image, then you can trigger that build to run any time the RHEL image changes. As a result, the application image is always running on the latest RHEL base image. Note Image streams that point to container images in v1 container registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 container registries. Procedure Configuring an image change trigger requires the following actions: Define an ImageStream that points to the upstream image you want to trigger on: kind: "ImageStream" apiVersion: "v1" metadata: name: "ruby-20-centos7" This defines the image stream that is tied to a container image repository located at <system-registry>_/ <namespace> /ruby-20-centos7 . The <system-registry> is defined as a service with the name docker-registry running in OpenShift Container Platform. 
If an image stream is the base image for the build, set the from field in the build strategy to point to the ImageStream : strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace. Define a build with one or more triggers that point to ImageStreams : type: "ImageChange" 1 imageChange: {} type: "ImageChange" 2 imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" 1 An image change trigger that monitors the ImageStream and Tag as defined by the build strategy's from field. The imageChange object here must be empty. 2 An image change trigger that monitors an arbitrary imagestream. The imageChange part in this case must include a from field that references the ImageStreamTag to monitor. When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable docker tag that points to the latest image corresponding to that tag. This new image reference is used by the strategy when it executes for the build. For other image change triggers that do not reference the strategy image stream, a new build is started, but the build strategy is not updated with a unique image reference. Since this example has an image change trigger for the strategy, the resulting build is: strategy: sourceStrategy: from: kind: "DockerImage" name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>" This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs. You can pause an image change trigger to allow multiple changes on the referenced image stream before a build is started. You can also set the paused attribute to true when initially adding an ImageChangeTrigger to a BuildConfig to prevent a build from being immediately triggered. type: "ImageChange" imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" paused: true In addition to setting the image field for all Strategy types, for custom builds, the OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist then it is updated with the immutable image reference. If a build is triggered due to a webhook trigger or manual request, the build that is created uses the <immutableid> resolved from the ImageStream referenced by the Strategy . This ensures that builds are performed using consistent image tags for ease of reproduction. Additional resources v1 container registries 2.8.1.3. Configuration change triggers A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig is created. The following is an example trigger definition YAML within the BuildConfig : type: "ConfigChange" Note Configuration change triggers currently only work when creating a new BuildConfig . In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig is updated. 2.8.1.3.1. Setting triggers manually Triggers can be added to and removed from build configurations with oc set triggers . 
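Before changing anything, you can print the triggers currently defined on a build configuration by running oc set triggers without any modification flags; a minimal sketch, assuming a build configuration named mybc :

USD oc set triggers bc/mybc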
Procedure To set a GitHub webhook trigger on a build configuration, use: USD oc set triggers bc <name> --from-github To set an image change trigger, use: USD oc set triggers bc <name> --from-image='<image>' To remove a trigger, add --remove : USD oc set triggers bc <name> --from-bitbucket --remove Note When a webhook trigger already exists, adding it again regenerates the webhook secret. For more information, consult the help documentation by running: USD oc set triggers --help 2.8.2. Build hooks Build hooks allow behavior to be injected into the build process. The postCommit field of a BuildConfig object runs commands inside a temporary container that is running the build output image. The hook is run immediately after the last layer of the image has been committed and before the image is pushed to a registry. The current working directory is set to the image's WORKDIR , which is the default working directory of the container image. For most images, this is where the source code is located. The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs. Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0 , the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log contains the output of the test runner, which can be used to identify failed tests. The postCommit hook is not limited to running tests, but can be used for other commands as well. Since it runs in a temporary container, changes made by the hook do not persist, meaning that running the hook cannot affect the final image. This behavior allows for, among other uses, the installation and usage of test dependencies that are automatically discarded and are not present in the final image. 2.8.2.1. Configuring post commit build hooks There are different ways to configure the post build hook. All forms in the following examples are equivalent and run bundle exec rake test --verbose . Procedure Shell script: postCommit: script: "bundle exec rake test --verbose" The script value is a shell script to be run with /bin/sh -ic . Use this when a shell script is appropriate to execute the build hook. For example, for running unit tests as above. To control the image entry point, or if the image does not have /bin/sh , use command and/or args . Note The additional -i flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release. Command as the image entry point: postCommit: command: ["/bin/bash", "-c", "bundle exec rake test --verbose"] In this form, command is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference . This is needed if the image does not have /bin/sh , or if you do not want to use a shell. In all other cases, using script might be more convenient. Command with arguments: postCommit: command: ["bundle", "exec", "rake", "test"] args: ["--verbose"] This form is equivalent to appending the arguments to command . Note Providing both script and command simultaneously creates an invalid build hook.
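As a sketch of where the hook lives in a complete build configuration, reusing the sample-build name from the examples in this document, postCommit is a top-level field of the BuildConfig spec:

apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  postCommit:
    script: "bundle exec rake test --verbose"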
2.8.2.2. Using the CLI to set post commit build hooks The oc set build-hook command can be used to set the build hook for a build configuration. Procedure To set a command as the post-commit build hook: USD oc set build-hook bc/mybc \ --post-commit \ --command \ -- bundle exec rake test --verbose To set a script as the post-commit build hook: USD oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose" 2.9. Performing advanced builds The following sections provide instructions for advanced build operations including setting build resources and maximum duration, assigning builds to nodes, chaining builds, build pruning, and build run policies. 2.9.1. Setting build resources By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited. Procedure You can limit resource use in two ways: Limit resource use by specifying resource limits in the default container limits of a project. Limit resource use by specifying resource limits as part of the build configuration. In the following example, each of the resources , cpu , and memory parameters is optional: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: resources: limits: cpu: "100m" 1 memory: "256Mi" 2 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : resources: requests: 1 cpu: "100m" memory: "256Mi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process. Otherwise, build pod creation will fail, citing a failure to satisfy quota. 2.9.2. Setting maximum duration When defining a BuildConfig object, you can define its maximum duration by setting the completionDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Container Platform. Procedure To set maximum duration, specify completionDeadlineSeconds in your BuildConfig . The following example shows the part of a BuildConfig specifying the completionDeadlineSeconds field for 30 minutes: spec: completionDeadlineSeconds: 1800 Note This setting is not supported with the Pipeline Strategy option. 2.9.3. Assigning builds to specific nodes Builds can be targeted to run on specific nodes by specifying labels in the nodeSelector field of a build configuration. The nodeSelector value is a set of key-value pairs that are matched to Node labels when scheduling the build pod. The nodeSelector value can also be controlled by cluster-wide default and override values. Defaults will only be applied if the build configuration does not define any key-value pairs for the nodeSelector and also does not define an explicitly empty map value of nodeSelector:{} . Override values will replace values in the build configuration on a key by key basis.
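These cluster-wide default and override values come from the build.config.openshift.io/cluster resource described later in this document. As a minimal sketch, an override node selector is declared under buildOverrides (the key and value are illustrative):

spec:
  buildOverrides:
    nodeSelector:
      selectorkey: selectorvalue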
Note If the specified NodeSelector cannot be matched to a node with those labels, the build will stay in the Pending state indefinitely. Procedure Assign builds to run on specific nodes by assigning labels in the nodeSelector field of the BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: nodeSelector: 1 key1: value1 key2: value2 1 Builds associated with this build configuration will run only on nodes with the key1=value1 and key2=value2 labels. 2.9.4. Chained builds For compiled languages such as Go, C, C++, and Java, including the dependencies necessary for compilation in the application image might increase the size of the image or introduce vulnerabilities that can be exploited. To avoid these problems, two builds can be chained together. One build produces the compiled artifact, and a second build places that artifact in a separate image that runs the artifact. In the following example, a source-to-image (S2I) build is combined with a docker build to compile an artifact that is then placed in a separate runtime image. Note Although this example chains an S2I build and a docker build, the first build can use any strategy that produces an image containing the desired artifacts, and the second build can use any strategy that can consume input content from an image. The first build takes the application source and produces an image containing a WAR file. The image is pushed to the artifact-image image stream. The path of the output artifact depends on the assemble script of the S2I builder used. In this case, it is output to /wildfly/standalone/deployments/ROOT.war . apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: "master" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift The second build uses image source with a path to the WAR file inside the output image from the first build. An inline dockerfile copies that WAR file into a runtime image. apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: "." strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange 1 from specifies that the docker build should include the output of the image from the artifact-image image stream, which was the target of the first build. 2 paths specifies which paths from the target image to include in the current docker build. 3 The runtime image is used as the source image for the docker build. The result of this setup is that the output image of the second build does not have to contain any of the build tools that are needed to create the WAR file. Also, because the second build contains an image change trigger, whenever the first build is run and produces a new image with the binary artifact, the second build is automatically triggered to produce a runtime image that contains that artifact. Therefore, both builds behave as a single build with two stages. 2.9.5. Pruning builds By default, builds that have completed their lifecycle are persisted indefinitely.
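To see how many builds currently exist for your build configurations before deciding on limits, you can list them; a minimal sketch:

USD oc get builds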
You can limit the number of builds that are retained. Procedure Limit the number of builds that are retained by supplying a positive integer value for successfulBuildsHistoryLimit or failedBuildsHistoryLimit in your BuildConfig , for example: apiVersion: "v1" kind: "BuildConfig" metadata: name: "sample-build" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2 1 successfulBuildsHistoryLimit will retain up to two builds with a status of completed . 2 failedBuildsHistoryLimit will retain up to two builds with a status of failed , canceled , or error . Trigger build pruning by one of the following actions: Updating a build configuration. Waiting for a build to complete its lifecycle. Builds are sorted by their creation timestamp with the oldest builds being pruned first. Note Administrators can manually prune builds using the 'oc adm' object pruning command. 2.9.6. Build run policy The build run policy describes the order in which the builds created from the build configuration should run. This can be done by changing the value of the runPolicy field in the spec section of the Build specification. It is also possible to change the runPolicy value for existing build configurations, by: Changing Parallel to Serial or SerialLatestOnly and triggering a new build from this configuration causes the new build to wait until all parallel builds complete as the serial build can only run alone. Changing Serial to SerialLatestOnly and triggering a new build causes cancellation of all existing builds in queue, except the currently running build and the most recently created build. The newest build runs . 2.10. Using Red Hat subscriptions in builds Use the following sections to run entitled builds on OpenShift Container Platform. 2.10.1. Creating an image stream tag for the Red Hat Universal Base Image To use Red Hat subscriptions within a build, you create an image stream tag to reference the Universal Base Image (UBI). To make the UBI available in every project in the cluster, you add the image stream tag to the openshift namespace. Otherwise, to make it available in a specific project , you add the image stream tag to that project. The benefit of using image stream tags this way is that doing so grants access to the UBI based on the registry.redhat.io credentials in the install pull secret without exposing the pull secret to other users. This is more convenient than requiring each developer to install pull secrets with registry.redhat.io credentials in each project. Procedure To create an ImageStreamTag in the openshift namespace, so it is available to developers in all projects, enter: USD oc tag --source=docker registry.redhat.io/ubi7/ubi:latest ubi:latest -n openshift To create an ImageStreamTag in a single project, enter: USD oc tag --source=docker registry.redhat.io/ubi7/ubi:latest ubi:latest 2.10.2. Adding subscription entitlements as a build secret Builds that use Red Hat subscriptions to install content must include the entitlement keys as a build secret. Prerequisites You must have access to Red Hat entitlements through your subscription, and the entitlements must have separate public and private key files. 
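On a subscribed Red Hat Enterprise Linux host, the entitlement certificate and its key are conventionally stored under /etc/pki/entitlement . A minimal sketch for locating the {ID}.pem and {ID}-key.pem pair referenced in the procedure below (the path is the usual default and may differ on your system):

USD ls /etc/pki/entitlement/*.pem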
Tip When you perform an Entitlement Build using Red Hat Enterprise Linux (RHEL) 7, you must have the following instructions in your Dockerfile before you run any yum commands: RUN rm /etc/rhsm-host Procedure Create a secret containing your entitlements, ensuring that there are separate files containing the public and private keys: USD oc create secret generic etc-pki-entitlement --from-file /path/to/entitlement/{ID}.pem \ > --from-file /path/to/entitlement/{ID}-key.pem ... Add the secret as a build input in the build configuration: source: secrets: - secret: name: etc-pki-entitlement destinationDir: etc-pki-entitlement 2.10.3. Running builds with Subscription Manager 2.10.3.1. Docker builds using Subscription Manager Docker strategy builds can use the Subscription Manager to install subscription content. Prerequisites The entitlement keys, subscription manager configuration, and subscription manager certificate authority must be added as build inputs. Procedure Use the following as an example Dockerfile to install content with the Subscription Manager: FROM registry.redhat.io/rhel7:latest USER root # Copy entitlements COPY ./etc-pki-entitlement /etc/pki/entitlement # Copy subscription manager configurations COPY ./rhsm-conf /etc/rhsm COPY ./rhsm-ca /etc/rhsm/ca # Delete /etc/rhsm-host to use entitlements from the build container RUN rm /etc/rhsm-host && \ # Initialize /etc/yum.repos.d/redhat.repo # See https://access.redhat.com/solutions/1443553 yum repolist --disablerepo=* && \ subscription-manager repos --enable <enabled-repo> && \ yum -y update && \ yum -y install <rpms> && \ # Remove entitlements and Subscription Manager configs rm -rf /etc/pki/entitlement && \ rm -rf /etc/rhsm # OpenShift requires images to run as non-root by default USER 1001 ENTRYPOINT ["/bin/bash"] 2.10.4. Running builds with Red Hat Satellite subscriptions 2.10.4.1. Adding Red Hat Satellite configurations to builds Builds that use Red Hat Satellite to install content must provide appropriate configurations to obtain content from Satellite repositories. Prerequisites You must provide or create a yum -compatible repository configuration file that downloads content from your Satellite instance. Sample repository configuration [test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem Procedure Create a ConfigMap containing the Satellite repository configuration file: USD oc create configmap yum-repos-d --from-file /path/to/satellite.repo Add the Satellite repository configuration to the BuildConfig : source: configMaps: - configMap: name: yum-repos-d destinationDir: yum.repos.d 2.10.4.2. Docker builds using Red Hat Satellite subscriptions Docker strategy builds can use Red Hat Satellite repositories to install subscription content. Prerequisites The entitlement keys and Satellite repository configurations must be added as build inputs. 
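As a sketch, a build configuration that satisfies both prerequisites can declare the entitlement secret and the repository ConfigMap together under source , reusing the names from the earlier examples:

source:
  secrets:
  - secret:
      name: etc-pki-entitlement
      destinationDir: etc-pki-entitlement
  configMaps:
  - configMap:
      name: yum-repos-d
      destinationDir: yum.repos.d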
Procedure Use the following as an example Dockerfile to install content with Satellite: FROM registry.redhat.io/rhel7:latest USER root # Copy entitlements COPY ./etc-pki-entitlement /etc/pki/entitlement # Copy repository configuration COPY ./yum.repos.d /etc/yum.repos.d # Delete /etc/rhsm-host to use entitlements from the build container RUN sed -i".org" -e "s#^enabled=1#enabled=0#g" /etc/yum/pluginconf.d/subscription-manager.conf 1 #RUN cat /etc/yum/pluginconf.d/subscription-manager.conf RUN yum clean all #RUN yum-config-manager RUN rm /etc/rhsm-host && \ # yum repository info provided by Satellite yum -y update && \ yum -y install <rpms> && \ # Remove entitlements rm -rf /etc/pki/entitlement # OpenShift requires images to run as non-root by default USER 1001 ENTRYPOINT ["/bin/bash"] 1 If adding Satellite configurations to builds using enabled=1 fails, add RUN sed -i".org" -e "s#^enabled=1#enabled=0#g" /etc/yum/pluginconf.d/subscription-manager.conf to the Dockerfile. 2.10.5. Additional resources Managing image streams build strategy 2.11. Securing builds by strategy Builds in OpenShift Container Platform are run in privileged containers. Depending on the build strategy used, if you have privileges, you can run builds to escalate their permissions on the cluster and host nodes. As a security measure, limit who can run builds and which strategies they can use for those builds. Custom builds are inherently less safe than source builds, because they can execute any code within a privileged container, and are disabled by default. Grant docker build permissions with caution, because a vulnerability in the Dockerfile processing logic could result in privileges being granted on the host node. By default, all users that can create builds are granted permission to use the docker and Source-to-image (S2I) build strategies. Users with cluster administrator privileges can enable the custom build strategy, as referenced in the restricting build strategies to a user globally section. You can control who can build and which build strategies they can use by using an authorization policy. Each build strategy has a corresponding build subresource. A user must have permission to create a build and permission to create on the build strategy subresource to create builds using that strategy. Default roles are provided that grant the create permission on the build strategy subresource. Table 2.3. Build Strategy Subresources and Roles Strategy Subresource Role Docker builds/docker system:build-strategy-docker Source-to-Image builds/source system:build-strategy-source Custom builds/custom system:build-strategy-custom JenkinsPipeline builds/jenkinspipeline system:build-strategy-jenkinspipeline 2.11.1. Disabling access to a build strategy globally To prevent access to a particular build strategy globally, log in as a user with cluster administrator privileges, remove the corresponding role from the system:authenticated group, and apply the annotation rbac.authorization.kubernetes.io/autoupdate: "false" to protect it from changes between API restarts. The following example shows disabling the docker build strategy.
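Before removing access, you may want to check which users and groups can currently create builds with a given strategy; a minimal sketch using the docker build subresource:

USD oc adm policy who-can create builds/docker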
Procedure Apply the rbac.authorization.kubernetes.io/autoupdate annotation: USD oc edit clusterrolebinding system:build-strategy-docker-binding Example output apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" 1 creationTimestamp: 2018-08-10T01:24:14Z name: system:build-strategy-docker-binding resourceVersion: "225" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Abuild-strategy-docker-binding uid: 17b1f3d4-9c3c-11e8-be62-0800277d20bf roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:build-strategy-docker subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:authenticated 1 Change the rbac.authorization.kubernetes.io/autoupdate annotation's value to "false" . Remove the role: USD oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated Ensure the build strategy subresources are also removed from these roles: USD oc edit clusterrole admin USD oc edit clusterrole edit For each role, specify the subresources that correspond to the resource of the strategy to disable. Disable the docker Build Strategy for admin : kind: ClusterRole metadata: name: admin ... - apiGroups: - "" - build.openshift.io resources: - buildconfigs - buildconfigs/webhooks - builds/custom 1 - builds/source verbs: - create - delete - deletecollection - get - list - patch - update - watch ... 1 Add builds/custom and builds/source to disable docker builds globally for users with the admin role. 2.11.2. Restricting build strategies to users globally You can allow a set of specific users to create builds with a particular strategy. Prerequisites Disable global access to the build strategy. Procedure Assign the role that corresponds to the build strategy to a specific user. For example, to add the system:build-strategy-docker cluster role to the user devuser : USD oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser Warning Granting a user access at the cluster level to the builds/docker subresource means that the user can create builds with the docker strategy in any project in which they can create builds. 2.11.3. Restricting build strategies to a user within a project Similar to granting the build strategy role to a user globally, you can allow a set of specific users within a project to create builds with a particular strategy. Prerequisites Disable global access to the build strategy. Procedure Assign the role that corresponds to the build strategy to a specific user within a project. For example, to add the system:build-strategy-docker role within the project devproject to the user devuser : USD oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject 2.12. Build configuration resources Use the following procedure to configure build settings. 2.12.1. Build controller configuration parameters The build.config.openshift.io/cluster resource offers the following configuration parameters. Parameter Description Build Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . spec : Holds user-settable values for the build controller configuration. buildDefaults Controls the default information for builds. defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 
You can override values by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the BuildConfig strategy. gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . Values that are not set here are inherited from DefaultProxy. env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . resources : Defines resource requirements to execute the build. ImageLabel name : Defines the name of the label. It must have non-zero length. buildOverrides Controls override settings for builds. imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. nodeSelector : A selector which must be true for the build pod to fit on a node. tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. BuildList items : Standard object's metadata. 2.12.2. Configuring build settings You can configure build settings by editing the build.config.openshift.io/cluster resource. Procedure Edit the build.config.openshift.io/cluster resource: USD oc edit build.config.openshift.io/cluster The following is an example build.config.openshift.io/cluster resource: apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 2 name: cluster resourceVersion: "107233" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists 1 Build : Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . 2 buildDefaults : Controls the default information for builds. 3 defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 4 env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. 5 gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any Proxy settings for all Git commands, such as git clone . 6 imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . 7 resources : Defines resource requirements to execute the build. 8 buildOverrides : Controls override settings for builds. 9 imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. 
10 nodeSelector : A selector which must be true for the build pod to fit on a node. 11 tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. 2.13. Troubleshooting builds Use the following to troubleshoot build issues. 2.13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: USD oc describe quota 2.13.2. Service certificate generation failure If service certificate generation fails: Issue Service certificate generation fails, and the service's service.beta.openshift.io/serving-cert-generation-error annotation contains: Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret, and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num : USD oc delete secret <secret_name> USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing the annotation has a - after the annotation name to be removed. 2.14. Setting up additional trusted certificate authorities for builds Use the following sections to set up additional certificate authorities (CA) to be trusted by builds when pulling images from an image registry. The procedure requires a cluster administrator to create a ConfigMap and add additional CAs as keys in the ConfigMap . The ConfigMap must be created in the openshift-config namespace. domain is the key in the ConfigMap and value is the PEM-encoded certificate. Each CA must be associated with a domain. The domain format is hostname[..port] . The ConfigMap name must be set in the image.config.openshift.io/cluster cluster scoped configuration resource's spec.additionalTrustedCA field. 2.14.1. Adding certificate authorities to the cluster You can add certificate authorities (CA) to the cluster for use when pushing and pulling images with the following procedure. Prerequisites You must have cluster administrator privileges. You must have access to the public certificates of the registry, usually a hostname/ca.crt file located in the /etc/docker/certs.d/ directory. Procedure Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the hostname of the registry in the hostname[..port] format: USD oc create configmap registry-cas -n openshift-config \ --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \ --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt Update the cluster image configuration: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge 2.14.2. Additional resources Create a ConfigMap Secrets and ConfigMaps Configuring a custom PKI
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4",
"source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar",
"oc secrets link builder dockerhub",
"source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3",
"source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'",
"kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"",
"oc set build-secret --source bc/sample-build basicsecret",
"oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>",
"[http] sslVerify=false",
"cat .gitconfig",
"[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt",
"oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt --from-file=client.key=/var/run/secrets/openshift.io/source/client.key",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth",
"ssh-keygen -t ed25519 -C \"[email protected]\"",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth",
"cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt",
"oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth",
"oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: dmFsdWUtMQ0K 3 password: dmFsdWUtMg0KDQo= stringData: 4 hostname: myapp.mydomain.com 5",
"oc create -f <filename>",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: dXNlci1uYW1l password: cGFzc3dvcmQ=",
"apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <your_yaml_file>.yaml",
"oc logs secret-example-pod",
"oc delete pod secret-example-pod",
"apiVersion: v1 kind: Secret metadata: name: test-secret data: username: dmFsdWUtMQ0K 1 password: dmFsdWUtMQ0KDQo= 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username",
"oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>",
"oc create secret generic secret-mvn --from-file=id_rsa=<path/to/.ssh/id_rsa>",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"",
"FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]",
"#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar",
"#!/bin/sh exec java -jar app.jar",
"FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]",
"auths: https://index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"",
"oc set build-secret --push bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"",
"oc set build-secret --pull bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret",
"spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"",
"spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"",
"spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"",
"strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"",
"strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile",
"dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"dockerStrategy: buildArgs: - name: \"foo\" value: \"bar\"",
"strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1",
"sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"",
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd",
"#!/bin/bash run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"",
"strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"",
"customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"",
"oc set env <enter_variables>",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1",
"jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"",
"oc project <project_name>",
"oc new-app jenkins-ephemeral 1",
"kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline",
"def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }",
"oc create -f nodejs-sample-pipeline.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml",
"oc start-build nodejs-sample-pipeline",
"FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]",
"FROM registry.access.redhat.com/ubi8/ubi RUN touch /tmp/build",
"#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}",
"oc new-build --binary --strategy=docker --name custom-builder-image",
"oc start-build custom-builder-image --from-dir . -F",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest",
"oc create -f buildconfig.yaml",
"kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}",
"oc create -f imagestream.yaml",
"oc start-build sample-custom-build -F",
"oc start-build <buildconfig_name>",
"oc start-build --from-build=<build_name>",
"oc start-build <buildconfig_name> --follow",
"oc start-build <buildconfig_name> --env=<key>=<value>",
"oc start-build hello-world --from-repo=../hello-world --commit=v2",
"oc cancel-build <build_name>",
"oc cancel-build <build1_name> <build2_name> <build3_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc delete bc <BuildConfigName>",
"oc delete --cascade=false bc <BuildConfigName>",
"oc describe build <build_name>",
"oc describe build <build_name>",
"oc logs -f bc/<buildconfig_name>",
"oc logs --version=<number> bc/<buildconfig_name>",
"sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1",
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx",
"type: \"GitHub\" github: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"oc describe bc/<name-of-your-BuildConfig>",
"<https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github",
"type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"oc describe bc <name>",
"curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab",
"type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"oc describe bc <name>",
"curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket",
"type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1",
"https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"",
"curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic",
"oc describe bc <name>",
"kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"",
"strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"",
"type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"",
"type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true",
"type: \"ConfigChange\"",
"oc set triggers bc <name> --from-github",
"oc set triggers bc <name> --from-image='<image>'",
"oc set triggers bc <name> --from-bitbucket --remove",
"oc set triggers --help",
"postCommit: script: \"bundle exec rake test --verbose\"",
"postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]",
"postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]",
"oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose",
"oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2",
"resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"",
"spec: completionDeadlineSeconds: 1800",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange",
"apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2",
"oc tag --source=docker registry.redhat.io/ubi7/ubi:latest ubi:latest -n openshift",
"oc tag --source=docker registry.redhat.io/ubi7/ubi:latest ubi:latest",
"RUN rm /etc/rhsm-host",
"oc create secret generic etc-pki-entitlement --from-file /path/to/entitlement/{ID}.pem > --from-file /path/to/entitlement/{ID}-key.pem",
"source: secrets: - secret: name: etc-pki-entitlement destinationDir: etc-pki-entitlement",
"FROM registry.redhat.io/rhel7:latest USER root Copy entitlements COPY ./etc-pki-entitlement /etc/pki/entitlement Copy subscription manager configurations COPY ./rhsm-conf /etc/rhsm COPY ./rhsm-ca /etc/rhsm/ca Delete /etc/rhsm-host to use entitlements from the build container RUN rm /etc/rhsm-host && # Initialize /etc/yum.repos.d/redhat.repo # See https://access.redhat.com/solutions/1443553 yum repolist --disablerepo=* && subscription-manager repos --enable <enabled-repo> && yum -y update && yum -y install <rpms> && # Remove entitlements and Subscription Manager configs rm -rf /etc/pki/entitlement && rm -rf /etc/rhsm OpenShift requires images to run as non-root by default USER 1001 ENTRYPOINT [\"/bin/bash\"]",
"[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem",
"oc create configmap yum-repos-d --from-file /path/to/satellite.repo",
"source: configMaps: - configMap: name: yum-repos-d destinationDir: yum.repos.d",
"FROM registry.redhat.io/rhel7:latest USER root Copy entitlements COPY ./etc-pki-entitlement /etc/pki/entitlement Copy repository configuration COPY ./yum.repos.d /etc/yum.repos.d Delete /etc/rhsm-host to use entitlements from the build container RUN sed -i\".org\" -e \"s#^enabled=1#enabled=0#g\" /etc/yum/pluginconf.d/subscription-manager.conf 1 #RUN cat /etc/yum/pluginconf.d/subscription-manager.conf RUN yum clean all #RUN yum-config-manager RUN rm /etc/rhsm-host && # yum repository info provided by Satellite yum -y update && yum -y install <rpms> && # Remove entitlements rm -rf /etc/pki/entitlement OpenShift requires images to run as non-root by default USER 1001 ENTRYPOINT [\"/bin/bash\"]",
"oc edit clusterrolebinding system:build-strategy-docker-binding",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\" 1 creationTimestamp: 2018-08-10T01:24:14Z name: system:build-strategy-docker-binding resourceVersion: \"225\" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Abuild-strategy-docker-binding uid: 17b1f3d4-9c3c-11e8-be62-0800277d20bf roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:build-strategy-docker subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:authenticated",
"oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated",
"oc edit clusterrole admin",
"oc edit clusterrole edit",
"kind: ClusterRole metadata: name: admin - apiGroups: - \"\" - build.openshift.io resources: - buildconfigs - buildconfigs/webhooks - builds/custom 1 - builds/source verbs: - create - delete - deletecollection - get - list - patch - update - watch",
"oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser",
"oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject",
"oc edit build.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists",
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cicd/builds |
Chapter 1. Setting up and configuring a BIND DNS server | Chapter 1. Setting up and configuring a BIND DNS server BIND is a feature-rich DNS server that is fully compliant with the Internet Engineering Task Force (IETF) DNS standards and draft standards. For example, administrators frequently use BIND as: Caching DNS server in the local network Authoritative DNS server for zones Secondary server to provide high availability for zones 1.1. Considerations about protecting BIND with SELinux or running it in a change-root environment To secure a BIND installation, you can: Run the named service without a change-root environment. In this case, SELinux in enforcing mode prevents exploitation of known BIND security vulnerabilities. By default, Red Hat Enterprise Linux uses SELinux in enforcing mode. Important Running BIND on RHEL with SELinux in enforcing mode is more secure than running BIND in a change-root environment. Run the named-chroot service in a change-root environment. Using the change-root feature, administrators can define that the root directory of a process and its sub-processes is different to the / directory. When you start the named-chroot service, BIND switches its root directory to /var/named/chroot/ . As a consequence, the service uses mount --bind commands to make the files and directories listed in /etc/named-chroot.files available in /var/named/chroot/ , and the process has no access to files outside of /var/named/chroot/ . If you decide to use BIND: In normal mode, use the named service. In a change-root environment, use the named-chroot service. This requires that you install, additionally, the named-chroot package. 1.2. Configuring BIND as a caching DNS server By default, the BIND DNS server resolves and caches successful and failed lookups. The service then answers requests to the same records from its cache. This significantly improves the speed of DNS lookups. Prerequisites The IP address of the server is static. Procedure Install the bind and bind-utils packages: If you want to run BIND in a change-root environment install the bind-chroot package: Note that running BIND on a host with SELinux in enforcing mode, which is default, is more secure. Edit the /etc/named.conf file, and make the following changes in the options statement: Update the listen-on and listen-on-v6 statements to specify on which IPv4 and IPv6 interfaces BIND should listen: Update the allow-query statement to configure from which IP addresses and ranges clients can query this DNS server: Add an allow-recursion statement to define from which IP addresses and ranges BIND accepts recursive queries: Warning Do not allow recursion on public IP addresses of the server. Otherwise, the server can become part of large-scale DNS amplification attacks. By default, BIND resolves queries by recursively querying from the root servers to an authoritative DNS server. Alternatively, you can configure BIND to forward queries to other DNS servers, such as the ones of your provider. In this case, add a forwarders statement with the list of IP addresses of the DNS servers that BIND should forward queries to: As a fall-back behavior, BIND resolves queries recursively if the forwarder servers do not respond. To disable this behavior, add a forward only; statement. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. 
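For reference, the individual statements above can be combined into a single options block. The following is only a sketch that reuses the placeholder addresses from this chapter; replace them with the addresses of your server and networks, and keep the other settings from the default /etc/named.conf, such as the directory statement, unchanged:

    options {
        listen-on port 53 { 127.0.0.1; 192.0.2.1; };
        listen-on-v6 port 53 { ::1; 2001:db8:1::1; };
        allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; };
        allow-recursion { localhost; 192.0.2.0/24; 2001:db8:1::/64; };
        // Optional: forward queries to your provider's DNS servers instead of
        // resolving them recursively from the root servers
        forwarders { 198.51.100.1; 203.0.113.5; };
    };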
Update the firewalld rules to allow incoming DNS traffic: Start and enable BIND: If you want to run BIND in a change-root environment, use the systemctl enable --now named-chroot command to enable and start the service. Verification Use the newly set up DNS server to resolve a domain: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. After querying a record for the first time, BIND adds the entry to its cache. Repeat the query: Because of the cached entry, further requests for the same record are significantly faster until the entry expires. steps Configure the clients in your network to use this DNS server. If a DHCP server provides the DNS server setting to the clients, update the DHCP server's configuration accordingly. Additional resources Considerations about protecting BIND with SELinux or running it in a change-root environment named.conf(5) man page on your system /usr/share/doc/bind/sample/etc/named.conf 1.3. Configuring logging on a BIND DNS server The configuration in the default /etc/named.conf file, as provided by the bind package, uses the default_debug channel and logs messages to the /var/named/data/named.run file. The default_debug channel only logs entries when the server's debug level is non-zero. Using different channels and categories, you can configure BIND to write different events with a defined severity to separate files. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file, and add category and channel phrases to the logging statement, for example: With this example configuration, BIND logs messages related to zone transfers to /var/named/log/transfer.log . BIND creates up to 10 versions of the log file and rotates them if they reach a maximum size of 50 MB. The category phrase defines to which channels BIND sends messages of a category. The channel phrase defines the destination of log messages including the number of versions, the maximum file size, and the severity level BIND should log to a channel. Additional settings, such as enabling logging the time stamp, category, and severity of an event are optional, but useful for debugging purposes. Create the log directory if it does not exist, and grant write permissions to the named user on this directory: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Restart BIND: If you run BIND in a change-root environment, use the systemctl restart named-chroot command to restart the service. Verification Display the content of the log file: Additional resources named.conf(5) man page on your system 1.4. Writing BIND ACLs Controlling access to certain features of BIND can prevent unauthorized access and attacks, such as denial of service (DoS). BIND access control list ( acl ) statements are lists of IP addresses and ranges. Each ACL has a nickname that you can use in several statements, such as allow-query , to refer to the specified IP addresses and ranges. Warning BIND uses only the first matching entry in an ACL. For example, if you define an ACL { 192.0.2/24; !192.0.2.1; } and the host with IP address 192.0.2.1 connects, access is granted even if the second entry excludes this address. BIND has the following built-in ACLs: none : Matches no hosts. any : Matches all hosts. 
localhost : Matches the loopback addresses 127.0.0.1 and ::1 , as well as the IP addresses of all interfaces on the server that runs BIND. localnets : Matches the loopback addresses 127.0.0.1 and ::1 , as well as all subnets the server that runs BIND is directly connected to. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file and make the following changes: Add acl statements to the file. For example, to create an ACL named internal-networks for 127.0.0.1 , 192.0.2.0/24 , and 2001:db8:1::/64 , enter: Use the ACL's nickname in statements that support them, for example: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Execute an action that triggers a feature which uses the configured ACL. For example, the ACL in this procedure allows only recursive queries from the defined IP addresses. In this case, enter the following command on a host that is not within the ACL's definition to attempt resolving an external domain: If the command returns no output, BIND denied access, and the ACL works. For a verbose output on the client, use the command without +short option: 1.5. Configuring zones on a BIND DNS server A DNS zone is a database with resource records for a specific sub-tree in the domain space. For example, if you are responsible for the example.com domain, you can set up a zone for it in BIND. As a result, clients can, resolve www.example.com to the IP address configured in this zone. 1.5.1. The SOA record in zone files The start of authority (SOA) record is a required record in a DNS zone. This record is important, for example, if multiple DNS servers are authoritative for a zone but also to DNS resolvers. A SOA record in BIND has the following syntax: For better readability, administrators typically split the record in zone files into multiple lines with comments that start with a semicolon ( ; ). Note that, if you split a SOA record, parentheses keep the record together: Important Note the trailing dot at the end of the fully-qualified domain names (FQDNs). FQDNs consist of multiple domain labels, separated by dots. Because the DNS root has an empty label, FQDNs end with a dot. Therefore, BIND appends the zone name to names without a trailing dot. A hostname without a trailing dot, for example, ns1.example.com would be expanded to ns1.example.com.example.com. , which is not the correct address of the primary name server. These are the fields in a SOA record: name : The name of the zone, the so-called origin . If you set this field to @ , BIND expands it to the zone name defined in /etc/named.conf . class : In SOA records, you must set this field always to Internet ( IN ). type : In SOA records, you must set this field always to SOA . mname (master name): The hostname of the primary name server of this zone. rname (responsible name): The email address of who is responsible for this zone. Note that the format is different. You must replace the at sign ( @ ) with a dot ( . ). serial : The version number of this zone file. Secondary name servers only update their copies of the zone if the serial number on the primary server is higher. The format can be any numeric value. A commonly-used format is <year><month><day><two-digit-number> . 
With this format, you can, theoretically, change the zone file up to a hundred times per day. refresh : The amount of time secondary servers should wait before checking whether the zone was updated on the primary server. retry : The amount of time after which a secondary server retries querying the primary server after a failed attempt. expire : The amount of time after which a secondary server stops querying the primary server if all attempts failed. minimum : RFC 2308 changed the meaning of this field to the negative caching time. Compliant resolvers use it to determine how long to cache NXDOMAIN name errors. Note A numeric value in the refresh , retry , expire , and minimum fields defines a time in seconds. However, for better readability, use time suffixes, such as m for minutes, h for hours, and d for days. For example, 3h stands for 3 hours. Additional resources RFC 1035 : Domain names - implementation and specification RFC 1034 : Domain names - concepts and facilities RFC 2308 : Negative caching of DNS queries (DNS cache) 1.5.2. Setting up a forward zone on a BIND primary server Forward zones map names to IP addresses and other information. For example, if you are responsible for the domain example.com , you can set up a forward zone in BIND to resolve names, such as www.example.com . Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Add a zone definition to the /etc/named.conf file: These settings define: This server as the primary server ( type master ) for the example.com zone. The /var/named/example.com.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/example.com.zone file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required SOA resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Sets mail.example.com as the mail exchanger ( MX ) of the example.com domain. The numeric value in front of the host name is the priority of the record. Entries with a lower value have a higher priority. Sets the IPv4 and IPv6 addresses of www.example.com , mail.example.com , and ns1.example.com . Set secure permissions on the zone file that allow only the named group to read it: Verify the syntax of the /var/named/example.com.zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query different records from the example.com zone, and verify that the output matches the records you have configured in the zone file: This example assumes that BIND runs on the same host and responds to queries on the localhost interface.
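If you also want to confirm the mail exchanger record from the example zone file, a query such as the following should return the MX entry. This is only a quick check that assumes the zone file above is loaded and that BIND answers on the localhost interface:

    # Query the MX record defined in the example zone file
    dig +short @localhost MX example.com
    10 mail.example.com.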
Additional resources The SOA record in zone files Writing BIND ACLs RFC 1912 - Common DNS operational and configuration errors 1.5.3. Setting up a reverse zone on a BIND primary server Reverse zones map IP addresses to names. For example, if you are responsible for IP range 192.0.2.0/24 , you can set up a reverse zone in BIND to resolve IP addresses from this range to hostnames. Note If you create a reverse zone for whole classful networks, name the zone accordingly. For example, for the class C network 192.0.2.0/24 , the name of the zone is 2.0.192.in-addr.arpa . If you want to create a reverse zone for a different network size, for example 192.0.2.0/28 , the name of the zone is 28-2.0.192.in-addr.arpa . Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Add a zone definition to the /etc/named.conf file: These settings define: This server as the primary server ( type master ) for the 2.0.192.in-addr.arpa reverse zone. The /var/named/2.0.192.in-addr.arpa.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any host can query this zone. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/2.0.192.in-addr.arpa.zone file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 8 hours. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required SOA resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this reverse zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Sets the pointer ( PTR ) record for the 192.0.2.1 and 192.0.2.30 addresses. Set secure permissions on the zone file that only allow the named group to read it: Verify the syntax of the /var/named/2.0.192.in-addr.arpa.zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query different records from the reverse zone, and verify that the output matches the records you have configured in the zone file: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Additional resources The SOA record in zone files Writing BIND ACLs RFC 1912 - Common DNS operational and configuration errors 1.5.4. Updating a BIND zone file In certain situations, for example if an IP address of a server changes, you must update a zone file. If multiple DNS servers are responsible for a zone, perform this procedure only on the primary server. Other DNS servers that store a copy of the zone will receive the update through a zone transfer. Prerequisites The zone is configured. The named or named-chroot service is running. Procedure Optional: Identify the path to the zone file in the /etc/named.conf file: You find the path to the zone file in the file statement in the zone's definition. 
A relative path is relative to the directory set in directory in the options statement. Edit the zone file: Make the required changes. Increment the serial number in the start of authority (SOA) record. Important If the serial number is equal to or lower than the previous value, secondary servers will not update their copy of the zone. Verify the syntax of the zone file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Query the record you have added, modified, or removed, for example: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Additional resources The SOA record in zone files Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server 1.5.5. DNSSEC zone signing using the automated key generation and zone maintenance features You can sign zones with domain name system security extensions (DNSSEC) to ensure authentication and data integrity. Such zones contain additional resource records. Clients can use them to verify the authenticity of the zone information. If you enable the DNSSEC policy feature for a zone, BIND performs the following actions automatically: Creates the keys Signs the zone Maintains the zone, including re-signing and periodically replacing the keys. Important To enable external DNS servers to verify the authenticity of a zone, you must add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this. This procedure uses the built-in default DNSSEC policy in BIND. This policy uses single ECDSAP256SHA256 key signatures. Alternatively, create your own policy to use custom keys, algorithms, and timings. Prerequisites The zone for which you want to enable DNSSEC is configured. The named or named-chroot service is running. The server synchronizes the time with a time server. An accurate system time is important for DNSSEC validation. Procedure Edit the /etc/named.conf file, and add dnssec-policy default; to the zone for which you want to enable DNSSEC: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. BIND stores the public key in the /var/named/K <zone_name> .+ <algorithm> + <key_ID> .key file. Use this file to display the public key of the zone in the format that the parent zone requires: DS record format: DNSKEY format: Request to add the public key of the zone to the parent zone. Contact your domain provider or registry for further details on how to accomplish this. Verification Query your own DNS server for a record from the zone for which you enabled DNSSEC signing: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. After the public key has been added to the parent zone and propagated to other servers, verify that the server sets the authenticated data ( ad ) flag on queries to the signed zone: Additional resources Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server 1.6. Configuring zone transfers among BIND DNS servers Zone transfers ensure that all DNS servers that have a copy of the zone use up-to-date data. Prerequisites On the future primary server, the zone for which you want to set up zone transfers is already configured. On the future secondary server, BIND is already configured, for example, as a caching name server.
On both servers, the named or named-chroot service is running. Procedure On the existing primary server: Create a shared key, and append it to the /etc/named.conf file: This command displays the output of the tsig-keygen command and automatically appends it to /etc/named.conf . You will require the output of the command later on the secondary server as well. Edit the zone definition in the /etc/named.conf file: In the allow-transfer statement, define that servers must provide the key specified in the example-transfer-key statement to transfer a zone: Alternatively, use BIND access control list (ACL) nicknames in the allow-transfer statement. By default, after a zone has been updated, BIND notifies all name servers which have a name server ( NS ) record in this zone. If you do not plan to add an NS record for the secondary server to the zone, you can, configure that BIND notifies this server anyway. For that, add the also-notify statement with the IP addresses of this secondary server to the zone: Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. On the future secondary server: Edit the /etc/named.conf file as follows: Add the same key definition as on the primary server: Add the zone definition to the /etc/named.conf file: These settings state: This server is a secondary server ( type slave ) for the example.com zone. The /var/named/slaves/example.com.zone file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. To separate zone files for which this server is secondary from primary ones, you can store them, for example, in the /var/named/slaves/ directory. Any host can query this zone. Alternatively, specify IP ranges or ACL nicknames to limit the access. No host can transfer the zone from this server. The IP addresses of the primary server of this zone are 192.0.2.1 and 2001:db8:1::2 . Alternatively, you can specify ACL nicknames. This secondary server will use the key named example-transfer-key to authenticate to the primary server. Verify the syntax of the /etc/named.conf file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Optional: Modify the zone file on the primary server and add an NS record for the new secondary server. Verification On the secondary server: Display the systemd journal entries of the named service: If you run BIND in a change-root environment, use the journalctl -u named-chroot command to display the journal entries. Verify that BIND created the zone file: Note that, by default, secondary servers store zone files in a binary raw format. Query a record of the transferred zone from the secondary server: This example assumes that the secondary server you set up in this procedure listens on IP address 192.0.2.2 . Additional resources Setting up a forward zone on a BIND primary server Setting up a reverse zone on a BIND primary server Writing BIND ACLs Updating a BIND zone file 1.7. Configuring response policy zones in BIND to override DNS records Using DNS blocking and filtering, administrators can rewrite a DNS response to block access to certain domains or hosts. In BIND, response policy zones (RPZs) provide this feature. 
You can configure different actions for blocked entries, such as returning an NXDOMAIN error or not responding to the query. If you have multiple DNS servers in your environment, use this procedure to configure the RPZ on the primary server, and later configure zone transfers to make the RPZ available on your secondary servers. Prerequisites BIND is already configured, for example, as a caching name server. The named or named-chroot service is running. Procedure Edit the /etc/named.conf file, and make the following changes: Add a response-policy definition to the options statement: You can set a custom name for the RPZ in the zone statement in response-policy . However, you must use the same name in the zone definition in the step. Add a zone definition for the RPZ you set in the step: These settings state: This server is the primary server ( type master ) for the RPZ named rpz.local . The /var/named/rpz.local file is the zone file. If you set a relative path, as in this example, this path is relative to the directory you set in directory in the options statement. Any hosts defined in allow-query can query this RPZ. Alternatively, specify IP ranges or BIND access control list (ACL) nicknames to limit the access. No host can transfer the zone. Allow zone transfers only when you set up secondary servers and only for the IP addresses of the secondary servers. Verify the syntax of the /etc/named.conf file: If the command displays no output, the syntax is correct. Create the /var/named/rpz.local file, for example, with the following content: This zone file: Sets the default time-to-live (TTL) value for resource records to 10 minutes. Without a time suffix, such as h for hour, BIND interprets the value as seconds. Contains the required start of authority (SOA) resource record with details about the zone. Sets ns1.example.com as an authoritative DNS server for this zone. To be functional, a zone requires at least one name server ( NS ) record. However, to be compliant with RFC 1912, you require at least two name servers. Return an NXDOMAIN error for queries to example.org and hosts in this domain. Drop queries to example.net and hosts in this domain. For a full list of actions and examples, see IETF draft: DNS Response Policy Zones (RPZ) . Verify the syntax of the /var/named/rpz.local file: Reload BIND: If you run BIND in a change-root environment, use the systemctl reload named-chroot command to reload the service. Verification Attempt to resolve a host in example.org , that is configured in the RPZ to return an NXDOMAIN error: This example assumes that BIND runs on the same host and responds to queries on the localhost interface. Attempt to resolve a host in the example.net domain, that is configured in the RPZ to drop queries: Additional resources IETF draft: DNS Response Policy Zones (RPZ) 1.8. Recording DNS queries by using dnstap As a network administrator, you can record Domain Name System (DNS) details to analyze DNS traffic patterns, monitor DNS server performance, and troubleshoot DNS issues. If you want an advanced way to monitor and log details of incoming name queries, use the dnstap interface that records sent messages from the named service. You can capture and record DNS queries to collect information about websites or IP addresses. Prerequisites The bind-9.16.23-1 package or a later version is installed. Warning If you already have a BIND version installed and running, adding a new version of BIND will overwrite the existing version. 
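Before you continue, you can check which BIND version is installed, for example by querying the RPM database or the named binary itself. These commands are a quick sanity check and are not part of the official procedure:

    # Show the installed bind package version
    rpm -q bind

    # Alternatively, ask the named binary for its version
    named -v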
Procedure Enable dnstap and the target file by editing the /etc/named.conf file in the options block: To specify which types of DNS traffic you want to log, add dnstap filters to the dnstap block in the /etc/named.conf file. You can use the following filters: auth - Authoritative zone response or answer. client - Internal client query or answer. forwarder - Forwarded query or response from it. resolver - Iterative resolution query or response. update - Dynamic zone update requests. all - Any from the above options. query or response - If you do not specify a query or a response keyword, dnstap records both. Note The dnstap filter contains multiple definitions delimited by a ; in the dnstap {} block with the following syntax: dnstap { ( all | auth | client | forwarder | resolver | update ) [ ( query | response ) ]; ... }; To customize the behavior of the dnstap utility on the recorded packets, modify the dnstap-output option by providing additional parameters, as follows: size (unlimited | <size>) - Enable automatic rolling over of the dnstap file when its size reaches the specified limit. versions (unlimited | <integer>) - Specify the number of automatically rolled files to keep. suffix (increment | timestamp ) - Choose the naming convention for rolled out files. By default, the increment starts with .0 . Alternatively, you can use the UNIX timestamp by setting the timestamp value. The following example requests auth responses only, client queries, and both queries and responses of dynamic updates : To apply your changes, restart the named service: Configure a periodic rollout for active logs In the following example, the cron scheduler runs the content of the user-edited script once a day. The roll option with the value 3 specifies that dnstap can create up to three backup log files. The value 3 overrides the version parameter of the dnstap-output variable, and limits the number of backup log files to three. Additionally, the binary log file is moved to another directory and renamed, and it never reaches the .2 suffix, even if three backup log files already exist. You can skip this step if automatic rolling of binary logs based on size limit is sufficient. Handle and analyze logs in a human-readable format by using the dnstap-read utility: In the following example, the dnstap-read utility prints the output in the YAML file format. | [
"dnf install bind bind-utils",
"dnf install bind-chroot",
"listen-on port 53 { 127.0.0.1; 192.0.2.1; }; listen-on-v6 port 53 { ::1; 2001:db8:1::1; };",
"allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; };",
"allow-recursion { localhost; 192.0.2.0/24; 2001:db8:1::/64; };",
"forwarders { 198.51.100.1; 203.0.113.5; };",
"named-checkconf",
"firewall-cmd --permanent --add-service=dns firewall-cmd --reload",
"systemctl enable --now named",
"dig @ localhost www.example.org www.example.org. 86400 IN A 198.51.100.34 ;; Query time: 917 msec",
"dig @ localhost www.example.org www.example.org. 85332 IN A 198.51.100.34 ;; Query time: 1 msec",
"logging { category notify { zone_transfer_log; }; category xfer-in { zone_transfer_log; }; category xfer-out { zone_transfer_log; }; channel zone_transfer_log { file \" /var/named/log/transfer.log \" versions 10 size 50m ; print-time yes; print-category yes; print-severity yes; severity info; }; };",
"mkdir /var/named/log/ chown named:named /var/named/log/ chmod 700 /var/named/log/",
"named-checkconf",
"systemctl restart named",
"cat /var/named/log/transfer.log 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR started: TSIG example-transfer-key (serial 2022070603) 06-Jul-2022 15:08:51.261 xfer-out: info: client @0x7fecbc0b0700 192.0.2.2#36121/key example-transfer-key (example.com): transfer of 'example.com/IN': AXFR ended",
"acl internal-networks { 127.0.0.1; 192.0.2.0/24; 2001:db8:1::/64; }; acl dmz-networks { 198.51.100.0/24; 2001:db8:2::/64; };",
"allow-query { internal-networks; dmz-networks; }; allow-recursion { internal-networks; };",
"named-checkconf",
"systemctl reload named",
"dig +short @ 192.0.2.1 www.example.com",
"dig @ 192.0.2.1 www.example.com ;; WARNING: recursion requested but not available",
"name class type mname rname serial refresh retry expire minimum",
"@ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL",
"zone \" example.com \" { type master; file \" example.com.zone \"; allow-query { any; }; allow-transfer { none; }; };",
"named-checkconf",
"USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. IN MX 10 mail.example.com. www IN A 192.0.2.30 www IN AAAA 2001:db8:1::30 ns1 IN A 192.0.2.1 ns1 IN AAAA 2001:db8:1::1 mail IN A 192.0.2.20 mail IN AAAA 2001:db8:1::20",
"chown root:named /var/named/ example.com.zone chmod 640 /var/named/ example.com.zone",
"named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022070601 OK",
"systemctl reload named",
"dig +short @ localhost AAAA www.example.com 2001:db8:1::30 dig +short @ localhost NS example.com ns1.example.com. dig +short @ localhost A ns1.example.com 192.0.2.1",
"zone \" 2.0.192.in-addr.arpa \" { type master; file \" 2.0.192.in-addr.arpa.zone \"; allow-query { any; }; allow-transfer { none; }; };",
"named-checkconf",
"USDTTL 8h @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1d ; refresh period 3h ; retry period 3d ; expire time 3h ) ; minimum TTL IN NS ns1.example.com. 1 IN PTR ns1.example.com. 30 IN PTR www.example.com.",
"chown root:named /var/named/ 2.0.192.in-addr.arpa.zone chmod 640 /var/named/ 2.0.192.in-addr.arpa.zone",
"named-checkzone 2.0.192.in-addr.arpa /var/named/2.0.192.in-addr.arpa.zone zone 2.0.192.in-addr.arpa/IN : loaded serial 2022070601 OK",
"systemctl reload named",
"dig +short @ localhost -x 192.0.2.1 ns1.example.com. dig +short @ localhost -x 192.0.2.30 www.example.com.",
"options { directory \" /var/named \"; } zone \" example.com \" { file \" example.com.zone \"; };",
"named-checkzone example.com /var/named/example.com.zone zone example.com/IN : loaded serial 2022062802 OK",
"systemctl reload named",
"dig +short @ localhost A ns2.example.com 192.0.2.2",
"zone \" example.com \" { dnssec-policy default; };",
"systemctl reload named",
"dnssec-dsfromkey /var/named/K example.com.+013+61141 .key example.com. IN DS 61141 13 2 3E184188CF6D2521EDFDC3F07CFEE8D0195AACBD85E68BAE0620F638B4B1B027",
"grep DNSKEY /var/named/K example.com.+013+61141.key example.com. 3600 IN DNSKEY 257 3 13 sjzT3jNEp120aSO4mPEHHSkReHUf7AABNnT8hNRTzD5cKMQSjDJin2I3 5CaKVcWO1pm+HltxUEt+X9dfp8OZkg==",
"dig +dnssec +short @ localhost A www.example.com 192.0.2.30 A 13 3 28800 20220718081258 20220705120353 61141 example.com. e7Cfh6GuOBMAWsgsHSVTPh+JJSOI/Y6zctzIuqIU1JqEgOOAfL/Qz474 M0sgi54m1Kmnr2ANBKJN9uvOs5eXYw==",
"dig @ localhost example.com +dnssec ;; flags: qr rd ra ad ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1",
"tsig-keygen example-transfer-key | tee -a /etc/named.conf key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };",
"zone \" example.com \" { allow-transfer { key example-transfer-key; }; };",
"zone \" example.com \" { also-notify { 192.0.2.2; 2001:db8:1::2; }; };",
"named-checkconf",
"systemctl reload named",
"key \" example-transfer-key \" { algorithm hmac-sha256; secret \" q7ANbnyliDMuvWgnKOxMLi313JGcTZB5ydMW5CyUGXQ= \"; };",
"zone \" example.com \" { type slave; file \" slaves/example.com.zone \"; allow-query { any; }; allow-transfer { none; }; masters { 192.0.2.1 key example-transfer-key; 2001:db8:1::1 key example-transfer-key; }; };",
"named-checkconf",
"systemctl reload named",
"journalctl -u named Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: Transfer started. Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: connected using 192.0.2.2#45803 Jul 06 15:08:51 ns2.example.com named[2024]: zone example.com/IN: transferred serial 2022070101 Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer status: success Jul 06 15:08:51 ns2.example.com named[2024]: transfer of 'example.com/IN' from 192.0.2.1#53: Transfer completed: 1 messages, 29 records, 2002 bytes, 0.003 secs (667333 bytes/sec)",
"ls -l /var/named/slaves/ total 4 -rw-r--r--. 1 named named 2736 Jul 6 15:08 example.com.zone",
"dig +short @ 192.0.2.2 AAAA www.example.com 2001:db8:1::30",
"options { response-policy { zone \" rpz.local \"; }; }",
"zone \"rpz.local\" { type master; file \"rpz.local\"; allow-query { localhost; 192.0.2.0/24; 2001:db8:1::/64; }; allow-transfer { none; }; };",
"named-checkconf",
"USDTTL 10m @ IN SOA ns1.example.com. hostmaster.example.com. ( 2022070601 ; serial number 1h ; refresh period 1m ; retry period 3d ; expire time 1m ) ; minimum TTL IN NS ns1.example.com. example.org IN CNAME . *.example.org IN CNAME . example.net IN CNAME rpz-drop. *.example.net IN CNAME rpz-drop.",
"named-checkzone rpz.local /var/named/rpz.local zone rpz.local/IN : loaded serial 2022070601 OK",
"systemctl reload named",
"dig @localhost www.example.org ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN , id: 30286",
"dig @localhost www.example.net ;; connection timed out; no servers could be reached",
"options { dnstap { all; }; # Configure filter dnstap-output file \"/var/named/data/dnstap.bin\" versions 2; }; end of options",
"Example: dnstap {auth response; client query; update;};",
"systemctl restart named.service",
"Example: sudoedit /etc/cron.daily/dnstap #!/bin/sh rndc dnstap -roll 3 mv /var/named/data/dnstap.bin.1 /var/log/named/dnstap/dnstap-USD(date -I).bin use dnstap-read to analyze saved logs sudo chmod a+x /etc/cron.daily/dnstap",
"Example: dnstap-read -p /var/named/data/dnstap.bin"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_networking_infrastructure_services/assembly_setting-up-and-configuring-a-bind-dns-server_networking-infrastructure-services |
2.9. Package Selection | 2.9. Package Selection Figure 2.14. Package Selection The Package Selection window allows you to choose which package groups to install. There are also options available to resolve and ignore package dependencies automatically. Currently, Kickstart Configurator does not allow you to select individual packages. To install individual packages, modify the %packages section of the kickstart file after you save it. Refer to Section 1.5, "Package Selection" for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/rhkstool-package_selection |
1.2. Overview of DM Multipath | 1.2. Overview of DM Multipath DM Multipath can be used to provide: Redundancy DM Multipath can provide failover in an active/passive configuration. In an active/passive configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable, switch, or controller) fails, DM Multipath switches to an alternate path. Improved Performance DM Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM Multipath can detect loading on the I/O paths and dynamically rebalance the load. Figure 1.1, "Active/Passive Multipath Configuration with One RAID Device" shows an active/passive configuration with two I/O paths from the server to a RAID device. There are 2 HBAs on the server, 2 SAN switches, and 2 RAID controllers. Figure 1.1. Active/Passive Multipath Configuration with One RAID Device In this configuration, there is one I/O path that goes through hba1, SAN1, and controller 1 and a second I/O path that goes through hba2, SAN2, and controller2. There are many points of possible failure in this configuration: HBA failure FC cable failure SAN switch failure Array controller port failure With DM Multipath configured, a failure at any of these points will cause DM Multipath to switch to the alternate I/O path. Figure 1.2, "Active/Passive Multipath Configuration with Two RAID Devices" shows a more complex active/passive configuration with 2 HBAs on the server, 2 SAN switches, and 2 RAID devices with 2 RAID controllers each. Figure 1.2. Active/Passive Multipath Configuration with Two RAID Devices In the example shown in Figure 1.2, "Active/Passive Multipath Configuration with Two RAID Devices" , there are two I/O paths to each RAID device (just as there are in the example shown in Figure 1.1, "Active/Passive Multipath Configuration with One RAID Device" ). With DM Multipath configured, a failure at any of the points of the I/O path to either of the RAID devices will cause DM Multipath to switch to the alternate I/O path for that device. Figure 1.3, "Active/Active Multipath Configuration with One RAID Device" shows an active/active configuration with 2 HBAs on the server, 1 SAN switch, and 2 RAID controllers. There are four I/O paths from the server to a storage device: hba1 to controller1 hba1 to controller2 hba2 to controller1 hba2 to controller2 In this configuration, I/O can be spread among those four paths. Figure 1.3. Active/Active Multipath Configuration with One RAID Device | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/mpio_description |
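As a rough illustration of how these two modes are expressed in configuration, the path_grouping_policy attribute in /etc/multipath.conf controls whether paths are grouped for failover (active/passive) or used together (active/active). The snippet below is a sketch, not a complete configuration, and the values shown are examples rather than recommendations:

    defaults {
            # Active/passive: one path carries I/O, the remaining paths are failover targets
            path_grouping_policy    failover

            # Active/active alternative: place all paths in one group and spread I/O
            # across them in a round-robin fashion
            # path_grouping_policy  multibus
            # path_selector         "round-robin 0"
    }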
Chapter 2. Working with pods | Chapter 2. Working with pods 2.1. Using pods A pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. 2.1.1. Understanding pods Pods are the rough equivalent of a machine instance (physical or virtual) to a Container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking. Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, might be removed after exiting, or can be retained to enable access to the logs of their containers. OpenShift Container Platform treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift Container Platform implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore pods should usually be managed by higher-level controllers, rather than directly by users. Note For the maximum number of pods per OpenShift Container Platform node host, see the Cluster Limits. Warning Bare pods that are not managed by a replication controller will not be rescheduled upon node disruption. 2.1.2. Example pod configurations OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. The following is an example definition of a pod. It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here: Pod object definition (YAML)
kind: Pod
apiVersion: v1
metadata:
  name: example
  labels:
    environment: production
    app: abc 1
spec:
  restartPolicy: Always 2
  securityContext: 3
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers: 4
  - name: abc
    args:
    - sleep
    - "1000000"
    volumeMounts: 5
    - name: cache-volume
      mountPath: /cache 6
    image: registry.access.redhat.com/ubi7/ubi-init:latest 7
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]
    resources:
      limits:
        memory: "100Mi"
        cpu: "1"
      requests:
        memory: "100Mi"
        cpu: "1"
  volumes: 8
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
1 Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. 2 The pod restart policy with possible values Always , OnFailure , and Never . The default value is Always . 3 OpenShift Container Platform defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed. 4 containers specifies an array of one or more container definitions. 5 The container specifies where external storage volumes are mounted within the container. 6 Specify the volumes to provide for the pod. Volumes mount at the specified path. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files.
It is safe to mount the host by using /host . 7 Each container in the pod is instantiated from its own container image. 8 The pod defines storage volumes that are available to its container(s) to use. If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . Note This pod definition does not include attributes that are filled by OpenShift Container Platform automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods. 2.1.3. Additional resources For more information on pods and storage see Understanding persistent storage and Understanding ephemeral storage . 2.2. Viewing pods As an administrator, you can view the pods in your cluster and determine the health of those pods and the cluster as a whole. 2.2.1. About pods OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. You can view a list of pods associated with a specific project or view usage statistics about pods. 2.2.2. Viewing pods in a project You can view a list of pods associated with the current project, including the number of replicas, the current status, the number of restarts, and the age of the pod. Procedure To view the pods in a project: Change to the project: $ oc project <project-name> Run the following command: $ oc get pods For example: $ oc get pods Example output
NAME                       READY   STATUS    RESTARTS   AGE
console-698d866b78-bnshf   1/1     Running   2          165m
console-698d866b78-m87pm   1/1     Running   2          165m
Add the -o wide flag to view the pod IP address and the node where the pod is located. $ oc get pods -o wide Example output
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE                           NOMINATED NODE
console-698d866b78-bnshf   1/1     Running   2          166m   10.128.0.24   ip-10-0-152-71.ec2.internal    <none>
console-698d866b78-m87pm   1/1     Running   2          166m   10.129.0.23   ip-10-0-173-237.ec2.internal   <none>
2.2.3. Viewing pod usage statistics You can display usage statistics about pods, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: Run the following command: $ oc adm top pods For example: $ oc adm top pods -n openshift-console Example output
NAME                         CPU(cores)   MEMORY(bytes)
console-7f58c69899-q8c8k     0m           22Mi
console-7f58c69899-xhbgg     0m           25Mi
downloads-594fcccf94-bcxk8   3m           18Mi
downloads-594fcccf94-kv4p6   2m           15Mi
Run the following command to view the usage statistics for pods with labels: $ oc adm top pod --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . For example: $ oc adm top pod --selector='name=my-pod' 2.2.4. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ).
Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 2.3. Configuring an OpenShift Container Platform cluster for pods As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers using such tools as what a pod does when it exits, ensuring that the required number of pods is always running, when to restart pods designed to run only once, limit the bandwidth available to pods, and how to keep pods running during disruptions. 2.3.1. Configuring how pods behave after restart A pod restart policy determines how OpenShift Container Platform responds when Containers in that pod exit. The policy applies to all Containers in that pod. The possible values are: Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is Always . OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit. After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure: Condition Controller Type Restart Policy Pods that are expected to terminate (such as batch computations) Job OnFailure or Never Pods that are expected to not terminate (such as web servers) Replication controller Always . Pods that must run one-per-machine Daemon set Any If a Container on a pod fails and the restart policy is set to OnFailure , the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of Never . If an entire pod fails, OpenShift Container Platform starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by runs. Note Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster using cloud provider integration. Install the cluster as if it was in a no-cloud environment. 
It is not recommended to toggle cloud provider integration on or off in an installed cluster. For details on how OpenShift Container Platform uses restart policy with failed Containers, see the Example States in the Kubernetes documentation. 2.3.2. Limiting the bandwidth available to pods You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods. Procedure To limit the bandwidth on a pod: Write an object definition JSON file, and specify the data traffic speed using kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s: Limited Pod object definition { "kind": "Pod", "spec": { "containers": [ { "image": "openshift/hello-openshift", "name": "hello-openshift" } ] }, "apiVersion": "v1", "metadata": { "name": "iperf-slow", "annotations": { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" } } } Create the pod using the object definition: USD oc create -f <file_or_dir_path> 2.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods must always be available, even during a disruption. maxUnavailable is the number of pods can be unavailable during a disruption. Note Available refers to the number of pods that has condition Ready=True . Ready=True refers to the pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 2.3.3.1. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with the an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to project: USD oc create -f </path/to/file> -n <project_name> 2.3.3.2. Specifying the eviction policy for unhealthy pods When you use pod disruption budgets (PDBs) to specify how many pods must be available simultaneously, you can also define the criteria for how unhealthy pods are considered for eviction. You can choose one of the following policies: IfHealthyBudget Running pods that are not yet healthy can be evicted only if the guarded application is not disrupted. AlwaysAllow Running pods that are not yet healthy can be evicted regardless of whether the criteria in the pod disruption budget is met. This policy can help evict malfunctioning applications, such as ones with pods stuck in the CrashLoopBackOff state or failing to report the Ready status. Note It is recommended to set the unhealthyPodEvictionPolicy field to AlwaysAllow in the PodDisruptionBudget object to support the eviction of misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the drain can proceed. 
Procedure Create a YAML file that defines a PodDisruptionBudget object and specify the unhealthy pod eviction policy: Example pod-disruption-budget.yaml file apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1 1 Choose either IfHealthyBudget or AlwaysAllow as the unhealthy pod eviction policy. The default is IfHealthyBudget when the unhealthyPodEvictionPolicy field is empty. Create the PodDisruptionBudget object by running the following command: USD oc create -f pod-disruption-budget.yaml With a PDB that has the AlwaysAllow unhealthy pod eviction policy set, you can now drain nodes and evict the pods for a malfunctioning application guarded by this PDB. Additional resources Enabling features using feature gates Unhealthy Pod Eviction Policy in the Kubernetes documentation 2.3.4. Preventing pod removal using critical pods There are a number of core components that are critical to a fully functional cluster, but, run on a regular cluster node rather than the master. A cluster might stop working properly if a critical add-on is evicted. Pods marked as critical are not allowed to be evicted. Procedure To make a pod critical: Create a Pod spec or edit existing pods to include the system-cluster-critical priority class: apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1 # ... 1 Default priority class for pods that should never be evicted from a node. Alternatively, you can specify system-node-critical for pods that are important to the cluster but can be removed if necessary. Create the pod: USD oc create -f <file-name>.yaml 2.3.5. Reducing pod timeouts when using persistent volumes with high file counts If a storage volume contains many files (~1,000,000 or greater), you might experience pod timeouts. This can occur because, when volumes are mounted, OpenShift Container Platform recursively changes the ownership and permissions of the contents of each volume in order to match the fsGroup specified in a pod's securityContext . For large volumes, checking and changing the ownership and permissions can be time consuming, resulting in a very slow pod startup. You can reduce this delay by applying one of the following workarounds: Use a security context constraint (SCC) to skip the SELinux relabeling for a volume. Use the fsGroupChangePolicy field inside an SCC to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. Use the Cluster Resource Override Operator to automatically apply an SCC to skip the SELinux relabeling. Use a runtime class to skip the SELinux relabeling for a volume. For information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 2.4. Automatically scaling pods with the horizontal pod autoscaler As a developer, you can use a horizontal pod autoscaler (HPA) to specify how OpenShift Container Platform should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration. You can create an HPA for any deployment, deployment config, replica set, replication controller, or stateful set. 
For information on scaling pods based on custom metrics, see Automatically scaling pods based on custom metrics . Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. For more information on these objects, see Understanding deployments . 2.4.1. Understanding horizontal pod autoscalers You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target. After you create a horizontal pod autoscaler, OpenShift Container Platform begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric utilization with the desired metric utilization, and scales up or down accordingly. The query and scaling occurs at a regular interval, but can take one to two minutes before metrics become available. For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase. OpenShift Container Platform automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during start up. Pods in the unready state have 0 CPU usage when scaling up and the autoscaler ignores the pods when scaling down. Pods without known metrics have 0% CPU usage when scaling up and 100% CPU when scaling down. This allows for more stability during the HPA decision. To use this feature, you must configure readiness checks to determine if a new pod is ready for use. To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. 2.4.1.1. Supported metrics The following metrics are supported by horizontal pod autoscalers: Table 2.1. Metrics Metric Description API version CPU utilization Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU. autoscaling/v1 , autoscaling/v2 Memory utilization Amount of memory used. Can be used to calculate a percentage of the pod's requested memory. autoscaling/v2 Important For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average: An increase in replica count must lead to an overall decrease in memory (working set) usage per-pod. A decrease in replica count must lead to an overall increase in per-pod memory usage. Use the OpenShift Container Platform web console to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling. The following example shows autoscaling for the hello-node Deployment object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. 
If CPU usage on the pods reaches 75%, the pods increase to 7: USD oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75 Example output horizontalpodautoscaler.autoscaling/hello-node autoscaled Sample YAML to create an HPA for the hello-node deployment object with minReplicas set to 3 apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0 After you create the HPA, you can view the new state of the deployment by running the following command: USD oc get deployment hello-node There are now 5 pods in the deployment: Example output NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config 2.4.2. How does the HPA work? The horizontal pod autoscaler (HPA) extends the concept of pod auto-scaling. The HPA lets you create and manage a group of load-balanced nodes. The HPA automatically increases or decreases the number of pods when a given CPU or memory threshold is crossed. Figure 2.1. High level workflow of the HPA The HPA is an API resource in the Kubernetes autoscaling API group. The autoscaler works as a control loop with a default of 15 seconds for the sync period. During this period, the controller manager queries the CPU, memory utilization, or both, against what is defined in the YAML file for the HPA. The controller manager obtains the utilization metrics from the resource metrics API for per-pod resource metrics like CPU or memory, for each pod that is targeted by the HPA. If a utilization value target is set, the controller calculates the utilization value as a percentage of the equivalent resource request on the containers in each pod. The controller then takes the average of utilization across all targeted pods and produces a ratio that is used to scale the number of desired replicas. The HPA is configured to fetch metrics from metrics.k8s.io , which is provided by the metrics server. Because of the dynamic nature of metrics evaluation, the number of replicas can fluctuate during scaling for a group of replicas. Note To implement the HPA, all targeted pods must have a resource request set on their containers. 2.4.3. About requests and limits The scheduler uses the resource request that you specify for containers in a pod, to decide which node to place the pod on. The kubelet enforces the resource limit that you specify for a container to ensure that the container is not allowed to use more than the specified limit. The kubelet also reserves the request amount of that system resource specifically for that container to use. How to use resource metrics? In the pod specifications, you must specify the resource requests, such as CPU and memory. The HPA uses this specification to determine the resource utilization and then scales the target up or down. For example, the HPA object uses the following metric source: type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60 In this example, the HPA keeps the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current resource usage to the requested resource of the pod. 2.4.4. Best practices All pods must have resource requests configured The HPA makes a scaling decision based on the observed CPU or memory utilization values of pods in an OpenShift Container Platform cluster. 
Utilization values are calculated as a percentage of the resource requests of each pod. Missing resource request values can affect the optimal performance of the HPA. Configure the cool down period During horizontal pod autoscaling, there might be a rapid scaling of events without a time gap. Configure the cool down period to prevent frequent replica fluctuations. You can specify a cool down period by configuring the stabilizationWindowSeconds field. The stabilization window is used to restrict the fluctuation of replicas count when the metrics used for scaling keep fluctuating. The autoscaling algorithm uses this window to infer a desired state and avoid unwanted changes to workload scale. For example, a stabilization window is specified for the scaleDown field: behavior: scaleDown: stabilizationWindowSeconds: 300 In the above example, all desired states for the past 5 minutes are considered. This approximates a rolling maximum, and avoids having the scaling algorithm frequently remove pods only to trigger recreating an equivalent pod just moments later. 2.4.4.1. Scaling policies The autoscaling/v2 API allows you to add scaling policies to a horizontal pod autoscaler. A scaling policy controls how the OpenShift Container Platform horizontal pod autoscaler (HPA) scales pods. Scaling policies allow you to restrict the rate that HPAs scale pods up or down by setting a specific number or specific percentage to scale in a specified period of time. You can also define a stabilization window , which uses previously computed desired states to control scaling if the metrics are fluctuating. You can create multiple policies for the same scaling direction, and determine which policy is used, based on the amount of change. You can also restrict the scaling by timed iterations. The HPA scales pods during an iteration, then performs scaling, as needed, in further iterations. Sample HPA object with a scaling policy apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0 ... 1 Specifies the direction for the scaling policy, either scaleDown or scaleUp . This example creates a policy for scaling down. 2 Defines the scaling policy. 3 Determines if the policy scales by a specific number of pods or a percentage of pods during each iteration. The default value is pods . 4 Limits the amount of scaling, either the number of pods or percentage of pods, during each iteration. There is no default value for scaling down by number of pods. 5 Determines the length of a scaling iteration. The default value is 15 seconds. 6 The default value for scaling down by percentage is 100%. 7 Determines which policy to use first, if multiple policies are defined. Specify Max to use the policy that allows the highest amount of change, Min to use the policy that allows the lowest amount of change, or Disabled to prevent the HPA from scaling in that policy direction. The default value is Max . 8 Determines the time period the HPA should look back at desired states. The default value is 0 . 9 This example creates a policy for scaling up. 10 Limits the amount of scaling up by the number of pods. 
The default value for scaling up by the number of pods is 4. 11 Limits the amount of scaling up by the percentage of pods. The default value for scaling up by percentage is 100%. Example policy for scaling down apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: ... minReplicas: 20 ... behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled In this example, when the number of pods is greater than 40, the percent-based policy is used for scaling down, as that policy results in a larger change, as required by the selectPolicy . If there are 80 pod replicas, in the first iteration the HPA reduces the pods by 8, which is 10% of the 80 pods (based on the type: Percent and value: 10 parameters), over one minute ( periodSeconds: 60 ). For the next iteration, the number of pods is 72. The HPA calculates that 10% of the remaining pods is 7.2, which it rounds up to 8 and scales down 8 pods. On each subsequent iteration, the number of pods to be scaled is re-calculated based on the number of remaining pods. When the number of pods falls below 40, the pods-based policy is applied, because the pod-based number is greater than the percent-based number. The HPA reduces 4 pods at a time ( type: Pods and value: 4 ), over 30 seconds ( periodSeconds: 30 ), until there are 20 replicas remaining ( minReplicas ). The selectPolicy: Disabled parameter prevents the HPA from scaling up the pods. You can manually scale up by adjusting the number of replicas in the replica set or deployment, if needed. If set, you can view the scaling policy by using the oc edit command: USD oc edit hpa hpa-resource-metrics-memory Example output apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior:\ '{"ScaleUp":{"StabilizationWindowSeconds":0,"SelectPolicy":"Max","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":15},{"Type":"Percent","Value":100,"PeriodSeconds":15}]},\ "ScaleDown":{"StabilizationWindowSeconds":300,"SelectPolicy":"Min","Policies":[{"Type":"Pods","Value":4,"PeriodSeconds":60},{"Type":"Percent","Value":10,"PeriodSeconds":60}]}}' ... 2.4.5. Creating a horizontal pod autoscaler by using the web console From the web console, you can create a horizontal pod autoscaler (HPA) that specifies the minimum and maximum number of pods you want to run on a Deployment or DeploymentConfig object. You can also define the amount of CPU or memory usage that your pods should target. Note An HPA cannot be added to deployments that are part of an Operator-backed service, Knative service, or Helm chart. Procedure To create an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Add HorizontalPodAutoscaler to open the Add HorizontalPodAutoscaler form. Figure 2.2. Add HorizontalPodAutoscaler From the Add HorizontalPodAutoscaler form, define the name, minimum and maximum pod limits, the CPU and memory usage, and click Save . Note If any of the values for CPU and memory usage are missing, a warning is displayed. To edit an HPA in the web console: In the Topology view, click the node to reveal the side pane. From the Actions drop-down list, select Edit HorizontalPodAutoscaler to open the Edit Horizontal Pod Autoscaler form.
From the Edit Horizontal Pod Autoscaler form, edit the minimum and maximum pod limits and the CPU and memory usage, and click Save . Note While creating or editing the horizontal pod autoscaler in the web console, you can switch from Form view to YAML view . To remove an HPA in the web console: In the Topology view, click the node to reveal the side panel. From the Actions drop-down list, select Remove HorizontalPodAutoscaler . In the confirmation pop-up window, click Remove to remove the HPA. 2.4.6. Creating a horizontal pod autoscaler for CPU utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the CPU usage you specify. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. When autoscaling for CPU utilization, you can use the oc autoscale command and specify the minimum and maximum number of pods you want to run at any given time and the average CPU utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. To autoscale for a specific CPU value, create a HorizontalPodAutoscaler object with the target CPU and pod limits. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To create a horizontal pod autoscaler for CPU utilization: Perform one of the following: To scale based on the percent of CPU utilization, create a HorizontalPodAutoscaler object for an existing object: USD oc autoscale <object_type>/<name> \ 1 --min <number> \ 2 --max <number> \ 3 --cpu-percent=<percent> 4 1 Specify the type and name of the object to autoscale. The object must exist and be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 2 Optionally, specify the minimum number of replicas when scaling down. 3 Specify the maximum number of replicas when scaling up. 4 Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. For example, the following command shows autoscaling for the hello-node deployment object. The initial deployment requires 3 pods. 
The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods will increase to 7: USD oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75 To scale for a specific CPU value, create a YAML file similar to the following for an existing object: Create a YAML file similar to the following: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig / dc , ReplicaSet / rs , ReplicationController / rc , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify cpu for CPU utilization. 10 Set to AverageValue . 11 Set to averageValue with the targeted CPU value. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml Verify that the horizontal pod autoscaler was created: USD oc get hpa cpu-autoscale Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m 2.4.7. Creating a horizontal pod autoscaler object for memory utilization by using the CLI Using the OpenShift Container Platform CLI, you can create a horizontal pod autoscaler (HPA) to automatically scale an existing Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet object. The HPA scales the pods associated with that object to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. Note It is recommended to use a Deployment object or ReplicaSet object unless you need a specific feature or behavior provided by other objects. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. For memory utilization, you can specify the minimum and maximum number of pods and the average memory utilization your pods should target. If you do not specify a minimum, the pods are given default values from the OpenShift Container Platform server. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler Example output Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none> Procedure To create a horizontal pod autoscaler for memory utilization: Create a YAML file for one of the following: To scale for a specific memory value, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a Deployment , ReplicaSet , or Statefulset object, use apps/v1 . For a ReplicationController , use v1 . For a DeploymentConfig , use apps.openshift.io/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set the type to AverageValue . 11 Specify averageValue and a specific memory value. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. To scale for a percentage, create a HorizontalPodAutoscaler object similar to the following for an existing object: apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max 1 Use the autoscaling/v2 API. 2 Specify a name for this horizontal pod autoscaler object. 3 Specify the API version of the object to scale: For a ReplicationController, use v1 . For a DeploymentConfig, use apps.openshift.io/v1 . For a Deployment, ReplicaSet, Statefulset object, use apps/v1 . 4 Specify the type of object. The object must be a Deployment , DeploymentConfig , ReplicaSet , ReplicationController , or StatefulSet . 5 Specify the name of the object to scale. The object must exist. 6 Specify the minimum number of replicas when scaling down. 7 Specify the maximum number of replicas when scaling up. 
8 Use the metrics parameter for memory utilization. 9 Specify memory for memory utilization. 10 Set to Utilization . 11 Specify averageUtilization and a target average memory utilization over all the pods, represented as a percent of requested memory. The target pods must have memory requests configured. 12 Optional: Specify a scaling policy to control the rate of scaling up or down. Create the horizontal pod autoscaler: USD oc create -f <file-name>.yaml For example: USD oc create -f hpa.yaml Example output horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created Verify that the horizontal pod autoscaler was created: USD oc get hpa hpa-resource-metrics-memory Example output NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m USD oc describe hpa hpa-resource-metrics-memory Example output Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target 2.4.8. Understanding horizontal pod autoscaler status conditions by using the CLI You can use the status conditions set to determine whether or not the horizontal pod autoscaler (HPA) is able to scale and whether or not it is currently restricted in any way. The HPA status conditions are available with the v2 version of the autoscaling API. The HPA responds with the following status conditions: The AbleToScale condition indicates whether HPA is able to fetch and update metrics, as well as whether any backoff-related conditions could prevent scaling. A True condition indicates scaling is allowed. A False condition indicates scaling is not allowed for the reason specified. The ScalingActive condition indicates whether the HPA is enabled (for example, the replica count of the target is not zero) and is able to calculate desired metrics. A True condition indicates metrics is working properly. A False condition generally indicates a problem with fetching metrics. The ScalingLimited condition indicates that the desired scale was capped by the maximum or minimum of the horizontal pod autoscaler. A True condition indicates that you need to raise or lower the minimum or maximum replica count in order to scale. A False condition indicates that the requested scaling is allowed. 
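In addition to oc describe , you can pull just the condition summary with a JSONPath query. The following command is a minimal sketch that assumes the cm-test autoscaler in the prom namespace from the example that follows; it requests the autoscaling/v2 form of the resource explicitly, because the status conditions are part of the v2 API:

USD oc get horizontalpodautoscaler.v2.autoscaling cm-test -n prom -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'

Each line of the output shows the condition type, its status, and the reason, matching the Conditions table in the oc describe hpa output shown below.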
USD oc describe hpa cm-test Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events: 1 The horizontal pod autoscaler status messages. The following is an example of a pod that is unable to scale: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind "ReplicationController" in group "apps" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind "ReplicationController" in group "apps" The following is an example of a pod that could not obtain the needed metrics for scaling: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API The following is an example of a pod where the requested autoscaling was less than the required minimums: Example output Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.8.1. Viewing horizontal pod autoscaler status conditions by using the CLI You can view the status conditions set on a pod by the horizontal pod autoscaler (HPA). Note The horizontal pod autoscaler status conditions are available with the v2 version of the autoscaling API. Prerequisites To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage . 
USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> Procedure To view the status conditions on a pod, use the following command with the name of the pod: USD oc describe hpa <pod-name> For example: USD oc describe hpa cm-test The conditions appear in the Conditions field in the output. Example output Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) "http_requests" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range 2.4.9. Additional resources For more information on replication controllers and deployment controllers, see Understanding deployments and deployment configs . For an example on the usage of HPA, see Horizontal Pod Autoscaling of Quarkus Application Based on Memory Utilization . 2.5. Automatically adjust pod resource levels with the vertical pod autoscaler The OpenShift Container Platform Vertical Pod Autoscaler Operator (VPA) automatically reviews the historic and current CPU and memory resources for containers in pods and can update the resource limits and requests based on the usage values it learns. The VPA uses individual custom resources (CR) to update all of the pods in a project that are associated with any built-in workload objects, including the following object types: Deployment DeploymentConfig StatefulSet Job DaemonSet ReplicaSet ReplicationController The VPA can also update certain custom resource object that manage pods, as described in Using the Vertical Pod Autoscaler Operator with Custom Resources . The VPA helps you to understand the optimal CPU and memory usage for your pods and can automatically maintain pod resources through the pod lifecycle. 2.5.1. About the Vertical Pod Autoscaler Operator The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions that the VPA Operator should take with the pods associated with a specific workload object, such as a daemon set, replication controller, and so forth, in a project. The VPA Operator consists of three components, each of which has its own pod in the VPA namespace: Recommender The VPA recommender monitors the current and past resource consumption and, based on this data, determines the optimal CPU and memory resources for the pods in the associated workload object. Updater The VPA updater checks if the pods in the associated workload object have the correct resources. 
If the resources are correct, the updater takes no action. If the resources are not correct, the updater kills the pod so that they can be recreated by their controllers with the updated requests. Admission controller The VPA admission controller sets the correct resource requests on each new pod in the associated workload object, whether the pod is new or was recreated by its controller due to the VPA updater actions. You can use the default recommender or use your own alternative recommender to autoscale based on your own algorithms. The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods and uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough. The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then re-deploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the pod is consuming more CPU than requested and deletes the pod. The workload object, such as replica set, restarts the pods and the VPA updates the new pod with its recommended resources. For developers, you can use the VPA to help ensure your pods stay up during periods of high demand by scheduling pods onto nodes that have appropriate resources for each pod. Administrators can use the VPA to better utilize cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration. Note If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the VPA. 2.5.2. Installing the Vertical Pod Autoscaler Operator You can use the OpenShift Container Platform web console to install the Vertical Pod Autoscaler Operator (VPA). Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose VerticalPodAutoscaler from the list of available Operators, and click Install . 
On the Install Operator page, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-vertical-pod-autoscaler namespace, which is automatically created if it does not exist. Click Install . Verification Verify the installation by listing the VPA Operator components: Navigate to Workloads Pods . Select the openshift-vertical-pod-autoscaler project from the drop-down menu and verify that there are four pods running. Navigate to Workloads Deployments to verify that there are four deployments running. Optional: Verify the installation in the OpenShift Container Platform CLI using the following command: USD oc get all -n openshift-vertical-pod-autoscaler The output shows four pods and four deployments: Example output NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s 2.5.3. Moving the Vertical Pod Autoscaler Operator components The Vertical Pod Autoscaler Operator (VPA) and each of its components have their own pods in the VPA namespace on the control plane nodes. You can move the VPA Operator and component pods to infrastructure or worker nodes by adding a node selector to the VPA subscription and the VerticalPodAutoscalerController CR. You can create and use infrastructure nodes to host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment. For more information, see Creating infrastructure machine sets . You can move the components to the same node or separate nodes as appropriate for your organization. The following example shows the default deployment of the VPA pods to the control plane nodes.
Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none> Procedure Move the VPA Operator pod by adding a node selector to the Subscription custom resource (CR) for the VPA Operator: Edit the CR: USD oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler Add a node selector to match the node role label on the node where you want to install the VPA Operator pod: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: "" 1 1 1 Specifies the node role of the node where you want to move the VPA Operator pod. Note If the infra node uses taints, you need to add a toleration to the Subscription CR. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "node-role.kubernetes.io/infra" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for a taint on the node where you want to move the VPA Operator pod. Move each VPA component by adding node selectors to the VerticalPodAutoscaler custom resource (CR): Edit the CR: USD oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler Add node selectors to match the node role label on the node where you want to install the VPA components: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: "" 3 1 Optional: Specifies the node role for the VPA admission pod. 2 Optional: Specifies the node role for the VPA recommender pod. 3 Optional: Specifies the node role for the VPA updater pod. Note If a target node uses taints, you need to add a toleration to the VerticalPodAutoscalerController CR. For example: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... 
spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 1 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 2 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: "" tolerations: 3 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for the admission controller pod for a taint on the node where you want to install the pod. 2 Specifies a toleration for the recommender pod for a taint on the node where you want to install the pod. 3 Specifies a toleration for the updater pod for a taint on the node where you want to install the pod. Verification You can verify the pods have moved by using the following command: USD oc get pods -n openshift-vertical-pod-autoscaler -o wide The pods are no longer deployed to the control plane nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> Additional resources Creating infrastructure machine sets 2.5.4. About Using the Vertical Pod Autoscaler Operator To use the Vertical Pod Autoscaler Operator (VPA), you create a VPA custom resource (CR) for a workload object in your cluster. The VPA learns and applies the optimal CPU and memory resources for the pods associated with that workload object. You can use a VPA with a deployment, stateful set, job, daemon set, replica set, or replication controller workload object. The VPA CR must be in the same project as the pods you want to monitor. You use the VPA CR to associate a workload object and specify which mode the VPA operates in: The Auto and Recreate modes automatically apply the VPA CPU and memory recommendations throughout the pod lifetime. The VPA deletes any pods in the project that are out of alignment with its recommendations. When redeployed by the workload object, the VPA updates the new pods with its recommendations. The Initial mode automatically applies VPA recommendations only at pod creation. The Off mode only provides recommended resource limits and requests, allowing you to manually apply the recommendations. The off mode does not update pods. You can also use the CR to opt-out certain containers from VPA evaluation and updates. For example, a pod has the following limits and requests: resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi After creating a VPA that is set to auto , the VPA learns the resource usage and deletes the pod. When redeployed, the pod uses the new resource limits and requests: resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml After a few minutes, the output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... 
recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... The output shows the recommended resources, target , the minimum recommended resources, lowerBound , the highest recommended resources, upperBound , and the most recent resource recommendations, uncappedTarget . The VPA uses the lowerBound and upperBound values to determine if a pod needs to be updated. If a pod has resource requests below the lowerBound values or above the upperBound values, the VPA terminates and recreates the pod with the target values. 2.5.4.1. Changing the VPA minimum value By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete and update their pods. As a result, workload objects that specify fewer than two replicas are not automatically acted upon by the VPA. The VPA does update new pods from these workload objects if the pods are restarted by some process external to the VPA. You can change this cluster-wide minimum value by modifying the minReplicas parameter in the VerticalPodAutoscalerController custom resource (CR). For example, if you set minReplicas to 3 , the VPA does not delete and update pods for workload objects that specify fewer than three replicas. Note If you set minReplicas to 1 , the VPA can delete the only pod for a workload object that specifies only one replica. You should use this setting with one-replica objects only if your workload can tolerate downtime whenever the VPA deletes a pod to adjust its resources. To avoid unwanted downtime with one-replica objects, configure the VPA CRs with the podUpdatePolicy set to Initial , which automatically updates the pod only when it is restarted by some process external to the VPA, or Off , which allows you to update the pod manually at an appropriate time for your application. Example VerticalPodAutoscalerController object apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: "2021-04-21T19:29:49Z" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: "142172" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 Specify the minimum number of replicas in a workload object for the VPA to act on. Any objects with replicas fewer than the minimum are not automatically deleted by the VPA. 2.5.4.2. Automatically applying VPA recommendations To use the VPA to automatically update pods, create a VPA CR for a specific workload object with updateMode set to Auto or Recreate . When the pods are created for the workload object, the VPA constantly monitors the containers to analyze their CPU and memory needs. The VPA deletes any pods that do not meet the VPA recommendations for CPU and memory. When redeployed, the pods use the new resource limits and requests based on the VPA recommendations, honoring any pod disruption budget set for your applications. The recommendations are added to the status field of the VPA CR for reference. Note By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. 
Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the VerticalPodAutoscalerController object as shown in Changing the VPA minimum value . Example VPA CR for the Auto mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto or Recreate : Auto . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. Recreate . The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. Note Before a VPA can determine recommendations for resources and apply the recommended resources to new pods, operating pods must exist and be running in the project. If a workload's resource usage, such as CPU and memory, is consistent, the VPA can determine recommendations for resources in a few minutes. If a workload's resource usage is inconsistent, the VPA must collect metrics at various resource usage intervals for the VPA to make an accurate recommendation. 2.5.4.3. Automatically applying VPA recommendations on pod creation To use the VPA to apply the recommended resources only when a pod is first deployed, create a VPA CR for a specific workload object with updateMode set to Initial . Then, manually delete any pods associated with the workload object that you want to use the VPA recommendations. In the Initial mode, the VPA does not delete pods and does not update the pods as it learns new resource recommendations. Example VPA CR for the Initial mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Initial" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Initial . The VPA assigns resources when pods are created and does not change the resources during the lifetime of the pod. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.4.4. Manually applying VPA recommendations To use the VPA to only determine the recommended CPU and memory values, create a VPA CR for a specific workload object with updateMode set to off . When the pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and records those recommendations in the status field of the VPA CR. The VPA does not update the pods as it determines new resource recommendations. 
Example VPA CR for the Off mode apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Off" 3 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Off . You can view the recommendations using the following command. USD oc get vpa <vpa-name> --output yaml With the recommendations, you can edit the workload object to add CPU and memory requests, then delete and redeploy the pods using the recommended resources. Note Before a VPA can determine recommended resources and apply the recommendations to new pods, operating pods must exist and be running in the project. To obtain the most accurate recommendations from the VPA, wait at least 8 days for the pods to run and for the VPA to stabilize. 2.5.4.5. Exempting containers from applying VPA recommendations If your workload object has multiple containers and you do not want the VPA to evaluate and act on all of the containers, create a VPA CR for a specific workload object and add a resourcePolicy to opt-out specific containers. When the VPA updates the pods with recommended resources, any containers with a resourcePolicy are not updated and the VPA does not present recommendations for those containers in the pod. apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" 1 The type of workload object you want this VPA CR to manage. 2 The name of the workload object you want this VPA CR to manage. 3 Set the mode to Auto , Recreate , or Off . The Recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. 4 Specify the containers you want to opt-out and set mode to Off . For example, a pod has two containers, the same resource requests and limits: # ... spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi # ... After launching a VPA CR with the backend container set to opt-out, the VPA terminates and recreates the pod with the recommended resources applied only to the frontend container: ... spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k ... name: backend resources: limits: cpu: "1" memory: 500Mi requests: cpu: 500m memory: 100Mi ... 2.5.4.6. Performance tuning the VPA Operator As a cluster administrator, you can tune the performance of your Vertical Pod Autoscaler Operator (VPA) to limit the rate at which the VPA makes requests of the Kubernetes API server and to specify the CPU and memory resources for the VPA recommender, updater, and admission controller component pods. Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. 
However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload had been running for some time. These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions. You can perform the following tunings on the VPA components by editing the VerticalPodAutoscalerController custom resource (CR): To prevent throttling and pod admission delays, set the queries-per-second (QPS) and burst rates for VPA requests of the Kubernetes API server by using the kube-api-qps and kube-api-burst parameters. To ensure sufficient CPU and memory, set the CPU and memory requests for VPA component pods by using the standard cpu and memory resource requests. To configure the VPA Operator to monitor only workloads that are being managed by a VPA CR, set the memory-saver parameter to true for the recommender component. For guidelines on the resources and rate limits that you could set for each VPA component, the following tables provide recommended baseline values, depending on the size of your cluster and other factors. Important These recommended values were derived from internal Red Hat testing on clusters that are not necessarily representative of real-world clusters. You should test these values in a non-production cluster before configuring a production cluster. Table 2.2. Requests by containers in the cluster Component 1-500 containers 500-1000 containers 1000-2000 containers 2000-4000 containers 4000+ containers CPU Memory CPU Memory CPU Memory CPU Memory CPU Memory Admission 25m 50Mi 25m 75Mi 40m 150Mi 75m 260Mi (0.03c)/2 + 10 [1] (0.1c)/2 + 50 [1] Recommender 25m 100Mi 50m 160Mi 75m 275Mi 120m 420Mi (0.05c)/2 + 50 [1] (0.15c)/2 + 120 [1] Updater 25m 100Mi 50m 220Mi 80m 350Mi 150m 500Mi (0.07c)/2 + 20 [1] (0.15c)/2 + 200 [1] c is the number of containers in the cluster. Note It is recommended that you set the memory limit on your containers to at least double the recommended requests in the table. However, because CPU is a compressible resource, setting CPU limits for containers can throttle the VPA. As such, it is recommended that you do not set a CPU limit on your containers. Table 2.3. Rate limits by VPAs in the cluster Component 1 - 150 VPAs 151 - 500 VPAs 501-2000 VPAs 2001-4000 VPAs QPS Limit [1] Burst [2] QPS Limit Burst QPS Limit Burst QPS Limit Burst Recommender 5 10 30 60 60 120 120 240 Updater 5 10 30 60 60 120 120 240 QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default for the updater and recommender pods is 5.0 . Burst specifies the burst limit when making requests to Kubernetes API server. The default for the updater and recommender pods is 10.0 . Note If you have more than 4000 VPAs in your cluster, it is recommended that you start performance tuning with the values in the table and slowly increase the values until you achieve the desired recommender and updater latency and performance. 
You should adjust these values slowly because increased QPS and Burst could affect the cluster health and slow down the Kubernetes API server if too many API requests are being sent to the API server from the VPA components. The following example VPA controller CR is for a cluster with 1000 to 2000 containers and a pod creation surge of 26 to 50. The CR sets the following values: The container memory and CPU requests for all three VPA components The container memory limit for all three VPA components The QPS and burst rates for all three VPA components The memory-saver parameter to true for the VPA recommender component Example VerticalPodAutoscalerController CR apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15 1 Specifies the tuning parameters for the VPA admission controller. 2 Specifies the API QPS and burst rates for the VPA admission controller. kube-api-qps : Specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default is 5.0 . kube-api-burst : Specifies the burst limit when making requests to Kubernetes API server. The default is 10.0 . 3 Specifies the resource requests and limits for the VPA admission controller pod. 4 Specifies the tuning parameters for the VPA recommender. 5 Specifies that the VPA Operator monitors only workloads with a VPA CR. The default is false . 6 Specifies the tuning parameters for the VPA updater. You can verify that the settings were applied to each VPA component pod. Example updater pod apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 # ... resources: requests: cpu: 80m memory: 350M # ... Example admission controller pod apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 # ... resources: requests: cpu: 40m memory: 150Mi # ... Example recommender pod apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler # ... spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true # ... resources: requests: cpu: 75m memory: 275Mi # ... 2.5.4.7. 
Using an alternative recommender You can use your own recommender to autoscale based on your own algorithms. If you do not specify an alternative recommender, OpenShift Container Platform uses the default recommender, which suggests CPU and memory requests based on historical usage. Because there is no universal recommendation policy that applies to all types of workloads, you might want to create and deploy different recommenders for specific workloads. For example, the default recommender might not accurately predict future resource usage when containers exhibit certain resource behaviors, such as cyclical patterns that alternate between usage spikes and idling as used by monitoring applications, or recurring and repeating patterns used with deep learning applications. Using the default recommender with these usage behaviors might result in significant over-provisioning and Out of Memory (OOM) kills for your applications. Note Instructions for how to create a recommender are beyond the scope of this documentation, Procedure To use an alternative recommender for your pods: Create a service account for the alternative recommender and bind that service account to the required cluster role: apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> 1 Creates a service account for the recommender in the namespace where the recommender is deployed. 2 Binds the recommender service account to the metrics-reader role. Specify the namespace where the recommender is to be deployed. 3 Binds the recommender service account to the vpa-actor role. Specify the namespace where the recommender is to be deployed. 4 Binds the recommender service account to the vpa-target-reader role. Specify the namespace where the recommender is to be deployed. To add the alternative recommender to the cluster, create a Deployment object similar to the following: apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true 1 Creates a container for your alternative recommender. 2 Specifies your recommender image. 
3 Associates the service account that you created for the recommender. A new pod is created for the alternative recommender in the same namespace. USD oc get pods Example output NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s Configure a VPA CR that includes the name of the alternative recommender Deployment object. Example VPA CR to include the alternative recommender apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: "apps/v1" kind: Deployment 2 name: frontend 1 Specifies the name of the alternative recommender deployment. 2 Specifies the name of an existing workload object you want this VPA to manage. 2.5.5. Using the Vertical Pod Autoscaler Operator You can use the Vertical Pod Autoscaler Operator (VPA) by creating a VPA custom resource (CR). The CR indicates which pods it should analyze and determines the actions the VPA should take with those pods. You can use the VPA to scale built-in resources such as deployments or stateful sets, and custom resources that manage pods. For more information on using the VPA with custom resources, see "Using the Vertical Pod Autoscaler Operator with Custom Resources." Prerequisites The workload object that you want to autoscale must exist. If you want to use an alternative recommender, a deployment including that recommender must exist. Procedure To create a VPA CR for a specific workload object: Change to the project where the workload object you want to scale is located. Create a VPA CR YAML file: apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: "apps/v1" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: "Auto" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: "Off" recommenders: 5 - name: my-recommender 1 Specify the type of workload object you want this VPA to manage: Deployment , StatefulSet , Job , DaemonSet , ReplicaSet , or ReplicationController . 2 Specify the name of an existing workload object you want this VPA to manage. 3 Specify the VPA mode: auto to automatically apply the recommended resources on pods associated with the controller. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. recreate to automatically apply the recommended resources on pods associated with the workload object. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. The recreate mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. initial to automatically apply the recommended resources when pods associated with the workload object are created. The VPA does not update the pods as it learns new resource recommendations. off to only generate resource recommendations for the pods associated with the workload object. The VPA does not update the pods as it learns new resource recommendations and does not apply the recommendations to new pods. 4 Optional. Specify the containers you want to opt-out and set the mode to Off . 5 Optional. Specify an alternative recommender. 
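If you want to check the manifest for errors before applying it, you can perform a client-side dry run first. The following is a minimal sketch; the file name vpa-recommender.yaml is an assumption: USD oc create -f vpa-recommender.yaml --dry-run=client -o yaml The command prints the object that would be created without sending it to the cluster.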
Create the VPA CR: USD oc create -f <file-name>.yaml After a few moments, the VPA learns the resource usage of the containers in the pods associated with the workload object. You can view the VPA recommendations using the following command: USD oc get vpa <vpa-name> --output yaml The output shows the recommendations for CPU and memory requests, similar to the following: Example output ... status: ... recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: "274357142" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: "498558823" ... 1 lowerBound is the minimum recommended resource levels. 2 target is the recommended resource levels. 3 uncappedTarget is the most recent resource recommendations. 4 upperBound is the highest recommended resource levels. 2.5.5.1. Example custom resources for the Vertical Pod Autoscaler The Vertical Pod Autoscaler Operator (VPA) can update not only built-in resources such as deployments or stateful sets, but also custom resources that manage pods. In order to use the VPA with a custom resource, when you create the CustomResourceDefinition (CRD) object, you must configure the labelSelectorPath field in the /scale subresource. The /scale subresource creates a Scale object. The labelSelectorPath field defines the JSON path inside the custom resource that corresponds to Status.Selector in the Scale object and in the custom resource. The following is an example of a CustomResourceDefinition and a CustomResource that fulfill these requirements, along with a VerticalPodAutoscaler definition that targets the custom resource. The following example shows the /scale subresource contract. Note This example does not result in the VPA scaling pods because there is no controller for the custom resource that allows it to own any pods. As such, you must write a controller in a language supported by Kubernetes to manage the reconciliation and state management between the custom resource and your pods. The example illustrates the configuration for the VPA to understand the custom resource as scalable. Example custom CRD, CR apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod 1 Specifies the JSON path that corresponds to the status.selector field of the custom resource object. Example custom CR apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: "app=scalable-cr" 1 replicas: 1 1 Specify the label type to apply to managed pods. This is the field referenced by the labelSelectorPath in the custom resource definition object.
Example VPA object apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: "Auto" 2.5.6. Uninstalling the Vertical Pod Autoscaler Operator You can remove the Vertical Pod Autoscaler Operator (VPA) from your OpenShift Container Platform cluster. After uninstalling, the resource requests for the pods already modified by an existing VPA CR do not change. Any new pods get the resources defined in the workload object, not the recommendations made by the Vertical Pod Autoscaler Operator. Note You can remove a specific VPA CR by using the oc delete vpa <vpa-name> command. The same actions apply for resource requests as uninstalling the vertical pod autoscaler. After removing the VPA Operator, it is recommended that you remove the other components associated with the Operator to avoid potential issues. Prerequisites The Vertical Pod Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-vertical-pod-autoscaler project. For the VerticalPodAutoscaler Operator, click the Options menu and select Uninstall Operator . Optional: To remove all operands associated with the Operator, in the dialog box, select Delete all operand instances for this operator checkbox. Click Uninstall . Optional: Use the OpenShift CLI to remove the VPA components: Delete the VPA namespace: USD oc delete namespace openshift-vertical-pod-autoscaler Delete the VPA custom resource definition (CRD) objects: USD oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io USD oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io USD oc delete crd verticalpodautoscalers.autoscaling.k8s.io Deleting the CRDs removes the associated roles, cluster roles, and role bindings. Note This action removes from the cluster all user-created VPA CRs. If you re-install the VPA, you must create these objects again. Delete the MutatingWebhookConfiguration object by running the following command: USD oc delete MutatingWebhookConfiguration vpa-webhook-config Delete the VPA Operator: USD oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler 2.6. Providing sensitive data to pods by using secrets Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an administrator, you can use Secret objects to provide this information without exposing that information in clear text. 2.6.1. Understanding secrets The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 
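In the definition that follows, the values in the data map must be base64 encoded before they are added to the Secret object. You can generate the encoded values with the base64 utility; the following is a minimal sketch that assumes example credentials of myuser and mypassword: USD echo -n 'myuser' | base64 Example output bXl1c2Vy USD echo -n 'mypassword' | base64 Example output bXlwYXNzd29yZA== The -n flag prevents a trailing newline from being included in the encoded value.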
YAML Secret object definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary . 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod's service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume). 2.6.1.1. Types of secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/basic-auth : Use with Basic authentication kubernetes.io/dockercfg : Use as an image pull secret kubernetes.io/dockerconfigjson : Use as an image pull secret kubernetes.io/service-account-token : Use to obtain a legacy service account API token kubernetes.io/ssh-auth : Use with SSH key authentication kubernetes.io/tls : Use with TLS certificate authorities Specify type: Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. For examples of creating different types of secrets, see Understanding how to create secrets . 2.6.1.2. Secret data keys Secret keys must be in a DNS subdomain. 2.6.1.3. Automatically generated image pull secrets By default, OpenShift Container Platform creates an image pull secret for each service account. Note Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created. After upgrading to 4.18, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. 
However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 2.6.2. Understanding how to create secrets As an administrator, you must create a secret before developers can create the pods that depend on that secret. When creating secrets: Create a secret object that contains the data you want to keep secret. The specific data required for each secret type is described in the following sections. Example YAML object that creates an opaque secret apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB 1 Specifies the type of secret. 2 Specifies encoded string and data. 3 Specifies decoded string and data. Use either the data or stringData fields, not both. Update the pod's service account to reference the secret: YAML of a service account that uses a secret apiVersion: v1 kind: ServiceAccount ... secrets: - name: test-secret Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume): YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never 1 Add a volumeMounts field to each container that needs the secret. 2 Specifies an unused directory name where you would like the secret to appear. Each key in the secret data map becomes the filename under mountPath . 3 Set to true . If true, this instructs the driver to provide a read-only volume. 4 Specifies the name of the secret. YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Specifies the environment variable that consumes the secret key. YAML of a build config populating environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest' 1 Specifies the environment variable that consumes the secret key. 2.6.2.1. Secret creation restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers.
As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. Image pull secrets use service accounts for the automatic injection of the secret into all pods in a namespace. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to a Secret object. Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that could exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.6.2.2. Creating an opaque secret As an administrator, you can create an opaque secret, which allows you to store unstructured key:value pairs that can contain arbitrary values. Procedure Create a Secret object in a YAML file. For example: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Use the following command to create a Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.3. Creating a legacy service account token secret As an administrator, you can create a legacy service account token secret, which allows you to distribute a service account token to applications that must authenticate to the API. Warning It is recommended to obtain bound service account tokens using the TokenRequest API instead of using legacy service account token secrets. You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a nonexpiring token in a readable API object is acceptable to you. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. Workloads are automatically injected with a projected volume to obtain a bound service account token. If your workload needs an additional service account token, add an additional projected volume in your workload manifest. For more information, see "Configuring bound service account tokens using volume projection". Procedure Create a Secret object in a YAML file: Example Secret object apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: "sa-name" 1 type: kubernetes.io/service-account-token 2 1 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 
2 Specifies a service account token secret. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets Configuring bound service account tokens using volume projection Understanding and creating service accounts 2.6.2.4. Creating a basic authentication secret As an administrator, you can create a basic authentication secret, which allows you to store the credentials needed for basic authentication. When using this secret type, the data parameter of the Secret object must contain the following keys encoded in the base64 format: username : the user name for authentication password : the password or token for authentication Note You can use the stringData parameter to use clear text content. Procedure Create a Secret object in a YAML file: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 stringData: 2 username: admin password: <password> 1 Specifies a basic authentication secret. 2 Specifies the basic authentication values to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.5. Creating an SSH authentication secret As an administrator, you can create an SSH authentication secret, which allows you to store data used for SSH authentication. When using this secret type, the data parameter of the Secret object must contain the SSH credential to use. Procedure Create a Secret object in a YAML file on a control plane node: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y ... 1 Specifies an SSH authentication secret. 2 Specifies the SSH key/value pair as the SSH credentials to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.6. Creating a Docker configuration secret As an administrator, you can create a Docker configuration secret, which allows you to store the credentials for accessing a container image registry. kubernetes.io/dockercfg . Use this secret type to store your local Docker configuration file. The data parameter of the secret object must contain the contents of a .dockercfg file encoded in the base64 format. kubernetes.io/dockerconfigjson . Use this secret type to store your local Docker configuration JSON file.
The data parameter of the secret object must contain the contents of a .docker/config.json file encoded in the base64 format. Procedure Create a Secret object in a YAML file. Example Docker configuration secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockercfg 1 data: .dockercfg:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration file. 2 The output of a base64-encoded Docker configuration file Example Docker configuration JSON secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration JSON file. 2 The output of a base64-encoded Docker configuration JSON file Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.6.2.7. Creating a secret using the web console You can create secrets using the web console. Procedure Navigate to Workloads Secrets . Click Create From YAML . Edit the YAML manually to your specifications, or drag and drop a file into the YAML editor. For example: apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com 1 This example specifies an opaque secret; however, you may see other secret types such as service account token secret, basic authentication secret, SSH authentication secret, or a secret that uses Docker configuration. 2 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. Click Create . Click Add Secret to workload . From the drop-down menu, select the workload to add. Click Save . 2.6.3. Understanding how to update secrets When you modify the value of a secret, the value (used by an already running pod) will not dynamically change. To change a secret, you must delete the original pod and create a new pod (perhaps with an identical PodSpec). Updating a secret follows the same workflow as deploying a new Container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods will report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.6.4. Creating and using secrets As an administrator, you can create a service account token secret.
This allows you to distribute a service account token to applications that must authenticate to the API. Procedure Create a service account in your namespace by running the following command: USD oc create sa <service_account_name> -n <your_namespace> Save the following YAML example to a file named service-account-token-secret.yaml . The example includes a Secret object configuration that you can use to generate a service account token: apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: "sa-name" 2 type: kubernetes.io/service-account-token 3 1 Replace <secret_name> with the name of your service token secret. 2 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 3 Specifies a service account token secret type. Generate the service account token by applying the file: USD oc apply -f service-account-token-secret.yaml Get the service account token from the secret by running the following command: USD oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1 Example output ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA 1 Replace <sa_token_secret> with the name of your service token secret. Use your service account token to authenticate with the API of your cluster: USD curl -X GET <openshift_cluster_api> --header "Authorization: Bearer <token>" 1 2 1 Replace <openshift_cluster_api> with the OpenShift cluster API. 2 Replace <token> with the service account token that is output in the preceding command. 2.6.5. About using signed certificates with secrets To secure communication to your service, you can configure OpenShift Container Platform to generate a signed serving certificate/key pair that you can add into a secret in a project. A service serving certificate secret is intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Service Pod spec configured for a service serving certificates secret. apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1 # ... 1 Specify the name for the certificate Other pods can trust cluster-created certificates (which are only signed for internal DNS names), by using the CA bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. 
A new certificate is created. 2.6.5.1. Generating signed certificates for use with secrets To use a signed serving certificate/key pair with a pod, create or edit the service to add the service.beta.openshift.io/serving-cert-secret-name annotation, then add the secret to the pod. Procedure To create a service serving certificate secret: Edit the Pod spec for your service. Add the service.beta.openshift.io/serving-cert-secret-name annotation with the name you want to use for your secret. kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. Create the service: USD oc create -f <file-name>.yaml View the secret to make sure it was created: View a list of all secrets: USD oc get secrets Example output NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m View details on your secret: USD oc describe secret my-cert Example output Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes Edit your Pod spec with that secret. apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-volume mountPath: "/etc/my-path" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: tls.crt path: my-group/tls.crt mode: 511 When it is available, your pod will run. The certificate will be good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 2.6.6. Troubleshooting secrets If service certificate generation fails, the service's service.beta.openshift.io/serving-cert-generation-error annotation contains a message similar to the following: secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num . Delete the secret: USD oc delete secret <secret_name> Clear the annotations: USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command that removes an annotation has a - after the annotation name to be removed. 2.7.
Providing sensitive data to pods by using an external secrets store Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an alternative to using Kubernetes Secret objects to provide sensitive information, you can use an external secrets store to store the sensitive information. You can use the Secrets Store CSI Driver Operator to integrate with an external secrets store and mount the secret content as a pod volume. 2.7.1. About the Secrets Store CSI Driver Operator Kubernetes secrets are stored with Base64 encoding. etcd provides encryption at rest for these secrets, but when secrets are retrieved, they are decrypted and presented to the user. If role-based access control is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret. Additionally, anyone who is authorized to create a pod in a namespace can use that access to read any secret in that namespace. To store and manage your secrets securely, you can configure the OpenShift Container Platform Secrets Store Container Storage Interface (CSI) Driver Operator to mount secrets from an external secret management system, such as Azure Key Vault, by using a provider plugin. Applications can then use the secret, but the secret does not persist on the system after the application pod is destroyed. The Secrets Store CSI Driver Operator, secrets-store.csi.k8s.io , enables OpenShift Container Platform to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as a volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container's file system. Secrets store volumes are mounted in-line. 2.7.1.1. Secrets store providers The Secrets Store CSI Driver Operator has been tested with the following secrets store providers: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault Google Secret Manager HashiCorp Vault Note Red Hat does not test all factors associated with third-party secrets store provider functionality. For more information about third-party support, see the Red Hat third-party support policy . 2.7.1.2. Automatic rotation The Secrets Store CSI driver periodically rotates the content in the mounted volume with the content from the external secrets store. If a secret is updated in the external secrets store, the secret will be updated in the mounted volume. The Secrets Store CSI Driver Operator polls for updates every 2 minutes. If you enabled synchronization of mounted content as Kubernetes secrets, the Kubernetes secrets are also rotated. Applications consuming the secret data must watch for updates to the secrets. 2.7.2. Installing the Secrets Store CSI driver Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To install the Secrets Store CSI driver: Install the Secrets Store CSI Driver Operator: Log in to the web console. Click Operators OperatorHub . Locate the Secrets Store CSI Driver Operator by typing "Secrets Store CSI" in the filter box. Click the Secrets Store CSI Driver Operator button. On the Secrets Store CSI Driver Operator page, click Install . On the Install Operator page, ensure that: All namespaces on the cluster (default) is selected. Installed Namespace is set to openshift-cluster-csi-drivers . 
Click Install . After the installation finishes, the Secrets Store CSI Driver Operator is listed in the Installed Operators section of the web console. Create the ClusterCSIDriver instance for the driver ( secrets-store.csi.k8s.io ): Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, click Create ClusterCSIDriver . Use the following YAML file: apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed Click Create . 2.7.3. Mounting secrets from an external secrets store to a CSI volume After installing the Secrets Store CSI Driver Operator, you can mount secrets from one of the following external secrets stores to a CSI volume: AWS Secrets Manager AWS Systems Manager Parameter Store Azure Key Vault Google Secret Manager HashiCorp Vault 2.7.3.1. Mounting secrets from AWS Secrets Manager You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Secrets Manager to a Container Storage Interface (CSI) volume in OpenShift Container Platform. To mount secrets from AWS Secrets Manager, your cluster must be installed on AWS and use AWS Security Token Service (STS). Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured AWS Secrets Manager to store the required secrets. You extracted and prepared the ccoctl binary. You installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Secrets Manager provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Secrets Manager provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. 
Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "secretsmanager:GetSecretValue" - "secretsmanager:DescribeSecret" effect: Allow resource: "arn:*:secretsmanager:*:*:secret:testSecret-??????" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. 
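If you prefer to strip the https:// prefix in the same command instead of copying the name manually, the following is a minimal sketch that pipes the issuer value through sed: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' | sed 's|^https://||' Example output <oidc_provider_name>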
Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testSecret" objectType: "secretsmanager" 1 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Secrets Manager in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testSecret View a secret in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.2. Mounting secrets from AWS Systems Manager Parameter Store You can use the Secrets Store CSI Driver Operator to mount secrets from AWS Systems Manager Parameter Store to a Container Storage Interface (CSI) volume in OpenShift Container Platform. 
To mount secrets from AWS Systems Manager Parameter Store, your cluster must be installed on AWS and use AWS Security Token Service (STS). Prerequisites Your cluster is installed on AWS and uses AWS Security Token Service (STS). You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured AWS Systems Manager Parameter Store to store the required secrets. You extracted and prepared the ccoctl binary. You installed the jq CLI tool. You have access to the cluster as a user with the cluster-admin role. Procedure Install the AWS Systems Manager Parameter Store provider: Create a YAML file with the following configuration for the provider resources: Important The AWS Systems Manager Parameter Store provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: "/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f aws-provider.yaml Grant permission to allow the service account to read the AWS secret object: Create a 
directory to contain the credentials request by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "ssm:GetParameter" - "ssm:GetParameters" effect: Allow resource: "arn:*:ssm:*:*:parameter/testParameter*" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Bind the service account with the role ARN by running the following command: USD oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn="<aws_role_arn>" Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testParameter" objectType: "ssmparameter" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as aws . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-aws.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 
3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from AWS Systems Manager Parameter Store in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testParameter View a secret in the pod mount by running the following command: USD oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testParameter Example output <secret_value> Additional resources Configuring the Cloud Credential Operator utility 2.7.3.3. Mounting secrets from Azure Key Vault You can use the Secrets Store CSI Driver Operator to mount secrets from Azure Key Vault to a Container Storage Interface (CSI) volume in OpenShift Container Platform. To mount secrets from Azure Key Vault, your cluster must be installed on Microsoft Azure. Prerequisites Your cluster is installed on Azure. You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured Azure Key Vault to store the required secrets. You installed the Azure CLI ( az ). You have access to the cluster as a user with the cluster-admin role. Procedure Install the Azure Key Vault provider: Create a YAML file with the following configuration for the provider resources: Important The Azure Key Vault provider for the Secrets Store CSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream Azure documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example azure-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m
memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: "/provider" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: "/var/run/secrets-store-csi-providers" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-azure service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f azure-provider.yaml Create a service principal to access the key vault: Set the service principal client secret as an environment variable by running the following command: USD SERVICE_PRINCIPAL_CLIENT_SECRET="USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)" Set the service principal client ID as an environment variable by running the following command: USD SERVICE_PRINCIPAL_CLIENT_ID="USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)" Create a generic secret with the service principal client secret and ID by running the following command: USD oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET} Apply the secrets-store.csi.k8s.io/used=true label to allow the provider to find this nodePublishSecretRef secret: USD oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-azure.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: "false" useVMManagedIdentity: "false" userAssignedIdentityID: "" keyvaultName: "kvname" objects: | array: - | objectName: secret1 objectType: secret tenantId: "tid" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as azure . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-azure.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-azure-provider" 3 nodePublishSecretRef: name: secrets-store-creds 4 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. 
This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. 4 Specify the name of the Kubernetes secret that contains the service principal credentials to access Azure Key Vault. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from Azure Key Vault in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-azure-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output secret1 View a secret in the pod mount by running the following command: USD oc exec my-azure-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1 Example output my-secret-value 2.7.3.4. Mounting secrets from Google Secret Manager You can use the Secrets Store CSI Driver Operator to mount secrets from Google Secret Manager to a Container Storage Interface (CSI) volume in OpenShift Container Platform. To mount secrets from Google Secret Manager, your cluster must be installed on Google Cloud Platform (GCP). Prerequisites You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You configured Google Secret Manager to store the required secrets. You created a service account key named key.json from your Google Cloud service account. You have access to the cluster as a user with the cluster-admin role. Procedure Install the Google Secret Manager provider: Create a YAML file with the following configuration for the provider resources: Example gcp-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-gcp-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-gcp-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-gcp-role rules: - apiGroups: - "" resources: - serviceaccounts/token verbs: - create - apiGroups: - "" resources: - serviceaccounts verbs: - get --- apiVersion: apps/v1 kind: DaemonSet metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers labels: app: csi-secrets-store-provider-gcp spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-gcp template: metadata: labels: app: csi-secrets-store-provider-gcp spec: serviceAccountName: csi-secrets-store-provider-gcp initContainers: - name: chown-provider-mount image: busybox command: - chown - "1000:1000" - /etc/kubernetes/secrets-store-csi-providers volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol securityContext: privileged: true hostNetwork: false hostPID: false hostIPC: false containers: - name: provider image: us-docker.pkg.dev/secretmanager-csi/secrets-store-csi-driver-provider-gcp/plugin@sha256:a493a78bbb4ebce5f5de15acdccc6f4d19486eae9aa4fa529bb60ac112dd6650 securityContext: privileged: true imagePullPolicy: IfNotPresent resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi env: - name: TARGET_DIR value: "/etc/kubernetes/secrets-store-csi-providers" volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: 
providervol mountPropagation: None readOnly: false livenessProbe: failureThreshold: 3 httpGet: path: /live port: 8095 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 volumes: - name: providervol hostPath: path: /etc/kubernetes/secrets-store-csi-providers tolerations: - key: kubernetes.io/arch operator: Equal value: amd64 effect: NoSchedule nodeSelector: kubernetes.io/os: linux Grant privileged access to the csi-secrets-store-provider-gcp service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-gcp -n openshift-cluster-csi-drivers Create the provider resources by running the following command: USD oc apply -f gcp-provider.yaml Grant permission to read the Google Secret Manager secret: Create a new project by running the following command: USD oc new-project my-namespace Label the my-namespace namespace for pod security admission by running the following command: USD oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite Create a service account for the pod deployment: USD oc create serviceaccount my-service-account --namespace=my-namespace Create a generic secret from the key.json file by running the following command: USD oc create secret generic secrets-store-creds -n my-namespace --from-file=key.json 1 1 You created this key.json file from the Google Secret Manager. Apply the secrets-store.csi.k8s.io/used=true label to allow the provider to find this nodePublishSecretRef secret: USD oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-gcp.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-gcp-provider 1 namespace: my-namespace 2 spec: provider: gcp 3 parameters: 4 secrets: | - resourceName: "projects/my-project/secrets/testsecret1/versions/1" path: "testsecret1.txt" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as gcp . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-gcp.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-gcp-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: my-service-account 3 containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-gcp-provider" 4 nodePublishSecretRef: name: secrets-store-creds 5 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the service account you created. 4 Specify the name of the secret provider class. 
5 Specify the name of the Kubernetes secret that contains the service principal credentials to access Google Secret Manager. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from Google Secret Manager in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec my-gcp-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testsecret1 View a secret in the pod mount by running the following command: USD oc exec my-gcp-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testsecret1 Example output <secret_value> 2.7.3.5. Mounting secrets from HashiCorp Vault You can use the Secrets Store CSI Driver Operator to mount secrets from HashiCorp Vault to a Container Storage Interface (CSI) volume in OpenShift Container Platform. Important Mounting secrets from HashiCorp Vault by using the Secrets Store CSI Driver Operator has been tested with the following cloud providers: Amazon Web Services (AWS) Microsoft Azure Other cloud providers might work, but have not been tested yet. Additional cloud providers might be tested in the future. Prerequisites You installed the Secrets Store CSI Driver Operator. See Installing the Secrets Store CSI driver for instructions. You installed Helm. You have access to the cluster as a user with the cluster-admin role. Procedure Add the HashiCorp Helm repository by running the following command: USD helm repo add hashicorp https://helm.releases.hashicorp.com Update all repositories to ensure that Helm is aware of the latest versions by running the following command: USD helm repo update Install the HashiCorp Vault provider: Create a new project for Vault by running the following command: USD oc new-project vault Label the vault namespace for pod security admission by running the following command: USD oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite Grant privileged access to the vault service account by running the following command: USD oc adm policy add-scc-to-user privileged -z vault -n vault Grant privileged access to the vault-csi-provider service account by running the following command: USD oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault Deploy HashiCorp Vault by running the following command: USD helm install vault hashicorp/vault --namespace=vault \ --set "server.dev.enabled=true" \ --set "injector.enabled=false" \ --set "csi.enabled=true" \ --set "global.openshift=true" \ --set "injector.agentImage.repository=docker.io/hashicorp/vault" \ --set "server.image.repository=docker.io/hashicorp/vault" \ --set "csi.image.repository=docker.io/hashicorp/vault-csi-provider" \ --set "csi.agent.image.repository=docker.io/hashicorp/vault" \ --set "csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers" Patch the vault-csi-driver daemon set to set the securityContext to privileged by running the following command: USD oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/securityContext", "value": {"privileged": true} }]' Verify that the vault-csi-provider pods have started properly by running the following command: USD oc get pods -n vault Example output NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 
Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s Configure HashiCorp Vault to store the required secrets: Create a secret by running the following command: USD oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value Verify that the secret is readable at the path secret/example1 by running the following command: USD oc exec vault-0 --namespace=vault -- vault kv get secret/example1 Example output = Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value Configure Vault to use Kubernetes authentication: Enable the Kubernetes auth method by running the following command: USD oc exec vault-0 --namespace=vault -- vault auth enable kubernetes Example output Success! Enabled kubernetes auth method at: kubernetes/ Configure the Kubernetes auth method: Set the token reviewer as an environment variable by running the following command: USD TOKEN_REVIEWER_JWT="USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)" Set the Kubernetes service IP address as an environment variable by running the following command: USD KUBERNETES_SERVICE_IP="USD(oc get svc kubernetes --namespace=default -o go-template="{{ .spec.clusterIP }}")" Update the Kubernetes auth method by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config \ issuer="https://kubernetes.default.svc.cluster.local" \ token_reviewer_jwt="USD{TOKEN_REVIEWER_JWT}" \ kubernetes_host="https://USD{KUBERNETES_SERVICE_IP}:443" \ kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt Example output Success! Data written to: auth/kubernetes/config Create a policy for the application by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path "secret/data/*" { capabilities = ["read"] } EOF Example output Success! Uploaded policy: csi Create an authentication role to access the application by running the following command: USD oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi \ bound_service_account_names=default \ bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace \ policies=csi \ ttl=20m Example output Success! 
Data written to: auth/kubernetes/role/csi Verify that all of the vault pods are running properly by running the following command: USD oc get pods -n vault Example output NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m Verify that all of the secrets-store-csi-driver pods are running properly by running the following command: USD oc get pods -n openshift-cluster-csi-drivers | grep -E "secrets" Example output secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m Create a secret provider class to define your secrets store provider: Create a YAML file that defines the SecretProviderClass object: Example secret-provider-class-vault.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: "csi" vaultAddress: "http://vault.vault:8200" objects: | - secretPath: "secret/data/example1" objectName: "testSecret1" secretKey: "testSecret1" 1 Specify the name for the secret provider class. 2 Specify the namespace for the secret provider class. 3 Specify the provider as vault . 4 Specify the provider-specific configuration parameters. Create the SecretProviderClass object by running the following command: USD oc create -f secret-provider-class-vault.yaml Create a deployment to use this secret provider class: Create a YAML file that defines the Deployment object: Example deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - "/bin/sleep" - "10000" volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-vault-provider" 3 1 Specify the name for the deployment. 2 Specify the namespace for the deployment. This must be the same namespace as the secret provider class. 3 Specify the name of the secret provider class. Create the Deployment object by running the following command: USD oc create -f deployment.yaml Verification Verify that you can access the secrets from your HashiCorp Vault in the pod volume mount: List the secrets in the pod mount by running the following command: USD oc exec busybox-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/ Example output testSecret1 View a secret in the pod mount by running the following command: USD oc exec busybox-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1 Example output my-secret-value 2.7.4. Enabling synchronization of mounted content as Kubernetes secrets You can enable synchronization to create Kubernetes secrets from the content on a mounted volume.
An example where you might want to enable synchronization is to use an environment variable in your deployment to reference the Kubernetes secret. Warning Do not enable synchronization if you do not want to store your secrets on your OpenShift Container Platform cluster and in etcd. Enable this functionality only if you require it, such as when you want to use environment variables to refer to the secret. If you enable synchronization, the secrets from the mounted volume are synchronized as Kubernetes secrets after you start a pod that mounts the secrets. The synchronized Kubernetes secret is deleted when all pods that mounted the content are deleted. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have access to the cluster as a user with the cluster-admin role. Procedure Edit the SecretProviderClass resource by running the following command: USD oc edit secretproviderclass my-azure-provider 1 1 Replace my-azure-provider with the name of your secret provider class. Add the secretsObjects section with the configuration for the synchronized Kubernetes secrets: apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: "test" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: "false" keyvaultName: "kvname" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: "tid" 1 Specify the configuration for synchronized Kubernetes secrets. 2 Specify the name of the Kubernetes Secret object to create. 3 Specify the type of Kubernetes Secret object to create. For example, Opaque or kubernetes.io/tls . 4 Specify the object name or alias of the mounted content to synchronize. 5 Specify the data field from the specified objectName to populate the Kubernetes secret with. Save the file to apply the changes. 2.7.5. Viewing the status of secrets in the pod volume mount You can view detailed information, including the versions, of the secrets in the pod volume mount. The Secrets Store CSI Driver Operator creates a SecretProviderClassPodStatus resource in the same namespace as the pod. You can review this resource to see detailed information, including versions, about the secrets in the pod volume mount. Prerequisites You have installed the Secrets Store CSI Driver Operator. You have installed a secrets store provider. You have created the secret provider class. You have deployed a pod that mounts a volume from the Secrets Store CSI Driver Operator. You have access to the cluster as a user with the cluster-admin role. Procedure View detailed information about the secrets in a pod volume mount by running the following command: USD oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1 1 The name of the secret provider class pod status object is in the format of <pod_name>-<namespace>-<secret_provider_class_name> . Example output ... status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount 2.7.6. 
Uninstalling the Secrets Store CSI Driver Operator Prerequisites Access to the OpenShift Container Platform web console. Administrator access to the cluster. Procedure To uninstall the Secrets Store CSI Driver Operator: Stop all application pods that use the secrets-store.csi.k8s.io provider. Remove any third-party provider plug-in for your chosen secret store. Remove the Container Storage Interface (CSI) driver and associated manifests: Click Administration CustomResourceDefinitions ClusterCSIDriver . On the Instances tab, for secrets-store.csi.k8s.io , on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver . When prompted, click Delete . Verify that the CSI driver pods are no longer running. Uninstall the Secrets Store CSI Driver Operator: Note Before you can uninstall the Operator, you must remove the CSI driver first. Click Operators Installed Operators . On the Installed Operators page, scroll or type "Secrets Store CSI" into the Search by name box to find the Operator, and then click it. On the upper, right of the Installed Operators > Operator details page, click Actions Uninstall Operator . When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually. After uninstalling, the Secrets Store CSI Driver Operator is no longer listed in the Installed Operators section of the web console. 2.8. Authenticating pods with short-term credentials Some OpenShift Container Platform clusters use short-term security credentials for individual components that are created and managed outside the cluster. Applications in customer workloads on these clusters can authenticate by using the short-term authentication method that the cluster uses. 2.8.1. Configuring short-term authentication for workloads To use this authentication method in your applications, you must complete the following steps: Create a federated identity service account in the Identity and Access Management (IAM) settings for your cloud provider. Create an OpenShift Container Platform service account that can impersonate a service account for your cloud provider. Configure any workloads related to your application to use the OpenShift Container Platform service account. 2.8.1.1. Environment and user access requirements To configure this authentication method, you must meet the following requirements: Your cluster must use short-term security credentials . You must have access to the OpenShift CLI ( oc ) as a user with the cluster-admin role. In your cloud provider console, you must have access as a user with privileges to manage Identity and Access Management (IAM) and federated identity configurations. 2.8.2. Configuring GCP Workload Identity authentication for applications on GCP To use short-term authentication for applications on a GCP clusters that use GCP Workload Identity authentication, you must complete the following steps: Configure access in GCP. Create an OpenShift Container Platform service account that can use this access. Deploy customer workloads that authenticate with GCP Workload Identity. Creating a federated GCP service account You can use the Google Cloud console to create a workload identity pool and provider and allow an OpenShift Container Platform service account to impersonate a GCP service account. Prerequisites Your GCP cluster uses GCP Workload Identity. 
You have access to the Google Cloud console as a user with privileges to manage Identity and Access Management (IAM) and workload identity configurations. You have created a Google Cloud project to use with your application. Procedure In the IAM configuration for your Google Cloud project, identify the identity pool and provider that the cluster uses for GCP Workload Identity authentication. Grant permission for external identities to impersonate a GCP service account. With these permissions, an OpenShift Container Platform service account can work as a federated workload identity. For more information, see GCP documentation about allowing your external workload to access Google Cloud resources . Creating an OpenShift Container Platform service account for GCP You create an OpenShift Container Platform service account and annotate it to impersonate a GCP service account. Prerequisites Your GCP cluster uses GCP Workload Identity. You have created a federated GCP service account. You have access to the OpenShift CLI ( oc ) as a user with the cluster-admin role. You have access to the Google Cloud CLI ( gcloud ) as a user with privileges to manage Identity and Access Management (IAM) and workload identity configurations. Procedure Create an OpenShift Container Platform service account to use for GCP Workload Identity pod authentication by running the following command: USD oc create serviceaccount <service_account_name> Annotate the service account with the identity provider and GCP service account to impersonate by running the following command: USD oc patch serviceaccount <service_account_name> -p '{"metadata": {"annotations": {"cloud.google.com/workload-identity-provider": "projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>"}}}' Replace <project_number> , <identity_pool> , and <identity_provider> with the values for your configuration. Note For <project_number> , specify the Google Cloud project number, not the project ID. Annotate the service account with the email address for the GCP service account by running the following command: USD oc patch serviceaccount <service_account_name> -p '{"metadata": {"annotations": {"cloud.google.com/service-account-email": "<service_account_email>"}}}' Replace <service_account_email> with the email address for the GCP service account. Tip GCP service account email addresses typically use the format <service_account_name>@<project_id>.iam.gserviceaccount.com Annotate the service account to use the direct external credentials configuration injection mode by running the following command: USD oc patch serviceaccount <service_account_name> -p '{"metadata": {"annotations": {"cloud.google.com/injection-mode": "direct"}}}' In this mode, the Workload Identity Federation webhook controller directly generates the GCP external credentials configuration and injects them into the pod. Use the Google Cloud CLI ( gcloud ) to specify the permissions for the workload by running the following command: USD gcloud projects add-iam-policy-binding <project_id> --member "<service_account_email>" --role "projects/<project_id>/roles/<role_for_workload_permissions>" Replace <role_for_workload_permissions> with the role for the workload. Specify a role that grants the permissions that your workload requires. 
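Put together, the preceding commands look like the following sketch. The values shown here (the app-x service account, project number 123456789012, pool my-pool, provider my-provider, and the GCP service account email) are hypothetical placeholders rather than values from this procedure; substitute the identifiers from your own IAM configuration and run the commands in the project that hosts your workload:

# Illustrative sequence with hypothetical values; run in the workload's namespace.
oc create serviceaccount app-x
oc patch serviceaccount app-x -p '{"metadata": {"annotations": {"cloud.google.com/workload-identity-provider": "projects/123456789012/locations/global/workloadIdentityPools/my-pool/providers/my-provider"}}}'
oc patch serviceaccount app-x -p '{"metadata": {"annotations": {"cloud.google.com/service-account-email": "app-x@my-project.iam.gserviceaccount.com"}}}'
oc patch serviceaccount app-x -p '{"metadata": {"annotations": {"cloud.google.com/injection-mode": "direct"}}}'
# Finally, grant the workload's permissions on the GCP side with the
# gcloud projects add-iam-policy-binding command shown in the preceding step.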
Verification To verify the service account configuration, inspect the ServiceAccount manifest by running the following command: USD oc get serviceaccount <service_account_name> In the following example, the service-a/app-x OpenShift Container Platform service account can impersonate a GCP service account called app-x : apiVersion: v1 kind: ServiceAccount metadata: name: app-x namespace: service-a annotations: cloud.google.com/workload-identity-provider: "projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>" 1 cloud.google.com/service-account-email: "[email protected]" cloud.google.com/audience: "sts.googleapis.com" 2 cloud.google.com/token-expiration: "86400" 3 cloud.google.com/gcloud-run-as-user: "1000" cloud.google.com/injection-mode: "direct" 4 1 The workload identity provider for the service account of the cluster. 2 The allowed audience for the workload identity provider. 3 The token expiration time period in seconds. 4 The direct external credentials configuration injection mode. Deploying customer workloads that authenticate with GCP Workload Identity To use short-term authentication in your application, you must configure its related pods to use the OpenShift Container Platform service account. Use of the OpenShift Container Platform service account triggers the webhook to mutate the pods so they can impersonate the GCP service account. The following example demonstrates how to deploy a pod that uses the OpenShift Container Platform service account and verify the configuration. Prerequisites Your GCP cluster uses GCP Workload Identity. You have created a federated GCP service account. You have created an OpenShift Container Platform service account for GCP. Procedure To create a pod that authenticates with GCP Workload Identity, create a deployment YAML file similar to the following example: Sample deployment apiVersion: apps/v1 kind: Deployment metadata: name: ubi9 spec: replicas: 1 selector: matchLabels: app: ubi9 template: metadata: labels: app: ubi9 spec: serviceAccountName: "<service_account_name>" 1 containers: - name: ubi image: 'registry.access.redhat.com/ubi9/ubi-micro:latest' command: - /bin/sh - '-c' - | sleep infinity 1 Specify the name of the OpenShift Container Platform service account. Apply the deployment file by running the following command: USD oc apply -f deployment.yaml Verification To verify that a pod is using short-term authentication, run the following command: USD oc get pods -o json | jq -r '.items[0].spec.containers[0].env[] | select(.name=="GOOGLE_APPLICATION_CREDENTIALS")' Example output { "name": "GOOGLE_APPLICATION_CREDENTIALS", "value": "/var/run/secrets/workload-identity/federation.json" } The presence of the GOOGLE_APPLICATION_CREDENTIALS environment variable indicates a pod that authenticates with GCP Workload Identity. To verify additional configuration details, examine the pod specification. The following example pod specifications show the environment variables and volume fields that the webhook mutates. 
Example pod specification with the direct injection mode: apiVersion: v1 kind: Pod metadata: name: app-x-pod namespace: service-a annotations: cloud.google.com/skip-containers: "init-first,sidecar" cloud.google.com/external-credentials-json: |- 1 { "type": "external_account", "audience": "//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/on-prem-kubernetes/providers/<identity_provider>", "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken", "credential_source": { "file": "/var/run/secrets/sts.googleapis.com/serviceaccount/token", "format": { "type": "text" } } } spec: serviceAccountName: app-x initContainers: - name: init-first image: container-image:version containers: - name: sidecar image: container-image:version - name: container-name image: container-image:version env: 2 - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/run/secrets/gcloud/config/federation.json - name: CLOUDSDK_COMPUTE_REGION value: asia-northeast1 volumeMounts: - name: gcp-iam-token readOnly: true mountPath: /var/run/secrets/sts.googleapis.com/serviceaccount - mountPath: /var/run/secrets/gcloud/config name: external-credential-config readOnly: true volumes: - name: gcp-iam-token projected: sources: - serviceAccountToken: audience: sts.googleapis.com expirationSeconds: 86400 path: token - downwardAPI: defaultMode: 288 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['cloud.google.com/external-credentials-json'] path: federation.json name: external-credential-config 1 The external credentials configuration generated by the webhook controller. The Kubernetes downwardAPI volume mounts the configuration into the container filesystem. 2 The webhook-injected environment variables for token-based authentication. 2.9. Creating and using config maps The following sections define config maps and how to create and use them. 2.9.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. 
A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 2.9.2. Creating a config map in the OpenShift Container Platform web console You can create a config map in the OpenShift Container Platform web console. Procedure To create a config map as a cluster administrator: In the Administrator perspective, select Workloads Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . To create a config map as a developer: In the Developer perspective, select Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . 2.9.3. Creating a config map by using the CLI You can use the following command to create a config map from directories, specific files, or literal values. Procedure Create a config map: USD oc create configmap <configmap_name> [options] 2.9.3.1. Creating a config map from a directory You can create a config map from a directory by using the --from-file flag. This method allows you to use multiple files within a directory to create a config map. Each file in the directory is used to populate a key in the config map, where the name of the key is the file name, and the value of the key is the content of the file. For example, the following command creates a config map with the contents of the example-files directory: USD oc create configmap game-config --from-file=example-files/ View the keys in the config map: USD oc describe configmaps game-config Example output Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes You can see that the two keys in the map are created from the file names in the directory specified in the command. The content of those keys might be large, so the output of oc describe only shows the names of the keys and their sizes. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map holding the content of each file in this directory by entering the following command: USD oc create configmap game-config \ --from-file=example-files/ Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps game-config -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: "407" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985 2.9.3.2. Creating a config map from a file You can create a config map from a file by using the --from-file flag. You can pass the --from-file option multiple times to the CLI. You can also specify the key to set in a config map for content imported from a file by passing a key=value expression to the --from-file option. For example: USD oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties Note If you create a config map from a file, you can include files containing non-UTF8 data that are placed in this field without corrupting the non-UTF8 data. OpenShift Container Platform detects binary files and transparently encodes the file as MIME . On the server, the MIME payload is decoded and stored without corrupting the data. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map by specifying a specific file: USD oc create configmap game-config-2 \ --from-file=example-files/game.properties \ --from-file=example-files/ui.properties Create a config map by specifying a key-value pair: USD oc create configmap game-config-3 \ --from-file=game-special-key=example-files/game.properties Verification Enter the oc get command for the object with the -o option to see the values of the keys from the file: USD oc get configmaps game-config-2 -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: "516" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985 Enter the oc get command for the object with the -o option to see the values of the keys from the key-value pair: USD oc get configmaps game-config-3 -o yaml Example output apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: "530" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985 1 This is the key that you set in the preceding step. 2.9.3.3. Creating a config map from literal values You can supply literal values for a config map. The --from-literal option takes a key=value syntax, which allows literal values to be supplied directly on the command line. Procedure Create a config map by specifying a literal value: USD oc create configmap special-config \ --from-literal=special.how=very \ --from-literal=special.type=charm Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps special-config -o yaml Example output apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: "651" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985 2.9.4. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 2.9.4.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 2.9.4.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. 
When this pod is run, the output from the echo command run in the test-container container is as follows: 2.9.4.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: 2.10. Using device plugins to access external resources with pods Device plugins allow you to use a particular device type (GPU, InfiniBand, or other similar computing resources that require vendor-specific initialization and setup) in your OpenShift Container Platform pod without needing to write custom code. 2.10.1. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources. 
Any device plugin must support the following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state changes or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartContainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM(R) Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 2.10.1.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugins/ on the node to serve RPCs from Device Manager. Because device plugins must manage hardware resources, access the host file system, and create sockets, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 2.10.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager by invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes the ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager keeps watching the stream for new updates from the plugin. On the plugin side, the plugin also keeps the stream open, and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection.
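Pods consume the resources that a device plugin advertises through the same requests and limits syntax that is used for CPU and memory. The following pod specification is only a minimal sketch; the resource name example.com/device is a hypothetical placeholder for whatever name your device plugin actually registers:

apiVersion: v1
kind: Pod
metadata:
  name: device-consumer
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: registry.access.redhat.com/ubi9
    command: ["sleep", "3600"]
    resources:
      limits:
        example.com/device: 1 1

1 Requests one device advertised under the hypothetical example.com/device resource name. The scheduler places this pod only on a node where Device Manager reports at least one unallocated device of that type.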
While handling a new pod admission request, the Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks its database to verify whether a corresponding plugin exists. If the plugin exists and, according to its local cache, there are free allocatable devices, the Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 2.10.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by viewing the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to the CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to true. Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started, but only if Device Manager is enabled. 2.11. Including pod priority in pod scheduling decisions You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. Pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node. Pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node. To use priority and preemption, you create priority classes that define the relative weight of your pods. Then, reference a priority class in the pod specification to apply that weight for scheduling. 2.11.1. Understanding pod priority When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, a higher-priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, the scheduler continues to schedule other lower-priority pods. 2.11.1.1.
Pod priority classes You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority. A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than or equal to one billion for critical pods that must not be preempted or evicted. By default, OpenShift Container Platform has two reserved priority classes for critical system pods to have guaranteed scheduling. USD oc get priorityclasses Example output NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s system-node-critical - This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node. Examples of pods that have this priority class are ovnkube-node , and so forth. A number of critical components include the system-node-critical priority class by default, for example: master-api master-controller master-etcd ovn-kubernetes sync system-cluster-critical - This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances. For example, pods configured with the system-node-critical priority class can take priority. However, this priority class does ensure guaranteed scheduling. Examples of pods that can have this priority class are fluentd, add-on components like descheduler, and so forth. A number of critical components include the system-cluster-critical priority class by default, for example: fluentd metrics-server descheduler openshift-user-critical - You can use the priorityClassName field with important pods that cannot bind their resource consumption and do not have predictable resource consumption behavior. Prometheus pods under the openshift-monitoring and openshift-user-workload-monitoring namespaces use the openshift-user-critical priorityClassName . Monitoring workloads use system-critical as their first priorityClass , but this causes problems when monitoring uses excessive memory and the nodes cannot evict them. As a result, monitoring drops priority to give the scheduler flexibility, moving heavy workloads around to keep critical nodes operating. cluster-logging - This priority is used by Fluentd to make sure Fluentd pods are scheduled to nodes over other apps. 2.11.1.2. Pod priority names After you have one or more priority classes, you can create pods that specify a priority class name in a Pod spec. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected. 2.11.2. Understanding pod preemption When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod. When the scheduler preempts one or more pods on a node, the nominatedNodeName field of higher-priority Pod spec is set to the name of the node, along with the nodename field. 
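You can observe this on a pending higher-priority pod while preemption is in progress. The following query is a minimal sketch that assumes a hypothetical pod name; it prints the node that the scheduler has nominated:

$ oc get pod <pod_name> -o jsonpath='{.status.nominatedNodeName}'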
The scheduler uses the nominatedNodeName field to keep track of the resources reserved for pods and also provides information to the user about preemptions in the clusters. After the scheduler preempts a lower-priority pod, the scheduler honors the graceful termination period of the pod. If another node becomes available while scheduler is waiting for the lower-priority pod to terminate, the scheduler can schedule the higher-priority pod on that node. As a result, the nominatedNodeName field and nodeName field of the Pod spec might be different. Also, if the scheduler preempts pods on a node and is waiting for termination, and a pod with a higher-priority pod than the pending pod needs to be scheduled, the scheduler can schedule the higher-priority pod instead. In such a case, the scheduler clears the nominatedNodeName of the pending pod, making the pod eligible for another node. Preemption does not necessarily remove all lower-priority pods from a node. The scheduler can schedule a pending pod by removing a portion of the lower-priority pods. The scheduler considers a node for pod preemption only if the pending pod can be scheduled on the node. 2.11.2.1. Non-preempting priority classes Pods with the preemption policy set to Never are placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled stays in the scheduling queue until sufficient resources are free and it can be scheduled. Non-preempting pods, like other pods, are subject to scheduler back-off. This means that if the scheduler tries unsuccessfully to schedule these pods, they are retried with lower frequency, allowing other pods with lower priority to be scheduled before them. Non-preempting pods can still be preempted by other, high-priority pods. 2.11.2.2. Pod preemption and other scheduler settings If you enable pod priority and preemption, consider your other scheduler settings: Pod priority and pod disruption budget A pod disruption budget specifies the minimum number or percentage of replicas that must be up at a time. If you specify pod disruption budgets, OpenShift Container Platform respects them when preempting pods at a best effort level. The scheduler attempts to preempt pods without violating the pod disruption budget. If no such pods are found, lower-priority pods might be preempted despite their pod disruption budget requirements. Pod priority and pod affinity Pod affinity requires a new pod to be scheduled on the same node as other pods with the same label. If a pending pod has inter-pod affinity with one or more of the lower-priority pods on a node, the scheduler cannot preempt the lower-priority pods without violating the affinity requirements. In this case, the scheduler looks for another node to schedule the pending pod. However, there is no guarantee that the scheduler can find an appropriate node and pending pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. 2.11.2.3. Graceful termination of preempted pods When preempting a pod, the scheduler waits for the pod graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node. 
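The length of this gap is bounded by the preempted pod's terminationGracePeriodSeconds value. The following snippet is a minimal sketch of a lower-priority pod that sets a short period; the 10-second value and the low-priority class name are illustrative assumptions only:

apiVersion: v1
kind: Pod
metadata:
  name: low-priority-worker
spec:
  priorityClassName: low-priority 1
  terminationGracePeriodSeconds: 10 2
  containers:
  - name: worker
    image: registry.access.redhat.com/ubi9
    command: ["sleep", "3600"]

1 A hypothetical non-critical priority class that you have created separately.
2 Keeps the preemption gap short by limiting how long the pod can take to exit after it receives the termination signal.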
To minimize this gap, configure a small graceful termination period for lower-priority pods. 2.11.3. Configuring priority and preemption You apply pod priority and preemption by creating a priority class object and associating pods to the priority by using the priorityClassName in your pod specs. Note You cannot add a priority class directly to an existing scheduled pod. Procedure To configure your cluster to use priority and preemption: Create one or more priority classes: Create a YAML file similar to the following: apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: "This priority class should be used for XYZ service pods only." 5 1 The name of the priority class object. 2 The priority value of the object. 3 Optional. Specifies whether this priority class is preempting or non-preempting. The preemption policy defaults to PreemptLowerPriority , which allows pods of that priority class to preempt lower-priority pods. If the preemption policy is set to Never , pods in that priority class are non-preempting. 4 Optional. Specifies whether this priority class should be used for pods without a priority class name specified. This field is false by default. Only one priority class with globalDefault set to true can exist in the cluster. If there is no priority class with globalDefault:true , the priority of pods with no priority class name is zero. Adding a priority class with globalDefault:true affects only pods created after the priority class is added and does not change the priorities of existing pods. 5 Optional. Describes which pods developers should use with this priority class. Enter an arbitrary text string. Create the priority class: USD oc create -f <file-name>.yaml Create a pod spec to include the name of a priority class: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1 1 Specify the priority class to use with this pod. Create the pod: USD oc create -f <file-name>.yaml You can add the priority name directly to the pod configuration or to a pod template. 2.12. Placing pods on specific nodes using node selectors A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the indicated key-value pairs as the label on the node. If you are using node affinity and node selectors in the same pod configuration, see the important considerations below. 2.12.1. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. 
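When deciding where to add a label, it can help to see which machine, and therefore which compute machine set, backs a given node. The following command is a minimal sketch that assumes a hypothetical node name; it prints the machine.openshift.io/machine annotation that records the owning machine:

$ oc get node <node_name> -o yaml | grep machine.openshift.io/machine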
To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" # ... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" # ... 
Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.31.3 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. 2.13. Run Once Duration Override Operator 2.13.1. Run Once Duration Override Operator overview You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. 2.13.1.1. About the Run Once Duration Override Operator OpenShift Container Platform relies on run-once pods to perform tasks such as deploying a pod or performing a build. Run-once pods are pods that have a RestartPolicy of Never or OnFailure . Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that those run-once pods can be active. After the time limit expires, the cluster will try to actively terminate those pods. The main reason to have such a limit is to prevent tasks such as builds to run for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. Note You cannot install the Run Once Duration Override Operator on clusters managed by the HyperShift Operator. 2.13.2. Run Once Duration Override Operator release notes Cluster administrators can use the Run Once Duration Override Operator to force a limit on the time that run-once pods can be active. After the time limit expires, the cluster tries to terminate the run-once pods. The main reason to have such a limit is to prevent tasks such as builds to run for an excessive amount of time. To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. These release notes track the development of the Run Once Duration Override Operator for OpenShift Container Platform. For an overview of the Run Once Duration Override Operator, see About the Run Once Duration Override Operator . 2.13.2.1. Run Once Duration Override Operator 1.2.0 Issued: 16 October 2024 The following advisory is available for the Run Once Duration Override Operator 1.2.0: ( RHSA-2024:7548 ) 2.13.2.1.1. Bug fixes This release of the Run Once Duration Override Operator addresses several Common Vulnerabilities and Exposures (CVEs). 2.13.3. 
Overriding the active deadline for run-once pods You can use the Run Once Duration Override Operator to specify a maximum time limit that run-once pods can be active for. By enabling the run-once duration override on a namespace, all future run-once pods created or updated in that namespace have their activeDeadlineSeconds field set to the value specified by the Run Once Duration Override Operator. Note If both the run-once pod and the Run Once Duration Override Operator have their activeDeadlineSeconds value set, the lower of the two values is used. 2.13.3.1. Installing the Run Once Duration Override Operator You can use the web console to install the Run Once Duration Override Operator. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Run Once Duration Override Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-run-once-duration-override-operator in the Name field and click Create . Install the Run Once Duration Override Operator. Navigate to Operators OperatorHub . Enter Run Once Duration Override Operator into the filter box. Select the Run Once Duration Override Operator and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Run Once Duration Override Operator. Select A specific namespace on the cluster . Choose openshift-run-once-duration-override-operator from the dropdown menu under Installed namespace . Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Create a RunOnceDurationOverride instance. From the Operators Installed Operators page, click Run Once Duration Override Operator . Select the Run Once Duration Override tab and click Create RunOnceDurationOverride . Edit the settings as necessary. Under the runOnceDurationOverride section, you can update the spec.activeDeadlineSeconds value, if required. The predefined value is 3600 seconds, or 1 hour. Click Create . Verification Log in to the OpenShift CLI. Verify all pods are created and running properly. USD oc get pods -n openshift-run-once-duration-override-operator Example output NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s 2.13.3.2. Enabling the run-once duration override on a namespace To apply the run-once duration override from the Run Once Duration Override Operator to run-once pods, you must enable it on each applicable namespace. Prerequisites The Run Once Duration Override Operator is installed. Procedure Log in to the OpenShift CLI. Add the label to enable the run-once duration override to your namespace: USD oc label namespace <namespace> \ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true 1 Specify the namespace to enable the run-once duration override on. 
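If you manage namespaces declaratively, you can set the equivalent label in the namespace manifest instead of running the oc label command. The following is a minimal sketch that assumes a hypothetical namespace name:

apiVersion: v1
kind: Namespace
metadata:
  name: <namespace> 1
  labels:
    runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled: "true"

1 Replace <namespace> with the namespace to enable the run-once duration override on.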
After you enable the run-once duration override on this namespace, future run-once pods that are created in this namespace will have their activeDeadlineSeconds field set to the override value from the Run Once Duration Override Operator. Existing pods in this namespace will also have their activeDeadlineSeconds value set when they are updated . Verification Create a test run-once pod in the namespace that you enabled the run-once duration override on: apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done 1 Replace <namespace> with the name of your namespace. 2 The restartPolicy must be Never or OnFailure to be a run-once pod. Verify that the pod has its activeDeadlineSeconds field set: USD oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds Example output activeDeadlineSeconds: 3600 2.13.3.3. Updating the run-once active deadline override value You can customize the override value that the Run Once Duration Override Operator applies to run-once pods. The predefined value is 3600 seconds, or 1 hour. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift CLI. Edit the RunOnceDurationOverride resource: USD oc edit runoncedurationoverride cluster Update the activeDeadlineSeconds field: apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: # ... spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1 # ... 1 Set the activeDeadlineSeconds field to the desired value, in seconds. Save the file to apply the changes. Any future run-once pods created in namespaces where the run-once duration override is enabled will have their activeDeadlineSeconds field set to this new value. Existing run-once pods in these namespaces will receive this new value when they are updated. 2.13.4. Uninstalling the Run Once Duration Override Operator You can remove the Run Once Duration Override Operator from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 2.13.4.1. Uninstalling the Run Once Duration Override Operator You can use the web console to uninstall the Run Once Duration Override Operator. Uninstalling the Run Once Duration Override Operator does not unset the activeDeadlineSeconds field for run-once pods, but it will no longer apply the override value to future run-once pods. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select openshift-run-once-duration-override-operator from the Project dropdown list. Delete the RunOnceDurationOverride instance. Click Run Once Duration Override Operator and select the Run Once Duration Override tab. Click the Options menu to the cluster entry and select Delete RunOnceDurationOverride . In the confirmation dialog, click Delete . Uninstall the Run Once Duration Override Operator Operator. Navigate to Operators Installed Operators . 
Click the Options menu to the Run Once Duration Override Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 2.13.4.2. Uninstalling Run Once Duration Override Operator resources Optionally, after uninstalling the Run Once Duration Override Operator, you can remove its related resources from your cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have uninstalled the Run Once Duration Override Operator. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were created when the Run Once Duration Override Operator was installed: Navigate to Administration CustomResourceDefinitions . Enter RunOnceDurationOverride in the Name field to filter the CRDs. Click the Options menu to the RunOnceDurationOverride CRD and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . Delete the openshift-run-once-duration-override-operator namespace. Navigate to Administration Namespaces . Enter openshift-run-once-duration-override-operator into the filter box. Click the Options menu to the openshift-run-once-duration-override-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-run-once-duration-override-operator and click Delete . Remove the run-once duration override label from the namespaces that it was enabled on. Navigate to Administration Namespaces . Select your namespace. Click Edit to the Labels field. Remove the runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true label and click Save . 2.14. Running pods in Linux user namespaces Linux user namespaces allow administrators to isolate the container user and group identifiers (UIDs and GIDs) so that a container can have a different set of permissions in the user namespace than on the host system where it is running. This allows containers to run processes with full privileges inside the user namespace, but the processes can be unprivileged for operations on the host machine. By default, a container runs in the host system's root user namespace. Running a container in the host user namespace can be useful when the container needs a feature that is available only in that user namespace. However, it introduces security concerns, such as the possibility of container breakouts, in which a process inside a container breaks out onto the host where the process can access or modify files on the host or in other containers. Running containers in individual user namespaces can mitigate container breakouts and several other vulnerabilities that a compromised container can pose to other pods and the node itself. You can configure Linux user namespace use by setting the hostUsers parameter to false in the pod spec, as shown in the following procedure. Important Support for Linux user namespaces is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.14.1. 
Configuring Linux user namespace support Prerequisites You enabled the required Technology Preview features for your cluster by editing the FeatureGate CR named cluster : USD oc edit featuregate cluster Example FeatureGate CR apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1 1 Enables the required UserNamespacesSupport and ProcMountType features. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. The crun container runtime is present on the worker nodes. crun is currently the only OCI runtime packaged with OpenShift Container Platform that supports user namespaces. crun is active by default. apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 containerRuntimeConfig: defaultRuntime: crun 2 1 Specifies the machine config pool label. 2 Specifies the container runtime to deploy. Procedure Edit the default user ID (UID) and group ID (GID) range of the OpenShift Container Platform namespace where your pod is deployed by running the following command: USD oc edit ns/<namespace_name> Example namespace apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/description: "" openshift.io/display-name: "" openshift.io/requester: system:admin openshift.io/sa.scc.mcs: s0:c27,c24 openshift.io/sa.scc.supplemental-groups: 1000/10000 1 openshift.io/sa.scc.uid-range: 1000/10000 2 # ... name: userns # ... 1 Edit the default GID to match the value you specified in the pod spec. The range for a Linux user namespace must be lower than 65,535. The default is 1000000000/10000 . 2 Edit the default UID to match the value you specified in the pod spec. The range for a Linux user namespace must be lower than 65,535. The default is 1000000000/10000 . Note The range 1000/10000 means 10,000 values starting with ID 1000, so it specifies the range of IDs from 1000 to 10,999. Enable the use of Linux user namespaces by creating a pod configured to run with a restricted profile and with the hostUsers parameter set to false . Create a YAML file similar to the following: Example pod specification apiVersion: v1 kind: Pod metadata: name: userns-pod # ... spec: containers: - name: userns-container image: registry.access.redhat.com/ubi9 command: ["sleep", "1000"] securityContext: capabilities: drop: ["ALL"] allowPrivilegeEscalation: false 1 runAsNonRoot: true 2 seccompProfile: type: RuntimeDefault runAsUser: 1000 3 runAsGroup: 1000 4 hostUsers: false 5 # ... 1 Specifies that a pod cannot request privilege escalation. This is required for the restricted-v2 security context constraints (SCC). 2 Specifies that the container will run with a user with any UID other than 0. 3 Specifies the UID the container is run with. 4 Specifies which primary GID the containers is run with. 5 Requests that the pod is to be run in a user namespace. If true , the pod runs in the host user namespace. If false , the pod runs in a new user namespace that is created for the pod. 
The default is true . Create the pod by running the following command: Verification Check the pod user and group IDs being used in the pod container you created. The pod is inside the Linux user namespace. Start a shell session with the container in your pod: USD oc rsh -c <container_name> pod/<pod_name> Example command USD oc rsh -c userns-container_name pod/userns-pod Display the user and group IDs being used inside the container: sh-5.1USD id Example output uid=1000(1000) gid=1000(1000) groups=1000(1000) Display the user ID being used in the container user namespace: sh-5.1USD lsns -t user Example output NS TYPE NPROCS PID USER COMMAND 4026532447 user 3 1 1000 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1 1 The UID for the process is 1000 , the same as you set in the pod spec. Check the pod user ID being used on the node where the pod was created. The node is outside of the Linux user namespace. This user ID should be different from the UID being used in the container. Start a debug session for that node: USD oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9 Example command USD oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9 Set /host as the root directory within the debug shell: sh-5.1# chroot /host Display the user ID being used in the node user namespace: sh-5.1# lsns -t user Example command NS TYPE NPROCS PID USER COMMAND 4026531837 user 233 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 28 4026532447 user 1 4767 2908816384 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1 1 The UID for the process is 2908816384 , which is different from what you set in the pod spec. | [
"kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi",
"oc project <project-name>",
"oc get pods",
"oc get pods",
"NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>",
"oc adm top pods",
"oc adm top pods -n openshift-console",
"NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi",
"oc adm top pod --selector=''",
"oc adm top pod --selector='name=my-pod'",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }",
"oc create -f <file_or_dir_path>",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1",
"oc create -f pod-disruption-budget.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1",
"oc create -f <file-name>.yaml",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"horizontalpodautoscaler.autoscaling/hello-node autoscaled",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0",
"oc get deployment hello-node",
"NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config",
"type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60",
"behavior: scaleDown: stabilizationWindowSeconds: 300",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled",
"oc edit hpa hpa-resource-metrics-memory",
"apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4",
"oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11",
"oc create -f <file-name>.yaml",
"oc get hpa cpu-autoscale",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler",
"Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max",
"apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max",
"oc create -f <file-name>.yaml",
"oc create -f hpa.yaml",
"horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created",
"oc get hpa hpa-resource-metrics-memory",
"NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m",
"oc describe hpa hpa-resource-metrics-memory",
"Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group \"apps\"",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API",
"Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"oc describe hpa <pod-name>",
"oc describe hpa cm-test",
"Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range",
"oc get all -n openshift-vertical-pod-autoscaler",
"NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>",
"oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"",
"oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 3",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"",
"oc get pods -n openshift-vertical-pod-autoscaler -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>",
"resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi",
"resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3",
"oc get vpa <vpa-name> --output yaml",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"",
"spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi",
"apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15",
"apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 resources: requests: cpu: 80m memory: 350M",
"apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 resources: requests: cpu: 40m memory: 150Mi",
"apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true resources: requests: cpu: 75m memory: 275Mi",
"apiVersion: v1 1 kind: ServiceAccount metadata: name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>",
"apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true",
"oc get pods",
"NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender",
"oc create -f <file-name>.yaml",
"oc get vpa <vpa-name> --output yaml",
"status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod",
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1",
"apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"",
"oc delete namespace openshift-vertical-pod-autoscaler",
"oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io",
"oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io",
"oc delete crd verticalpodautoscalers.autoscaling.k8s.io",
"oc delete MutatingWebhookConfiguration vpa-webhook-config",
"oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB",
"apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com",
"oc create sa <service_account_name> -n <your_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3",
"oc apply -f service-account-token-secret.yaml",
"oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1",
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA",
"curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2",
"apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1",
"kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f <file-name>.yaml",
"oc get secrets",
"NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m",
"oc describe secret my-cert",
"Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes",
"apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testSecret",
"oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"oc apply -f aws-provider.yaml",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"",
"oc create -f secret-provider-class-aws.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3",
"oc create -f deployment.yaml",
"oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testParameter",
"oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret",
"<secret_value>",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers",
"oc apply -f azure-provider.yaml",
"SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"",
"SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"",
"oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}",
"oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"",
"oc create -f secret-provider-class-azure.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4",
"oc create -f deployment.yaml",
"oc exec my-azure-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"secret1",
"oc exec my-azure-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1",
"my-secret-value",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-gcp-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-gcp-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-gcp-role rules: - apiGroups: - \"\" resources: - serviceaccounts/token verbs: - create - apiGroups: - \"\" resources: - serviceaccounts verbs: - get --- apiVersion: apps/v1 kind: DaemonSet metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers labels: app: csi-secrets-store-provider-gcp spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-gcp template: metadata: labels: app: csi-secrets-store-provider-gcp spec: serviceAccountName: csi-secrets-store-provider-gcp initContainers: - name: chown-provider-mount image: busybox command: - chown - \"1000:1000\" - /etc/kubernetes/secrets-store-csi-providers volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol securityContext: privileged: true hostNetwork: false hostPID: false hostIPC: false containers: - name: provider image: us-docker.pkg.dev/secretmanager-csi/secrets-store-csi-driver-provider-gcp/plugin@sha256:a493a78bbb4ebce5f5de15acdccc6f4d19486eae9aa4fa529bb60ac112dd6650 securityContext: privileged: true imagePullPolicy: IfNotPresent resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi env: - name: TARGET_DIR value: \"/etc/kubernetes/secrets-store-csi-providers\" volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol mountPropagation: None readOnly: false livenessProbe: failureThreshold: 3 httpGet: path: /live port: 8095 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 volumes: - name: providervol hostPath: path: /etc/kubernetes/secrets-store-csi-providers tolerations: - key: kubernetes.io/arch operator: Equal value: amd64 effect: NoSchedule nodeSelector: kubernetes.io/os: linux",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-gcp -n openshift-cluster-csi-drivers",
"oc apply -f gcp-provider.yaml",
"oc new-project my-namespace",
"oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite",
"oc create serviceaccount my-service-account --namespace=my-namespace",
"oc create secret generic secrets-store-creds -n my-namespace --from-file=key.json 1",
"oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-gcp-provider 1 namespace: my-namespace 2 spec: provider: gcp 3 parameters: 4 secrets: | - resourceName: \"projects/my-project/secrets/testsecret1/versions/1\" path: \"testsecret1.txt\"",
"oc create -f secret-provider-class-gcp.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-gcp-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: my-service-account 3 containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-gcp-provider\" 4 nodePublishSecretRef: name: secrets-store-creds 5",
"oc create -f deployment.yaml",
"oc exec my-gcp-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testsecret1",
"oc exec my-gcp-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testsecret1",
"<secret_value>",
"helm repo add hashicorp https://helm.releases.hashicorp.com",
"helm repo update",
"oc new-project vault",
"oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite",
"oc adm policy add-scc-to-user privileged -z vault -n vault",
"oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault",
"helm install vault hashicorp/vault --namespace=vault --set \"server.dev.enabled=true\" --set \"injector.enabled=false\" --set \"csi.enabled=true\" --set \"global.openshift=true\" --set \"injector.agentImage.repository=docker.io/hashicorp/vault\" --set \"server.image.repository=docker.io/hashicorp/vault\" --set \"csi.image.repository=docker.io/hashicorp/vault-csi-provider\" --set \"csi.agent.image.repository=docker.io/hashicorp/vault\" --set \"csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers\"",
"oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/securityContext\", \"value\": {\"privileged\": true} }]'",
"oc get pods -n vault",
"NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s",
"oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value",
"oc exec vault-0 --namespace=vault -- vault kv get secret/example1",
"= Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value",
"oc exec vault-0 --namespace=vault -- vault auth enable kubernetes",
"Success! Enabled kubernetes auth method at: kubernetes/",
"TOKEN_REVIEWER_JWT=\"USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"",
"KUBERNETES_SERVICE_IP=\"USD(oc get svc kubernetes --namespace=default -o go-template=\"{{ .spec.clusterIP }}\")\"",
"oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config issuer=\"https://kubernetes.default.svc.cluster.local\" token_reviewer_jwt=\"USD{TOKEN_REVIEWER_JWT}\" kubernetes_host=\"https://USD{KUBERNETES_SERVICE_IP}:443\" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
"Success! Data written to: auth/kubernetes/config",
"oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF",
"Success! Uploaded policy: csi",
"oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi bound_service_account_names=default bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace policies=csi ttl=20m",
"Success! Data written to: auth/kubernetes/role/csi",
"oc get pods -n vault",
"NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m",
"oc get pods -n openshift-cluster-csi-drivers | grep -E \"secrets\"",
"secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: \"csi\" vaultAddress: \"http://vault.vault:8200\" objects: | - secretPath: \"secret/data/example1\" objectName: \"testSecret1\" secretKey: \"testSecret1",
"oc create -f secret-provider-class-vault.yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-vault-provider\" 3",
"oc create -f deployment.yaml",
"oc exec busybox-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/",
"testSecret1",
"oc exec busybox-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1",
"my-secret-value",
"oc edit secretproviderclass my-azure-provider 1",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"",
"oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1",
"status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount",
"oc create serviceaccount <service_account_name>",
"oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/workload-identity-provider\": \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\"}}}'",
"oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/service-account-email\": \"<service_account_email>\"}}}'",
"oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/injection-mode\": \"direct\"}}}'",
"gcloud projects add-iam-policy-binding <project_id> --member \"<service_account_email>\" --role \"projects/<project_id>/roles/<role_for_workload_permissions>\"",
"oc get serviceaccount <service_account_name>",
"apiVersion: v1 kind: ServiceAccount metadata: name: app-x namespace: service-a annotations: cloud.google.com/workload-identity-provider: \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\" 1 cloud.google.com/service-account-email: \"[email protected]\" cloud.google.com/audience: \"sts.googleapis.com\" 2 cloud.google.com/token-expiration: \"86400\" 3 cloud.google.com/gcloud-run-as-user: \"1000\" cloud.google.com/injection-mode: \"direct\" 4",
"apiVersion: apps/v1 kind: Deployment metadata: name: ubi9 spec: replicas: 1 selector: matchLabels: app: ubi9 template: metadata: labels: app: ubi9 spec: serviceAccountName: \"<service_account_name>\" 1 containers: - name: ubi image: 'registry.access.redhat.com/ubi9/ubi-micro:latest' command: - /bin/sh - '-c' - | sleep infinity",
"oc apply -f deployment.yaml",
"oc get pods -o json | jq -r '.items[0].spec.containers[0].env[] | select(.name==\"GOOGLE_APPLICATION_CREDENTIALS\")'",
"{ \"name\": \"GOOGLE_APPLICATION_CREDENTIALS\", \"value\": \"/var/run/secrets/workload-identity/federation.json\" }",
"apiVersion: v1 kind: Pod metadata: name: app-x-pod namespace: service-a annotations: cloud.google.com/skip-containers: \"init-first,sidecar\" cloud.google.com/external-credentials-json: |- 1 { \"type\": \"external_account\", \"audience\": \"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/on-prem-kubernetes/providers/<identity_provider>\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken\", \"credential_source\": { \"file\": \"/var/run/secrets/sts.googleapis.com/serviceaccount/token\", \"format\": { \"type\": \"text\" } } } spec: serviceAccountName: app-x initContainers: - name: init-first image: container-image:version containers: - name: sidecar image: container-image:version - name: container-name image: container-image:version env: 2 - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/run/secrets/gcloud/config/federation.json - name: CLOUDSDK_COMPUTE_REGION value: asia-northeast1 volumeMounts: - name: gcp-iam-token readOnly: true mountPath: /var/run/secrets/sts.googleapis.com/serviceaccount - mountPath: /var/run/secrets/gcloud/config name: external-credential-config readOnly: true volumes: - name: gcp-iam-token projected: sources: - serviceAccountToken: audience: sts.googleapis.com expirationSeconds: 86400 path: token - downwardAPI: defaultMode: 288 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['cloud.google.com/external-credentials-json'] path: federation.json name: external-credential-config",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"oc create configmap <configmap_name> [options]",
"oc create configmap game-config --from-file=example-files/",
"oc describe configmaps game-config",
"Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config --from-file=example-files/",
"oc get configmaps game-config -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"oc get configmaps game-config-2 -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985",
"oc get configmaps game-config-3 -o yaml",
"apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985",
"oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm",
"oc get configmaps special-config -o yaml",
"apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"oc get priorityclasses",
"NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1",
"oc create -f <file-name>.yaml",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.31.3",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc get pods -n openshift-run-once-duration-override-operator",
"NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s",
"oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true",
"apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done",
"oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds",
"activeDeadlineSeconds: 3600",
"oc edit runoncedurationoverride cluster",
"apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 containerRuntimeConfig: defaultRuntime: crun 2",
"oc edit ns/<namespace_name>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: system:admin openshift.io/sa.scc.mcs: s0:c27,c24 openshift.io/sa.scc.supplemental-groups: 1000/10000 1 openshift.io/sa.scc.uid-range: 1000/10000 2 name: userns",
"apiVersion: v1 kind: Pod metadata: name: userns-pod spec: containers: - name: userns-container image: registry.access.redhat.com/ubi9 command: [\"sleep\", \"1000\"] securityContext: capabilities: drop: [\"ALL\"] allowPrivilegeEscalation: false 1 runAsNonRoot: true 2 seccompProfile: type: RuntimeDefault runAsUser: 1000 3 runAsGroup: 1000 4 hostUsers: false 5",
"oc create -f <file_name>.yaml",
"oc rsh -c <container_name> pod/<pod_name>",
"oc rsh -c userns-container_name pod/userns-pod",
"sh-5.1USD id",
"uid=1000(1000) gid=1000(1000) groups=1000(1000)",
"sh-5.1USD lsns -t user",
"NS TYPE NPROCS PID USER COMMAND 4026532447 user 3 1 1000 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1",
"oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9",
"oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9",
"sh-5.1# chroot /host",
"sh-5.1# lsns -t user",
"NS TYPE NPROCS PID USER COMMAND 4026531837 user 233 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 28 4026532447 user 1 4767 2908816384 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/nodes/working-with-pods |
Chapter 1. Installing OpenShift Pipelines | Chapter 1. Installing OpenShift Pipelines This guide walks cluster administrators through the process of installing the Red Hat OpenShift Pipelines Operator to an OpenShift Container Platform cluster. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed oc CLI. You have installed OpenShift Pipelines ( tkn ) CLI on your local system. Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually. Note In a cluster with both Windows and Linux nodes, Red Hat OpenShift Pipelines can run on only Linux nodes. 1.1. Installing the Red Hat OpenShift Pipelines Operator in web console You can install Red Hat OpenShift Pipelines using the Operator listed in the OpenShift Container Platform OperatorHub. When you install the Red Hat OpenShift Pipelines Operator, the custom resources (CRs) required for the pipelines configuration are automatically installed along with the Operator. The default Operator custom resource definition (CRD) config.operator.tekton.dev is now replaced by tektonconfigs.operator.tekton.dev . In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components: tektonpipelines.operator.tekton.dev , tektontriggers.operator.tekton.dev and tektonaddons.operator.tekton.dev . If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of config.operator.tekton.dev on your cluster with an instance of tektonconfigs.operator.tekton.dev and additional objects of the other CRDs as necessary. Warning If you manually changed your existing installation, such as, changing the target namespace in the config.operator.tekton.dev CRD instance by making changes to the resource name - cluster field, then the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your installation and reinstall the Red Hat OpenShift Pipelines Operator. The Red Hat OpenShift Pipelines Operator now provides the option to choose the components that you want to install by specifying profiles as part of the TektonConfig custom resource (CR). The TektonConfig CR is automatically installed when the Operator is installed. The supported profiles are: Lite: This profile installs only Tekton Pipelines. Basic: This profile installs Tekton Pipelines, Tekton Triggers, Tekton Chains, and Tekton Results. All: This is the default profile used when the TektonConfig CR is installed. This profile installs all of the Tekton components, including Tekton Pipelines, Tekton Triggers, Tekton Chains, Tekton Results, Pipelines as Code, and Tekton Addons. Tekton Addons includes the ClusterTriggerBindings , ConsoleCLIDownload , ConsoleQuickStart , and ConsoleYAMLSample resources, as well as the tasks and step action definitions available by using the cluster resolver from the openshift-pipelines namespace. Procedure In the Administrator perspective of the web console, navigate to Operators OperatorHub . Use the Filter by keyword box to search for Red Hat OpenShift Pipelines Operator in the catalog. Click the Red Hat OpenShift Pipelines Operator tile. Read the brief description about the Operator on the Red Hat OpenShift Pipelines Operator page. Click Install . On the Install Operator page: Select All namespaces on the cluster (default) for the Installation Mode . 
This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be made available to all namespaces in the cluster. Select Automatic for the Approval Strategy . This ensures that the future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version. Select an Update Channel . The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. Currently, it is the default channel for installing the Red Hat OpenShift Pipelines Operator. To install a specific version of the Red Hat OpenShift Pipelines Operator, cluster administrators can use the corresponding pipelines-<version> channel. For example, to install the Red Hat OpenShift Pipelines Operator version 1.8.x , you can use the pipelines-1.8 channel. Note Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are not available. However, in OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator. Click Install . You will see the Operator listed on the Installed Operators page. Note The Operator is installed automatically into the openshift-operators namespace. Verify that the Status is set to Succeeded Up to date to confirm successful installation of Red Hat OpenShift Pipelines Operator. Warning The success status may show as Succeeded Up to date even if installation of other components is in-progress. Therefore, it is important to verify the installation manually in the terminal. Verify that all components of the Red Hat OpenShift Pipelines Operator were installed successfully. Login to the cluster on the terminal, and run the following command: USD oc get tektonconfig config Example output NAME VERSION READY REASON config 1.18.0 True If the READY condition is True , the Operator and its components have been installed successfully. Additonally, check the components' versions by running the following command: USD oc get tektonpipeline,tektontrigger,tektonchain,tektonaddon,pac Example output 1.2. Installing the OpenShift Pipelines Operator by using the CLI You can install Red Hat OpenShift Pipelines Operator from OperatorHub by using the command-line interface (CLI). Procedure Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift Pipelines Operator, for example, sub.yaml : Example Subscription YAML apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-pipelines-operator namespace: openshift-operators spec: channel: <channel_name> 1 name: openshift-pipelines-operator-rh 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4 1 Name of the channel that you want to subscribe. The pipelines-<version> channel is the default channel. For example, the default channel for Red Hat OpenShift Pipelines Operator version 1.7 is pipelines-1.7 . The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. 2 Name of the Operator to subscribe to. 3 Name of the CatalogSource object that provides the Operator. 4 Namespace of the CatalogSource object. 
Use openshift-marketplace for the default OperatorHub catalog sources. Create the Subscription object by running the following command: USD oc apply -f sub.yaml The subscription installs the Red Hat OpenShift Pipelines Operator into the openshift-operators namespace. The Operator automatically installs OpenShift Pipelines into the default openshift-pipelines target namespace. 1.3. Red Hat OpenShift Pipelines Operator in a restricted environment The Red Hat OpenShift Pipelines Operator enables support for installation of pipelines in a restricted network environment. The Operator installs a proxy webhook that sets the proxy environment variables in the containers of the pod created by tekton-controllers based on the cluster proxy object. It also sets the proxy environment variables in the TektonPipelines , TektonTriggers , Controllers , Webhooks , and Operator Proxy Webhook resources. By default, the proxy webhook is disabled for the openshift-pipelines namespace. To disable it for any other namespace, you can add the operator.tekton.dev/disable-proxy: true label to the namespace object. 1.4. Additional resources You can learn more about installing Operators on OpenShift Container Platform in the adding Operators to a cluster section. To configure Tekton Chains, see Using Tekton Chains for Red Hat OpenShift Pipelines supply chain security . To configure Tekton Results, see Using Tekton Results for OpenShift Pipelines observability . To install and deploy in-cluster Tekton Hub, see Using Tekton Hub with Red Hat OpenShift Pipelines . For more information on using OpenShift Pipelines in a restricted environment, see: Mirroring images to run pipelines in a restricted environment Configuring Samples Operator for a restricted cluster About disconnected installation mirroring | [
"oc get tektonconfig config",
"NAME VERSION READY REASON config 1.18.0 True",
"oc get tektonpipeline,tektontrigger,tektonchain,tektonaddon,pac",
"NAME VERSION READY REASON tektonpipeline.operator.tekton.dev/pipeline v0.47.0 True NAME VERSION READY REASON tektontrigger.operator.tekton.dev/trigger v0.23.1 True NAME VERSION READY REASON tektonchain.operator.tekton.dev/chain v0.16.0 True NAME VERSION READY REASON tektonaddon.operator.tekton.dev/addon 1.11.0 True NAME VERSION READY REASON openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code v0.19.0 True",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-pipelines-operator namespace: openshift-operators spec: channel: <channel_name> 1 name: openshift-pipelines-operator-rh 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f sub.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/installing_and_configuring/installing-pipelines |
4.3. Extending the rh-ruby23 Software Collection | 4.3. Extending the rh-ruby23 Software Collection In Red Hat Software Collections 3.8, it is possible to extend the rh-ruby23 Software Collection by adding dependent packages. The Ruby on Rails 4.2 (rh-ror42) Software Collection, which is built on top of Ruby 2.3 provided by the rh-ruby23 Software Collection, is one example of such an extension. This section provides detailed information about the rh-ror42 metapackage and the rh-ror42-rubygem-bcrypt package, which are both part of the rh-ror42 Software Collection. 4.3.1. The rh-ror42 Software Collection This section contains a commented example of the Ruby on Rails 4.2 metapackage for the rh-ror42 Software Collection. The rh-ror42 Software Collection depends on the rh-ruby23 Software Collection. Note the following in the rh-ror42 Software Collection metapackage example: The rh-ror42 Software Collection spec file has the following build dependencies set: BuildRequires: %{scl_prefix_ruby}scldevel BuildRequires: %{scl_prefix_ruby}rubygems-devel This expands to, for example, rh-ruby23-scldevel and rh-ruby23-rubygems-devel . The rh-ruby23-scldevel subpackage contains two important macros, %scl_ruby and %scl_prefix_ruby . The rh-ruby23-scldevel subpackage should be available in the build root. In case there are multiple Ruby Software Collections available, rh-ruby23-scldevel determines which of the available Software Collections should be used. Note that the %scl_ruby and %scl_prefix_ruby macros are also defined at the top of the spec file. Although the definitions are not required, they provide a visual hint that the rh-ror42 Software Collection has been designed to be built on top of the rh-ruby23 Software Collection. They also serve as a fallback value. The rh-ror42-runtime subpackage must depend on the runtime subpackage of the Software Collection it depends on. This dependency is specified as follows: %package runtime Requires: %{scl_prefix_ruby}runtime When the package is built against the rh-ruby23 Software Collection, this expands to rh-ruby23-runtime . The rh-ror42-build subpackage must depend on the scldevel subpackage of the Software Collection it depends on. This is to ensure that all other packages of this Software Collection will have the same macros defined, thus it is built against the same Ruby version. %package build Requires: %{scl_prefix_ruby}scldevel In the case of the rh-ruby23 Software Collection, this expands to rh-ruby23-scldevel . The enable scriptlet for the rh-ror42 Software Collection contains the following line: . scl_source enable %{scl_ruby} Note the dot at the beginning of the line. This line makes the Ruby Software Collection start implicitly when the rh-ror42 Software Collection is started so that the user can only type scl enable rh-ror42 command instead of scl enable rh-ruby23 rh-ror42 command to run command in the Software Collection environment. The rh-ror42-scldevel subpackage is provided so that it is available in case you need it to build a Software Collection which extends the rh-ror42 Software Collection. The package provides the %{scl_ror} and %{scl_prefix_ror} macros, which can be used to extend the rh-ror42 Software Collection. Because the rh-ror42 Software Collection's gems are installed in a separate root directory structure, you need to ensure that the correct ownership for the rubygems directories is set. This is done by using a snippet to generate a file list rubygems_filesystem.list . 
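As the note about the enable scriptlet above implies, sourcing the Ruby collection from the rh-ror42 enable script means a single scl invocation is enough to bring both collections into the environment. The following one-liner is only an illustrative sketch, assuming both rh-ror42 and rh-ruby23 are installed; it is not part of the packaging example itself.
scl enable rh-ror42 -- ruby -e 'puts RUBY_VERSION'    # expected to print the rh-ruby23 interpreter version, for example 2.3.x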
You are advised to set the runtime package to own all directories which would, if located in the root file system, be owned by another package. One example of such directories in the case of the rh-ror42 Software Collection is the Rubygem directory structure. %global scl_name_prefix rh- %global scl_name_base ror %global scl_name_version 41 %global scl %{scl_name_prefix}%{scl_name_base}%{scl_name_version} # Fallback to rh-ruby23. rh-ruby23-scldevel is unlikely to be available in # the build root. %{!?scl_ruby:%global scl_ruby rh-ruby23} %{!?scl_prefix_ruby:%global scl_prefix_ruby %{scl_ruby}-} # Do not produce empty debuginfo package. %global debug_package %{nil} # Support SCL over NFS. %global nfsmountable 1 %{!?install_scl: %global install_scl 1} %scl_package %scl Summary: Package that installs %scl Name: %scl_name Version: 2.0 Release: 5%{?dist} License: GPLv2+ %if 0%{?install_scl} Requires: %{scl_prefix}rubygem-therubyracer Requires: %{scl_prefix}rubygem-sqlite3 Requires: %{scl_prefix}rubygem-rails Requires: %{scl_prefix}rubygem-sass-rails Requires: %{scl_prefix}rubygem-coffee-rails Requires: %{scl_prefix}rubygem-jquery-rails Requires: %{scl_prefix}rubygem-sdoc Requires: %{scl_prefix}rubygem-turbolinks Requires: %{scl_prefix}rubygem-bcrypt Requires: %{scl_prefix}rubygem-uglifier Requires: %{scl_prefix}rubygem-jbuilder Requires: %{scl_prefix}rubygem-spring %endif BuildRequires: help2man BuildRequires: scl-utils-build BuildRequires: %{scl_prefix_ruby}scldevel BuildRequires: %{scl_prefix_ruby}rubygems-devel %description This is the main package for %scl Software Collection. %package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils # The enable scriptlet depends on the ruby executable. Requires: %{scl_prefix_ruby}ruby %description runtime Package shipping essential scripts to work with %scl Software Collection. %package build Summary: Package shipping basic build configuration Requires: scl-utils-build Requires: %{scl_runtime} Requires: %{scl_prefix_ruby}scldevel %description build Package shipping essential configuration macros to build %scl Software Collection. %package scldevel Summary: Package shipping development files for %scl Provides: scldevel(%{scl_name_base}) %description scldevel Package shipping development files, especially usefull for development of packages depending on %scl Software Collection. %prep %setup -c -T %install %scl_install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PATH="%{_bindir}:%{_sbindir}\USD{PATH:+:\USD{PATH}}" export LD_LIBRARY_PATH="%{_libdir}\USD{LD_LIBRARY_PATH:+:\USD{LD_LIBRARY_PATH}}" export MANPATH="%{_mandir}:\USD{MANPATH:-}" export PKG_CONFIG_PATH="%{_libdir}/pkgconfig\USD{PKG_CONFIG_PATH:+:\USD{PKG_CONFIG_PATH}}" export GEM_PATH="\USD{GEM_PATH:=%{gem_dir}:\`scl enable %{scl_ruby} -- ruby -e "print Gem.path.join(':')"\`}" . scl_source enable %{scl_ruby} EOF cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF scl enable %{scl_ruby} - << \EOF set -e # Fake rh-ror42 Software Collection environment. GEM_PATH=%{gem_dir}:`ruby -e "print Gem.path.join(':')"` \ X_SCLS=%{scl} \ ruby -rfileutils > rubygems_filesystem.list << \EOR # Create the RubyGems file system. Gem.ensure_gem_subdirectories '%{buildroot}%{gem_dir}' FileUtils.mkdir_p File.join '%{buildroot}', Gem.default_ext_dir_for('%{gem_dir}') # Output the relevant directories. 
Gem.default_dirs['%{scl}_system'.to_sym].each { |k, p| puts p } EOR EOF %files %files runtime -f rubygems_filesystem.list %scl_files %files build %{_root_sysconfdir}/rpm/macros.%{scl}-config %files scldevel %{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel %changelog * Thu Jan 16 2015 John Doe <[email protected]> - 1-1 - Initial package. 4.3.2. The rh-ror42-rubygem-bcrypt Package Below is a commented example of the rh-ror42-rubygem-bcrypt package spec file. This package provides the bcrypt Ruby gem. For more information on bcrypt, see the following website: http://rubygems.org/gems/bcrypt-ruby Note that the only significant difference between the rh-ror42-rubygem-bcrypt package spec file and a normal Software Collection package spec file is the following: The BuildRequires tags are prefixed with %{?scl_prefix_ruby} instead of %{scl_prefix} . %{?scl:%scl_package rubygem-%{gem_name}} %{!?scl:%global pkg_name %{name}} %global gem_name bcrypt Summary: Wrapper around bcrypt() password hashing algorithm Name: %{?scl_prefix}rubygem-%{gem_name} Version: 3.1.9 Release: 2%{?dist} Group: Development/Languages # ext/* - Public Domain # spec/TestBCrypt.java - ISC License: MIT and Public Domain and ISC URL: https://github.com/codahale/bcrypt-ruby Source0: http://rubygems.org/downloads/%{gem_name}-%{version}.gem Requires: %{?scl_prefix_ruby}ruby(release) Requires: %{?scl_prefix_ruby}ruby(rubygems) BuildRequires: %{?scl_prefix_ruby}rubygems-devel BuildRequires: %{?scl_prefix_ruby}ruby-devel BuildRequires: %{?scl_prefix}rubygem(rspec) Provides: %{?scl_prefix}rubygem(bcrypt) = %{version} %description bcrypt() is a sophisticated and secure hash algorithm designed by The OpenBSD project for hashing passwords. bcrypt provides a simple, humane wrapper for safely handling passwords. %package doc Summary: Documentation for %{pkg_name} Group: Documentation Requires: %{?scl_prefix}%{pkg_name} = %{version}-%{release} %description doc Documentation for %{pkg_name}. %prep %setup -n %{pkg_name}-%{version} -q -c -T %{?scl:scl enable %{scl} - << \EOF} %gem_install -n %{SOURCE0} %{?scl:EOF} %build %install mkdir -p %{buildroot}%{gem_dir} cp -pa .%{gem_dir}/* \ %{buildroot}%{gem_dir}/ mkdir -p %{buildroot}%{gem_extdir_mri} cp -pa .%{gem_extdir_mri}/* %{buildroot}%{gem_extdir_mri}/ # Prevent a symlink with an invalid target in -debuginfo (BZ#878863). rm -rf %{buildroot}%{gem_instdir}/ext/ %check %{?scl:scl enable %{scl} - << \EOF} pushd .%{gem_instdir} # 2 failutes due to old RSpec # https://github.com/rspec/rspec-expectations/pull/284 rspec -IUSD(dirs +1)%{gem_extdir_mri} spec |grep '34 examples, 2 failures' || exit 1 popd %{?scl:EOF} %files %dir %{gem_instdir} %exclude %{gem_instdir}/.* %{gem_libdir} %{gem_extdir_mri} %exclude %{gem_cache} %{gem_spec} %doc %{gem_instdir}/COPYING %files doc %doc %{gem_docdir} %doc %{gem_instdir}/README.md %doc %{gem_instdir}/CHANGELOG %{gem_instdir}/Rakefile %{gem_instdir}/Gemfile* %{gem_instdir}/%{gem_name}.gemspec %{gem_instdir}/spec %changelog * Fri Mar 21 2015 John Doe <[email protected]> - 3.1.2-4 - Initial package. 4.3.3. Building the rh-ror42 Software Collection To build the rh-ror42 Software Collection: Install the rh-ruby23-scldevel subpackage which is a part of the rh-ruby23 Software Collection. Build rh-ror42.spec and install the ror42-runtime and ror42-build packages. Build rubygem-bcrypt.spec . 4.3.4. Testing the rh-ror42 Software Collection To test the rh-ror42 Software Collection: Install the rh-ror42-rubygem-bcrypt package. 
Run the following command: Verify that the output contains the following line: | [
"BuildRequires: %{scl_prefix_ruby}scldevel BuildRequires: %{scl_prefix_ruby}rubygems-devel",
"%package runtime Requires: %{scl_prefix_ruby}runtime",
"%package build Requires: %{scl_prefix_ruby}scldevel",
". scl_source enable %{scl_ruby}",
"%global scl_name_prefix rh- %global scl_name_base ror %global scl_name_version 41 %global scl %{scl_name_prefix}%{scl_name_base}%{scl_name_version} Fallback to rh-ruby23. rh-ruby23-scldevel is unlikely to be available in the build root. %{!?scl_ruby:%global scl_ruby rh-ruby23} %{!?scl_prefix_ruby:%global scl_prefix_ruby %{scl_ruby}-} Do not produce empty debuginfo package. %global debug_package %{nil} Support SCL over NFS. %global nfsmountable 1 %{!?install_scl: %global install_scl 1} %scl_package %scl Summary: Package that installs %scl Name: %scl_name Version: 2.0 Release: 5%{?dist} License: GPLv2+ %if 0%{?install_scl} Requires: %{scl_prefix}rubygem-therubyracer Requires: %{scl_prefix}rubygem-sqlite3 Requires: %{scl_prefix}rubygem-rails Requires: %{scl_prefix}rubygem-sass-rails Requires: %{scl_prefix}rubygem-coffee-rails Requires: %{scl_prefix}rubygem-jquery-rails Requires: %{scl_prefix}rubygem-sdoc Requires: %{scl_prefix}rubygem-turbolinks Requires: %{scl_prefix}rubygem-bcrypt Requires: %{scl_prefix}rubygem-uglifier Requires: %{scl_prefix}rubygem-jbuilder Requires: %{scl_prefix}rubygem-spring %endif BuildRequires: help2man BuildRequires: scl-utils-build BuildRequires: %{scl_prefix_ruby}scldevel BuildRequires: %{scl_prefix_ruby}rubygems-devel %description This is the main package for %scl Software Collection. %package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils The enable scriptlet depends on the ruby executable. Requires: %{scl_prefix_ruby}ruby %description runtime Package shipping essential scripts to work with %scl Software Collection. %package build Summary: Package shipping basic build configuration Requires: scl-utils-build Requires: %{scl_runtime} Requires: %{scl_prefix_ruby}scldevel %description build Package shipping essential configuration macros to build %scl Software Collection. %package scldevel Summary: Package shipping development files for %scl Provides: scldevel(%{scl_name_base}) %description scldevel Package shipping development files, especially usefull for development of packages depending on %scl Software Collection. %prep %setup -c -T %install %scl_install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export PATH=\"%{_bindir}:%{_sbindir}\\USD{PATH:+:\\USD{PATH}}\" export LD_LIBRARY_PATH=\"%{_libdir}\\USD{LD_LIBRARY_PATH:+:\\USD{LD_LIBRARY_PATH}}\" export MANPATH=\"%{_mandir}:\\USD{MANPATH:-}\" export PKG_CONFIG_PATH=\"%{_libdir}/pkgconfig\\USD{PKG_CONFIG_PATH:+:\\USD{PKG_CONFIG_PATH}}\" export GEM_PATH=\"\\USD{GEM_PATH:=%{gem_dir}:\\`scl enable %{scl_ruby} -- ruby -e \"print Gem.path.join(':')\"\\`}\" . scl_source enable %{scl_ruby} EOF cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF scl enable %{scl_ruby} - << \\EOF set -e Fake rh-ror42 Software Collection environment. GEM_PATH=%{gem_dir}:`ruby -e \"print Gem.path.join(':')\"` X_SCLS=%{scl} ruby -rfileutils > rubygems_filesystem.list << \\EOR # Create the RubyGems file system. Gem.ensure_gem_subdirectories '%{buildroot}%{gem_dir}' FileUtils.mkdir_p File.join '%{buildroot}', Gem.default_ext_dir_for('%{gem_dir}') # Output the relevant directories. 
Gem.default_dirs['%{scl}_system'.to_sym].each { |k, p| puts p } EOR EOF %files %files runtime -f rubygems_filesystem.list %scl_files %files build %{_root_sysconfdir}/rpm/macros.%{scl}-config %files scldevel %{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel %changelog * Thu Jan 16 2015 John Doe <[email protected]> - 1-1 - Initial package.",
"%{?scl:%scl_package rubygem-%{gem_name}} %{!?scl:%global pkg_name %{name}} %global gem_name bcrypt Summary: Wrapper around bcrypt() password hashing algorithm Name: %{?scl_prefix}rubygem-%{gem_name} Version: 3.1.9 Release: 2%{?dist} Group: Development/Languages ext/* - Public Domain spec/TestBCrypt.java - ISC License: MIT and Public Domain and ISC URL: https://github.com/codahale/bcrypt-ruby Source0: http://rubygems.org/downloads/%{gem_name}-%{version}.gem Requires: %{?scl_prefix_ruby}ruby(release) Requires: %{?scl_prefix_ruby}ruby(rubygems) BuildRequires: %{?scl_prefix_ruby}rubygems-devel BuildRequires: %{?scl_prefix_ruby}ruby-devel BuildRequires: %{?scl_prefix}rubygem(rspec) Provides: %{?scl_prefix}rubygem(bcrypt) = %{version} %description bcrypt() is a sophisticated and secure hash algorithm designed by The OpenBSD project for hashing passwords. bcrypt provides a simple, humane wrapper for safely handling passwords. %package doc Summary: Documentation for %{pkg_name} Group: Documentation Requires: %{?scl_prefix}%{pkg_name} = %{version}-%{release} %description doc Documentation for %{pkg_name}. %prep %setup -n %{pkg_name}-%{version} -q -c -T %{?scl:scl enable %{scl} - << \\EOF} %gem_install -n %{SOURCE0} %{?scl:EOF} %build %install mkdir -p %{buildroot}%{gem_dir} cp -pa .%{gem_dir}/* %{buildroot}%{gem_dir}/ mkdir -p %{buildroot}%{gem_extdir_mri} cp -pa .%{gem_extdir_mri}/* %{buildroot}%{gem_extdir_mri}/ Prevent a symlink with an invalid target in -debuginfo (BZ#878863). rm -rf %{buildroot}%{gem_instdir}/ext/ %check %{?scl:scl enable %{scl} - << \\EOF} pushd .%{gem_instdir} 2 failutes due to old RSpec https://github.com/rspec/rspec-expectations/pull/284 rspec -IUSD(dirs +1)%{gem_extdir_mri} spec |grep '34 examples, 2 failures' || exit 1 popd %{?scl:EOF} %files %dir %{gem_instdir} %exclude %{gem_instdir}/.* %{gem_libdir} %{gem_extdir_mri} %exclude %{gem_cache} %{gem_spec} %doc %{gem_instdir}/COPYING %files doc %doc %{gem_docdir} %doc %{gem_instdir}/README.md %doc %{gem_instdir}/CHANGELOG %{gem_instdir}/Rakefile %{gem_instdir}/Gemfile* %{gem_instdir}/%{gem_name}.gemspec %{gem_instdir}/spec %changelog * Fri Mar 21 2015 John Doe <[email protected]> - 3.1.2-4 - Initial package.",
"scl enable rh-ror42 -- ruby -r bcrypt -e \"puts BCrypt::Password.create('my password')\"",
"USD2aUSD10USDs./ReniLY.wXPHVBQ9npoeyZf5KzywfpvI5lhjG6Ams3u0hKqwVbW"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-Extending_the_rh-ruby23_Software_Collections |
Template APIs | Template APIs OpenShift Container Platform 4.14 Reference guide for template APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/template_apis/index |
2.4. Storage Formats for Virtual Disks | 2.4. Storage Formats for Virtual Disks QCOW2 Formatted Virtual Machine Storage QCOW2 is a storage format for virtual disks. QCOW stands for QEMU copy-on-write. The QCOW2 format decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. Each logical block is mapped to its physical offset, which enables storage over-commitment and virtual machine snapshots, where each QCOW volume only represents changes made to an underlying virtual disk. The initial mapping points all logical blocks to the offsets in the backing file or volume. When a virtual machine writes data to a QCOW2 volume after a snapshot, the relevant block is read from the backing volume, modified with the new information and written into a new snapshot QCOW2 volume. Then the map is updated to point to the new place. Raw The raw storage format has a performance advantage over QCOW2 in that no formatting is applied to virtual disks stored in the raw format. Virtual machine data operations on virtual disks stored in raw format require no additional work from hosts. When a virtual machine writes data to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume. Raw format requires that the entire space of the defined image be preallocated unless using externally managed thin provisioned LUNs from a storage array. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/qcow2 |
Chapter 8. Performing health checks on Red Hat Quay deployments | Chapter 8. Performing health checks on Red Hat Quay deployments Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios. Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system's current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified quicker. 8.1. Red Hat Quay health check endpoints Important Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output. Table 8.1. Health check endpoints Health check Description Endpoint Example output instance The instance endpoint acquires the entire status of the specific Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , disk_space , registry_gunicorn , service_key , and web_gunicorn. Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/instance or https://{quay-ip-endpoint}/health {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} endtoend The endtoend endpoint conducts checks on all services of your Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , redis , storage . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/endtoend {"data":{"services":{"auth":true,"database":true,"redis":true,"storage":true}},"status_code":200} warning The warning endpoint conducts a check on the warnings. Returns a dict with key-value pairs for the following: disk_space_warning . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/warning {"data":{"services":{"disk_space_warning":true}},"status_code":503} 8.2. Navigating to a Red Hat Quay health check endpoint Use the following procedure to navigate to the instance endpoint. This procedure can be repeated for endtoend and warning endpoints. 
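If you prefer to script these checks instead of using a browser, the following shell sketch shows one way to do it; it is not part of the official procedure, it assumes curl and jq are installed, and the hostname is a placeholder you must replace with your own Quay endpoint.
QUAY=https://quay-server.example.com            # placeholder, replace with your Quay endpoint
status=$(curl -sk -o /tmp/quay-health.json -w '%{http_code}' "$QUAY/health/instance")
if [ "$status" -eq 200 ]; then
  echo "instance is healthy"
else
  echo "instance returned status $status"       # 503 indicates an issue with the deployment
  jq '.data.services' /tmp/quay-health.json     # shows which checks (auth, database, disk_space, ...) failed
fi
The same pattern works for the /health/endtoend and /health/warning endpoints listed in the table above.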
Procedure On your web browser, navigate to https://{quay-ip-endpoint}/health/instance . You are taken to the health instance page, which returns information like the following: {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For Red Hat Quay, "status_code": 200 means that the instance is healthy. Conversely, if you receive "status_code": 503 , there is an issue with your deployment. Additional resources | [
"{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploy_red_hat_quay_-_high_availability/health-check-quay |
Appendix I. BlueStore configuration options | Appendix I. BlueStore configuration options The following are Ceph BlueStore configuration options that can be configured during deployment. Note This list is not complete. rocksdb_cache_size Description The size of the RocksDB cache in MB. Type 32-bit Integer Default 512 bluestore_throttle_bytes Description Maximum bytes available before the user throttles the input or output (I/O) submission. Type Size Default 64 MB bluestore_throttle_deferred_bytes Description Maximum bytes for deferred writes before the user throttles the I/O submission. Type Size Default 128 MB bluestore_throttle_cost_per_io Description Overhead added to the transaction cost in bytes for each I/O. Type Size Default 0 B bluestore_throttle_cost_per_io_hdd Description The default bluestore_throttle_cost_per_io value for HDDs. Type Unsigned integer Default 67 000 bluestore_throttle_cost_per_io_ssd Description The default bluestore_throttle_cost_per_io value for SSDs. Type Unsigned integer Default 4 000 bluestore_debug_enforce_settings Description hdd enforces settings intended for BlueStore above a rotational drive. ssd enforces settings intended for BlueStore above a solid drive Type default , hdd , ssd Default default Note After changing the bluestore_debug_enforce_settings option, restart the OSD. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/bluestore-configuration-options_conf |
Chapter 14. Configuring audit logging | Chapter 14. Configuring audit logging Red Hat Advanced Cluster Security for Kubernetes provides audit logging features that you can use to check all the changes made in Red Hat Advanced Cluster Security for Kubernetes. The audit log captures all the PUT and POST events, which are modifications to Red Hat Advanced Cluster Security for Kubernetes. Use this information to troubleshoot a problem or to keep a record of important events, such as changes to roles and permissions. With audit logging you get a complete picture of all normal and abnormal events that happened on Red Hat Advanced Cluster Security for Kubernetes. Note Audit logging is not enabled by default. You must enable audit logging manually. Warning Currently there is no message delivery guarantee for audit log messages. 14.1. Enabling audit logging When you enable audit logging, every time there is a modification, Red Hat Advanced Cluster Security for Kubernetes sends an HTTP POST message (in JSON format) to the configured system. Prerequisites Configure Splunk or another webhook receiver to handle Red Hat Advanced Cluster Security for Kubernetes log messages. You must have write permission enabled on the Notifiers resource for your role. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Generic Webhook or Splunk . Fill in the required information and turn on the Enable Audit Logging toggle. 14.2. Sample audit log message The log message has the following format: { "headers": { "Accept-Encoding": [ "gzip" ], "Content-Length": [ "586" ], "Content-Type": [ "application/json" ], "User-Agent": [ "Go-http-client/1.1" ] }, "data": { "audit": { "interaction": "CREATE", "method": "UI", "request": { "endpoint": "/v1/notifiers", "method": "POST", "source": { "requestAddr": "10.131.0.7:58276", "xForwardedFor": "8.8.8.8", }, "sourceIp": "8.8.8.8", "payload": { "@type": "storage.Notifier", "enabled": true, "generic": { "auditLoggingEnabled": true, "endpoint": "http://samplewebhookserver.com:8080" }, "id": "b53232ee-b13e-47e0-b077-1e383c84aa07", "name": "Webhook", "type": "generic", "uiEndpoint": "https://localhost:8000" } }, "status": "REQUEST_SUCCEEDED", "time": "2019-05-28T16:07:05.500171300Z", "user": { "friendlyName": "John Doe", "role": { "globalAccess": "READ_WRITE_ACCESS", "name": "Admin" }, "username": "[email protected]" } } } } The source IP address of the request is displayed in the source parameters, which makes it easier for you to investigate audit log requests and identify their origin. To determine the source IP address of a request, RHACS uses the following parameters: xForwardedFor : The X-Forwarded-For header. requestAddr : The remote address header. sourceIp : The IP address of the HTTP request. Important The determination of the source IP address depends on how you expose Central externally. You can consider the following options: If you expose Central behind a load balancer, for example, if you are running Central on Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (Amazon EKS) by using the Kubernetes External Load Balancer service type, see Preserving the client source IP . If you expose Central behind an Ingress Controller that forwards requests by using the X-Forwarded-For header , you do not need to make any configuration changes. If you expose Central with a TLS passthrough route, you cannot determine the source IP address of the client. 
A cluster-internal IP address is displayed in the source parameters as the source IP address of the client. | [
"{ \"headers\": { \"Accept-Encoding\": [ \"gzip\" ], \"Content-Length\": [ \"586\" ], \"Content-Type\": [ \"application/json\" ], \"User-Agent\": [ \"Go-http-client/1.1\" ] }, \"data\": { \"audit\": { \"interaction\": \"CREATE\", \"method\": \"UI\", \"request\": { \"endpoint\": \"/v1/notifiers\", \"method\": \"POST\", \"source\": { \"requestAddr\": \"10.131.0.7:58276\", \"xForwardedFor\": \"8.8.8.8\", }, \"sourceIp\": \"8.8.8.8\", \"payload\": { \"@type\": \"storage.Notifier\", \"enabled\": true, \"generic\": { \"auditLoggingEnabled\": true, \"endpoint\": \"http://samplewebhookserver.com:8080\" }, \"id\": \"b53232ee-b13e-47e0-b077-1e383c84aa07\", \"name\": \"Webhook\", \"type\": \"generic\", \"uiEndpoint\": \"https://localhost:8000\" } }, \"status\": \"REQUEST_SUCCEEDED\", \"time\": \"2019-05-28T16:07:05.500171300Z\", \"user\": { \"friendlyName\": \"John Doe\", \"role\": { \"globalAccess\": \"READ_WRITE_ACCESS\", \"name\": \"Admin\" }, \"username\": \"[email protected]\" } } } }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/configuring/configure-audit-logging |
Chapter 5. Routing traffic by using Argo Rollouts for OpenShift Service Mesh | Chapter 5. Routing traffic by using Argo Rollouts for OpenShift Service Mesh Argo Rollouts in Red Hat OpenShift GitOps support various traffic management mechanisms such as OpenShift Routes and Istio-based OpenShift Service Mesh. The choice for selecting a traffic manager to be used with Argo Rollouts depends on the existing traffic management solution that you are using to deploy cluster workloads. For example, Red Hat OpenShift Routes provides basic traffic management functionality and does not require the use of a sidecar container. However, Red Hat OpenShift Service Mesh provides more advanced routing capabilities by using Istio but does require the configuration of a sidecar container. You can use OpenShift Service Mesh to split traffic between two application versions. Canary version : A new version of an application where you gradually route the traffic. Stable version : The current version of an application. After the canary version is stable and has all the user traffic directed to it, it becomes the new stable version. The stable version is discarded. The Istio-support within Argo Rollouts uses the Gateway and VirtualService resources to handle traffic routing. Gateway : You can use a Gateway to manage inbound and outbound traffic for your mesh. The gateway is the entry point of OpenShift Service Mesh and handles traffic requests sent to an application. VirtualService : VirtualService defines traffic routing rules and the percentage of traffic that goes to underlying services, such as the stable and canary services. Sample deployment scenario For example, in a sample deployment scenario, 100% of the traffic is directed towards the stable version of the application during the initial instance. The application is running as expected, and no additional attempts are made to deploy a new version. However, after deploying a new version of the application, Argo Rollouts creates a new canary deployment based on the new version of the application and routes some percentage of traffic to that new version. When you use Service Mesh, Argo Rollouts automatically modifies the VirtualService resource to control the traffic split percentage between the stable and canary application versions. In the following diagram, 20% of traffic is sent to the canary application version after the first promotion and then 80% is sent to the stable version by the stable service. 5.1. Configuring Argo Rollouts to route traffic by using OpenShift Service Mesh You can use OpenShift Service Mesh to configure Argo Rollouts by creating the following items: A gateway Two Kubernetes services: stable and canary, which point to the pods within each version of the services A VirtualService A rollout custom resource (CR) In the following example procedure, the rollout routes 20% of traffic to a canary version of the application. After a manual promotion, the rollout routes 40% of traffic. After another manual promotion, the rollout performs multiple automated promotions until all traffic is routed to the new application version. Prerequisites You are logged in to the OpenShift Container Platform cluster as an administrator. You installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster. You installed Argo Rollouts on your OpenShift Container Platform cluster. You installed the Argo Rollouts CLI on your system. You installed the OpenShift Service Mesh operator on the cluster and configured the ServiceMeshControlPlane . 
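Before starting the procedure, it can help to watch how Argo Rollouts drives the traffic split, because each canary step is reflected in the weights that the controller writes into the VirtualService. The following loop is only an observation sketch, assuming the rollouts-demo-vsvc VirtualService and the primary HTTP route that are created later in this procedure; replace <namespace> with the namespace where the resources live.
while true; do
  # Prints each destination host with its current weight, for example: rollouts-demo-stable=80 rollouts-demo-canary=20
  oc get virtualservice rollouts-demo-vsvc -n <namespace> \
    -o jsonpath='{range .spec.http[0].route[*]}{.destination.host}={.weight} {end}{"\n"}'
  sleep 5
done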
Procedure Create a Gateway object to accept the inbound traffic for your mesh. Create a YAML file with the following snippet content. Example gateway called rollouts-demo-gateway apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: rollouts-demo-gateway 1 spec: selector: istio: ingressgateway 2 servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" 1 The name of the gateway. 2 Specifies the name of the ingress gateway. The gateway configures exposed ports and protocols but does not include any traffic routing configuration. Apply the YAML file by running the following command. USD oc apply -f gateway.yaml Create the services for the canary and stable versions of the application. In the Administrator perspective of the web console, go to Networking Services . Click Create Service . On the Create Service page, click YAML view and add the following snippet. The following example creates a stable service called rollouts-demo-stable . Stable traffic is directed to this service. apiVersion: v1 kind: Service metadata: name: rollouts-demo-stable spec: ports: 1 - port: 80 targetPort: http protocol: TCP name: http selector: 2 app: rollouts-demo 1 Specifies the name of the port used by the application for running inside the container. 2 Ensure that the contents of the selector field are the same in stable service and Rollout CR. Click Create to create a stable service. On the Create Service page, click YAML view and add the following snippet. The following example creates a canary service called rollouts-demo-canary . Canary traffic is directed to this service. apiVersion: v1 kind: Service metadata: name: rollouts-demo-canary spec: ports: 1 - port: 80 targetPort: http protocol: TCP name: http selector: 2 app: rollouts-demo 1 Specifies the name of the port used by the application for running inside the container. 2 Ensure that the contents of the selector field are the same in canary service and Rollout CR. Click Create to create the canary service. Create a VirtualService to route incoming traffic to stable and canary services. Create a YAML file, and copy the following YAML into it. The following example creates a VirtualService called rollouts-demo-vsvc : apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: rollouts-demo-vsvc spec: gateways: - rollouts-demo-gateway 1 hosts: - rollouts-demo-vsvc.local http: - name: primary route: - destination: host: rollouts-demo-stable 2 port: number: 15372 3 weight: 100 - destination: host: rollouts-demo-canary 4 port: number: 15372 weight: 0 tls: 5 - match: - port: 3000 sniHosts: - rollouts-demo-vsvc.local route: - destination: host: rollouts-demo-stable weight: 100 - destination: host: rollouts-demo-canary weight: 0 1 The name of the gateway. 2 The name of the targeted stable service. 3 Specifies the port number used for listening to traffic. 4 The name of the targeted canary service. 5 Specifies the TLS configuration used to secure the VirtualService. Apply the YAML file by running the following command. USD oc apply -f virtual-service.yaml Create the Rollout CR. In this example, Istio is used as a traffic manager. In the Administrator perspective of the web console, go to Operators Installed Operators Red Hat OpenShift GitOps Rollout . On the Create Rollout page, click YAML view and add the following snippet. 
The following example creates a Rollout CR called rollouts-demo : apiVersion: argoproj.io/v1alpha1 kind: Rollout metadata: name: rollouts-demo spec: replicas: 5 strategy: canary: canaryService: rollouts-demo-canary 1 stableService: rollouts-demo-stable 2 trafficRouting: istio: virtualServices: - name: rollouts-demo-vsvc routes: - primary steps: 3 - setWeight: 20 - pause: {} - setWeight: 40 - pause: {} - setWeight: 60 - pause: {duration: 30} - setWeight: 80 - pause: {duration: 60} revisionHistoryLimit: 2 selector: 4 matchLabels: app: rollouts-demo template: metadata: labels: app: rollouts-demo istio-injection: enabled spec: containers: - name: rollouts-demo image: argoproj/rollouts-demo:blue ports: - name: http containerPort: 8080 protocol: TCP resources: requests: memory: 32Mi cpu: 5m 1 This value must match the name of the created canary Service . 2 This value must match the name of the created stable Service . 3 Specify the steps for the rollout. This example gradually routes 20%, 40%, 60%, and 100% of traffic to the canary version. 4 Ensure that the contents of the selector field are the same as in canary and stable service. Click Create . In the Rollout tab, under the Rollout section, verify that the Status field of the rollout shows Phase: Healthy . Verify that the route is directing 100% of the traffic towards the stable version of the application. Watch the progression of your rollout by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. Example output Name: rollouts-demo Namespace: argo-rollouts Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:blue (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 4m50s └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 4m50s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-bv5zf Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 4m49s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 4m49s ready:1/1 Note When the first instance of the Rollout resource is created, the rollout regulates the amount of traffic to be directed towards the stable and canary application versions. In the initial instance, the creation of the Rollout resource routes all of the traffic towards the stable version of the application and skips the part where the traffic is sent to the canary version. To verify that the service mesh sends 100% of the traffic for the stable service and 0% for the canary service, run the following command: USD oc describe virtualservice/rollouts-demo-vsvc -n <namespace> View the following output displayed in the terminal: route - destination: host: rollouts-demo-stable weight: 100 1 - destination: host: rollouts-demo-canary weight: 0 2 1 A value of 100 means that 100% of traffic is directed to the stable version. 2 A value of 0 means that 0% of traffic is directed to the canary version. Simulate the new canary version of the application by modifying the container image deployed in the rollout. Modify the .spec.template.spec.containers.image value from argoproj/rollouts-demo:blue to argoproj/rollouts-demo:yellow , by running the following command. 
USD oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow -n <namespace> As a result, the container image deployed in the rollout is modified and the rollout initiates a new canary deployment. Note As per the setWeight property defined in the .spec.strategy.canary.steps field of the Rollout resource, initially 20% of traffic to the route reaches the canary version and 80% of traffic is directed towards the stable version. The rollout is paused after 20% of traffic is directed to the canary version. Watch the progression of your rollout by running the following command. USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. In the following example, 80% of traffic is routed to the stable service and 20% of traffic is routed to the canary service. The deployment is then paused indefinitely until you manually promote it to the level. Example output Name: rollouts-demo Namespace: argo-rollouts Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 1/8 SetWeight: 20 ActualWeight: 20 Images: argoproj/rollouts-demo:blue (stable) argoproj/rollouts-demo:yellow (canary) Replicas: Desired: 5 Current: 6 Updated: 1 Ready: 6 Available: 6 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 6m51s ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 99s canary │ └──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 98s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 9m51s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 9m50s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 9m50s ready:1/1 Example with 80% directed to the stable version and 20% of traffic directed to the canary version. route - destination: host: rollouts-demo-stable weight: 80 1 - destination: host: rollouts-demo-canary weight: 20 2 1 A value of 80 means that 80% of traffic is directed to the stable version. 2 A value of 20 means that 20% of traffic is directed to the canary version. Manually promote the deployment to the promotion step. USD oc argo rollouts promote rollouts-demo -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. Watch the progression of your rollout by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. In the following example, 60% of traffic is routed to the stable service and 40% of traffic is routed to the canary service. The deployment is then paused indefinitely until you manually promote it to the level. 
Example output Name: rollouts-demo Namespace: argo-rollouts Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 3/8 SetWeight: 40 ActualWeight: 40 Images: argoproj/rollouts-demo:blue (stable) argoproj/rollouts-demo:yellow (canary) Replicas: Desired: 5 Current: 7 Updated: 2 Ready: 7 Available: 7 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 9m21s ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 99s canary │ └──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 98s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 9m51s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 9m50s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 9m50s ready:1/1 Example of 60% of traffic directed to the stable version and 40% directed to the canary version. route - destination: host: rollouts-demo-stable weight: 60 1 - destination: host: rollouts-demo-canary weight: 40 2 1 A value of 60 means that 60% of traffic is directed to the stable version. 2 A value of 40 means that 40% of traffic is directed to the canary version. Increase the traffic weight in the canary version to 100% and stop routing traffic to the previous stable version of the application by running the following command: USD oc argo rollouts promote rollouts-demo -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. Watch the progression of your rollout by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. After successful completion, the canary revision becomes the new stable revision, so the weight on the stable service is 100% and the weight on the canary service is 0%. | [
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: rollouts-demo-gateway 1 spec: selector: istio: ingressgateway 2 servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: v1 kind: Service metadata: name: rollouts-demo-stable spec: ports: 1 - port: 80 targetPort: http protocol: TCP name: http selector: 2 app: rollouts-demo",
"apiVersion: v1 kind: Service metadata: name: rollouts-demo-canary spec: ports: 1 - port: 80 targetPort: http protocol: TCP name: http selector: 2 app: rollouts-demo",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: rollouts-demo-vsvc spec: gateways: - rollouts-demo-gateway 1 hosts: - rollouts-demo-vsvc.local http: - name: primary route: - destination: host: rollouts-demo-stable 2 port: number: 15372 3 weight: 100 - destination: host: rollouts-demo-canary 4 port: number: 15372 weight: 0 tls: 5 - match: - port: 3000 sniHosts: - rollouts-demo-vsvc.local route: - destination: host: rollouts-demo-stable weight: 100 - destination: host: rollouts-demo-canary weight: 0",
"oc apply -f virtual-service.yaml",
"apiVersion: argoproj.io/v1alpha1 kind: Rollout metadata: name: rollouts-demo spec: replicas: 5 strategy: canary: canaryService: rollouts-demo-canary 1 stableService: rollouts-demo-stable 2 trafficRouting: istio: virtualServices: - name: rollouts-demo-vsvc routes: - primary steps: 3 - setWeight: 20 - pause: {} - setWeight: 40 - pause: {} - setWeight: 60 - pause: {duration: 30} - setWeight: 80 - pause: {duration: 60} revisionHistoryLimit: 2 selector: 4 matchLabels: app: rollouts-demo template: metadata: labels: app: rollouts-demo istio-injection: enabled spec: containers: - name: rollouts-demo image: argoproj/rollouts-demo:blue ports: - name: http containerPort: 8080 protocol: TCP resources: requests: memory: 32Mi cpu: 5m",
"oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1",
"Name: rollouts-demo Namespace: argo-rollouts Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:blue (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 4m50s └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 4m50s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-bv5zf Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 4m49s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 4m49s ready:1/1",
"oc describe virtualservice/rollouts-demo-vsvc -n <namespace>",
"route - destination: host: rollouts-demo-stable weight: 100 1 - destination: host: rollouts-demo-canary weight: 0 2",
"oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow -n <namespace>",
"oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1",
"Name: rollouts-demo Namespace: argo-rollouts Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 1/8 SetWeight: 20 ActualWeight: 20 Images: argoproj/rollouts-demo:blue (stable) argoproj/rollouts-demo:yellow (canary) Replicas: Desired: 5 Current: 6 Updated: 1 Ready: 6 Available: 6 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 6m51s ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 99s canary │ └──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 98s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 9m51s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 9m50s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 9m50s ready:1/1",
"route - destination: host: rollouts-demo-stable weight: 80 1 - destination: host: rollouts-demo-canary weight: 20 2",
"oc argo rollouts promote rollouts-demo -n <namespace> 1",
"oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1",
"Name: rollouts-demo Namespace: argo-rollouts Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 3/8 SetWeight: 40 ActualWeight: 40 Images: argoproj/rollouts-demo:blue (stable) argoproj/rollouts-demo:yellow (canary) Replicas: Desired: 5 Current: 7 Updated: 2 Ready: 7 Available: 7 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 9m21s ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 99s canary │ └──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 98s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 9m51s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 9m50s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 9m50s ready:1/1",
"route - destination: host: rollouts-demo-stable weight: 60 1 - destination: host: rollouts-demo-canary weight: 40 2",
"oc argo rollouts promote rollouts-demo -n <namespace> 1",
"oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/argo_rollouts/routing-traffic-by-using-argo-rollouts-for-openshift-service-mesh |
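If a canary step surfaces a problem, you can also back out of the update instead of promoting it. The following sketch assumes that the Argo Rollouts CLI plugin used above provides the standard abort and undo subcommands; treat the commands as illustrative rather than definitive for your plugin version:

oc argo rollouts abort rollouts-demo -n <namespace>    # stop the canary and shift all traffic back to the stable version
oc argo rollouts undo rollouts-demo -n <namespace>     # optionally roll the Rollout spec back to the previous revision

After an abort, the VirtualService weights return to 100% for the stable service and 0% for the canary service, which you can confirm with the same oc describe virtualservice command shown earlier.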
4.25. Fence kdump | 4.25. Fence kdump Table 4.26, "Fence kdump" lists the fence device parameters used by fence_kdump , the fence agent for the kdump crash recovery service. Note that fence_kdump is not a replacement for traditional fencing methods. The fence_kdump agent can detect only that a node has entered the kdump crash recovery service. This allows the kdump crash recovery service to complete without being preempted by traditional power fencing methods. Table 4.26. Fence kdump luci Field cluster.conf Attribute Description Name name A name for the fence_kdump device. IP Family family IP network family. The default value is auto . IP Port (optional) ipport IP port number that the fence_kdump agent will use to listen for messages. The default value is 7410. Operation Timeout (seconds) (optional) timeout Number of seconds to wait for a message from the failed node. Node name nodename Name or IP address of the node to be fenced. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-kdump-CA |
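As an illustration only, a fence_kdump device and a per-node fence method in /etc/cluster/cluster.conf could look like the following sketch. The device name and node name are hypothetical, the ipport value restates the default from the table above, and the timeout value is an arbitrary example:

<fencedevices>
  <fencedevice agent="fence_kdump" name="kdump-fence" ipport="7410" timeout="30"/>
</fencedevices>
<clusternodes>
  <clusternode name="node1.example.com" nodeid="1">
    <fence>
      <method name="kdump">
        <device name="kdump-fence" nodename="node1.example.com"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>

Because fence_kdump only detects that a node has entered the kdump crash recovery service, it is typically configured as the first fence method for a node, with a traditional power fencing method configured as the next method.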
Chapter 1. Release notes | Chapter 1. Release notes Note For additional information about the OpenShift Serverless life cycle and supported platforms, refer to the Platform Life Cycle Policy . Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply for the most recent OpenShift Serverless releases on OpenShift Container Platform. For an overview of OpenShift Serverless functionality, see About OpenShift Serverless . Note OpenShift Serverless is based on the open source Knative project. For details about the latest Knative component releases, see the Knative blog . 1.1. About API versions API versions are an important measure of the development status of certain features and custom resources in OpenShift Serverless. Creating resources on your cluster that do not use the correct API version can cause issues in your deployment. The OpenShift Serverless Operator automatically upgrades older resources that use deprecated versions of APIs to use the latest version. For example, if you have created resources on your cluster that use older versions of the ApiServerSource API, such as v1beta1 , the OpenShift Serverless Operator automatically updates these resources to use the v1 version of the API when this is available and the v1beta1 version is deprecated. After they have been deprecated, older versions of APIs might be removed in any upcoming release. Using deprecated versions of APIs does not cause resources to fail. However, if you try to use a version of an API that has been removed, it will cause resources to fail. Ensure that your manifests are updated to use the latest version to avoid issues. 1.2. Generally Available and Technology Preview features Features which are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. See the Technology Preview scope of support on the Red Hat Customer Portal for more information about TP features. The following table provides information about which OpenShift Serverless features are GA and which are TP: Table 1.1. Generally Available and Technology Preview features tracker Feature 1.26 1.27 1.28 kn func GA GA GA Quarkus functions GA GA GA Node.js functions TP TP GA TypeScript functions TP TP GA Python functions - - TP Service Mesh mTLS GA GA GA emptyDir volumes GA GA GA HTTPS redirection GA GA GA Kafka broker GA GA GA Kafka sink GA GA GA Init containers support for Knative services GA GA GA PVC support for Knative services GA GA GA TLS for internal traffic TP TP TP Namespace-scoped brokers - TP TP multi-container support - - TP 1.3. Deprecated and removed features Some features that were Generally Available (GA) or a Technology Preview (TP) in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Serverless and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Serverless, refer to the following table: Table 1.2. 
Deprecated and removed features tracker Feature 1.20 1.21 1.22 to 1.26 1.27 1.28 KafkaBinding API Deprecated Deprecated Removed Removed Removed kn func emit ( kn func invoke in 1.21+) Deprecated Removed Removed Removed Removed Serving and Eventing v1alpha1 API - - - Deprecated Deprecated enable-secret-informer-filtering annotation - - - - Deprecated 1.4. Release notes for Red Hat OpenShift Serverless 1.28 OpenShift Serverless 1.28 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.4.1. New features OpenShift Serverless now uses Knative Serving 1.7. OpenShift Serverless now uses Knative Eventing 1.7. OpenShift Serverless now uses Kourier 1.7. OpenShift Serverless now uses Knative ( kn ) CLI 1.7. OpenShift Serverless now uses Knative Kafka 1.7. The kn func CLI plug-in now uses func 1.9.1 version. Node.js and TypeScript runtimes for OpenShift Serverless Functions are now Generally Available (GA). Python runtime for OpenShift Serverless Functions is now available as a Technology Preview. Multi-container support for Knative Serving is now available as a Technology Preview. This feature allows you to use a single Knative service to deploy a multi-container pod. In OpenShift Serverless 1.29 or later, the following components of Knative Eventing will be scaled down from two pods to one: imc-controller imc-dispatcher mt-broker-controller mt-broker-filter mt-broker-ingress The serverless.openshift.io/enable-secret-informer-filtering annotation for the Serving CR is now deprecated. The annotation is valid only for Istio, and not for Kourier. With OpenShift Serverless 1.28, the OpenShift Serverless Operator allows injecting the environment variable ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID for both net-istio and net-kourier . If you enable secret filtering, all of your secrets need to be labeled with networking.internal.knative.dev/certificate-uid: "<id>" . Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets. In one of the following OpenShift Serverless releases, secret filtering will become enabled by default. To prevent failures, label your secrets in advance. 1.4.2. Known issues Currently, runtimes for Python are not supported for OpenShift Serverless Functions on IBM Power, IBM zSystems, and IBM(R) LinuxONE. Node.js, TypeScript, and Quarkus functions are supported on these architectures. On the Windows platform, Python functions cannot be locally built, run, or deployed using the Source-to-Image builder due to the app.sh file permissions. To work around this problem, use the Windows Subsystem for Linux. 1.5. Release notes for Red Hat OpenShift Serverless 1.27 OpenShift Serverless 1.27 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. Important OpenShift Serverless 1.26 is the earliest release that is fully supported on OpenShift Container Platform 4.12. OpenShift Serverless 1.25 and older does not deploy on OpenShift Container Platform 4.12. For this reason, before upgrading OpenShift Container Platform to version 4.12, first upgrade OpenShift Serverless to version 1.26 or 1.27. 1.5.1. New features OpenShift Serverless now uses Knative Serving 1.6. OpenShift Serverless now uses Knative Eventing 1.6. OpenShift Serverless now uses Kourier 1.6. OpenShift Serverless now uses Knative ( kn ) CLI 1.6. 
OpenShift Serverless now uses Knative Kafka 1.6. The kn func CLI plug-in now uses func 1.8.1. Namespace-scoped brokers are now available as a Technology Preview. Such brokers can be used, for instance, to implement role-based access control (RBAC) policies. KafkaSink now uses the CloudEvent binary content mode by default. The binary content mode is more efficient than the structured mode because it uses headers in its body instead of a CloudEvent . For example, for the HTTP protocol, it uses HTTP headers. You can now use the gRPC framework over the HTTP/2 protocol for external traffic using the OpenShift Route on OpenShift Container Platform 4.10 and later. This improves efficiency and speed of the communications between the client and server. API version v1alpha1 of the Knative Operator Serving and Eventings CRDs is deprecated in 1.27. It will be removed in future versions. Red Hat strongly recommends to use the v1beta1 version instead. This does not affect the existing installations, because CRDs are updated automatically when upgrading the Serverless Operator. The delivery timeout feature is now enabled by default. It allows you to specify the timeout for each sent HTTP request. The feature remains a Technology Preview. 1.5.2. Fixed issues Previously, Knative services sometimes did not get into the Ready state, reporting waiting for the load balancer to be ready. This issue has been fixed. 1.5.3. Known issues Integrating OpenShift Serverless with Red Hat OpenShift Service Mesh causes the net-kourier pod to run out of memory on startup when too many secrets are present on the cluster. Namespace-scoped brokers might leave ClusterRoleBindings in the user namespace even after deletion of namespace-scoped brokers. If this happens, delete the ClusterRoleBinding named rbac-proxy-reviews-prom-rb-knative-kafka-broker-data-plane-{{.Namespace}} in the user namespace. If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true , Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless. To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true . 1.6. Release notes for Red Hat OpenShift Serverless 1.26 OpenShift Serverless 1.26 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.6.1. New features OpenShift Serverless Functions with Quarkus is now GA. OpenShift Serverless now uses Knative Serving 1.5. OpenShift Serverless now uses Knative Eventing 1.5. OpenShift Serverless now uses Kourier 1.5. OpenShift Serverless now uses Knative ( kn ) CLI 1.5. OpenShift Serverless now uses Knative Kafka 1.5. OpenShift Serverless now uses Knative Operator 1.3. The kn func CLI plugin now uses func 1.8.1. Persistent volume claims (PVCs) are now GA. PVCs provide permanent data storage for your Knative services. The new trigger filters feature is now available as a Developer Preview. It allows users to specify a set of filter expressions, where each expression evaluates to either true or false for each event. To enable new trigger filters, add the new-trigger-filters: enabled entry in the section of the KnativeEventing type in the operator config map: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing ... ... spec: config: features: new-trigger-filters: enabled ... 
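Once the feature flag is enabled, the new filter expressions are set on individual triggers. The following sketch is illustrative only; the trigger name, broker name, and subscriber are hypothetical, and the exact filter dialects available (for example, exact or cesql) depend on the Knative Eventing version in use:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger
spec:
  broker: default
  filters:
    - exact:
        type: order.created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor

Each expression in the filters array evaluates to either true or false for an event, as described above, and the event is delivered only when all of the listed expressions match.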
Knative Operator 1.3 adds the updated v1beta1 version of the API for operator.knative.dev . To update from v1alpha1 to v1beta1 in your KnativeServing and KnativeEventing custom resource config maps, edit the apiVersion key: Example KnativeServing custom resource config map apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing ... Example KnativeEventing custom resource config map apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing ... 1.6.2. Fixed issues Previously, Federal Information Processing Standards (FIPS) mode was disabled for Kafka broker, Kafka source, and Kafka sink. This has been fixed, and FIPS mode is now available. 1.6.3. Known issues If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true , Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless. To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true . Additional resources Knative documentation on new trigger filters 1.7. Release notes for Red Hat OpenShift Serverless 1.25.0 OpenShift Serverless 1.25.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.7.1. New features OpenShift Serverless now uses Knative Serving 1.4. OpenShift Serverless now uses Knative Eventing 1.4. OpenShift Serverless now uses Kourier 1.4. OpenShift Serverless now uses Knative ( kn ) CLI 1.4. OpenShift Serverless now uses Knative Kafka 1.4. The kn func CLI plugin now uses func 1.7.0. Integrated development environment (IDE) plugins for creating and deploying functions are now available for Visual Studio Code and IntelliJ . Knative Kafka broker is now GA. Knative Kafka broker is a highly performant implementation of the Knative broker API, directly targeting Apache Kafka. It is recommended to not use the MT-Channel-Broker, but the Knative Kafka broker instead. Knative Kafka sink is now GA. A KafkaSink takes a CloudEvent and sends it to an Apache Kafka topic. Events can be specified in either structured or binary content modes. Enabling TLS for internal traffic is now available as a Technology Preview. 1.7.2. Fixed issues Previously, Knative Serving had an issue where the readiness probe failed if the container was restarted after a liveness probe fail. This issue has been fixed. 1.7.3. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. The SinkBinding object does not support custom revision names for services. The Knative Serving Controller pod adds a new informer to watch secrets in the cluster. The informer includes the secrets in the cache, which increases memory consumption of the controller pod. If the pod runs out of memory, you can work around the issue by increasing the memory limit for the deployment. If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true , Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless. To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true . Additional resources Configuring TLS authentication 1.8. Release notes for Red Hat OpenShift Serverless 1.24.0 OpenShift Serverless 1.24.0 is now available. 
New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.8.1. New features OpenShift Serverless now uses Knative Serving 1.3. OpenShift Serverless now uses Knative Eventing 1.3. OpenShift Serverless now uses Kourier 1.3. OpenShift Serverless now uses Knative kn CLI 1.3. OpenShift Serverless now uses Knative Kafka 1.3. The kn func CLI plugin now uses func 0.24. Init containers support for Knative services is now generally available (GA). OpenShift Serverless logic is now available as a Developer Preview. It enables defining declarative workflow models for managing serverless applications. You can now use the cost management service with OpenShift Serverless. 1.8.2. Fixed issues Integrating OpenShift Serverless with Red Hat OpenShift Service Mesh causes the net-istio-controller pod to run out of memory on startup when too many secrets are present on the cluster. It is now possible to enable secret filtering, which causes net-istio-controller to consider only secrets with a networking.internal.knative.dev/certificate-uid label, thus reducing the amount of memory needed. The OpenShift Serverless Functions Technology Preview now uses Cloud Native Buildpacks by default to build container images. 1.8.3. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. In OpenShift Serverless 1.23, support for KafkaBindings and the kafka-binding webhook were removed. However, an existing kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might remain, pointing to the kafka-source-webhook service, which no longer exists. For certain specifications of KafkaBindings on the cluster, kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might be configured to pass any create and update events to various resources, such as Deployments, Knative Services, or Jobs, through the webhook, which would then fail. To work around this issue, manually delete kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration from the cluster after upgrading to OpenShift Serverless 1.23: USD oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true , Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless. To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true . 1.9. Release notes for Red Hat OpenShift Serverless 1.23.0 OpenShift Serverless 1.23.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.9.1. New features OpenShift Serverless now uses Knative Serving 1.2. OpenShift Serverless now uses Knative Eventing 1.2. OpenShift Serverless now uses Kourier 1.2. OpenShift Serverless now uses Knative ( kn ) CLI 1.2. OpenShift Serverless now uses Knative Kafka 1.2. The kn func CLI plugin now uses func 0.24. It is now possible to use the kafka.eventing.knative.dev/external.topic annotation with the Kafka broker. This annotation makes it possible to use an existing externally managed topic instead of the broker creating its own internal topic. The kafka-ch-controller and kafka-webhook Kafka components no longer exist. 
These components have been replaced by the kafka-webhook-eventing component. The OpenShift Serverless Functions Technology Preview now uses Source-to-Image (S2I) by default to build container images. 1.9.2. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. If you delete a namespace that includes a Kafka broker, the namespace finalizer may fail to be removed if the broker's auth.secret.ref.name secret is deleted before the broker. Running OpenShift Serverless with a large number of Knative services can cause Knative activator pods to run close to their default memory limits of 600MB. These pods might be restarted if memory consumption reaches this limit. Requests and limits for the activator deployment can be configured by modifying the KnativeServing custom resource: apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi If you are using Cloud Native Buildpacks as the local build strategy for a function, kn func is unable to automatically start podman or use an SSH tunnel to a remote daemon. The workaround for these issues is to have a Docker or podman daemon already running on the local development computer before deploying a function. On-cluster function builds currently fail for Quarkus and Golang runtimes. They work correctly for Node, Typescript, Python, and Springboot runtimes. If you use net-istio for Ingress and enable mTLS via SMCP using security.dataPlane.mtls: true , Service Mesh deploys DestinationRules for the *.local host, which does not allow DomainMapping for OpenShift Serverless. To work around this issue, enable mTLS by deploying PeerAuthentication instead of using security.dataPlane.mtls: true . Additional resources Source-to-Image 1.10. Release notes for Red Hat OpenShift Serverless 1.22.0 OpenShift Serverless 1.22.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.10.1. New features OpenShift Serverless now uses Knative Serving 1.1. OpenShift Serverless now uses Knative Eventing 1.1. OpenShift Serverless now uses Kourier 1.1. OpenShift Serverless now uses Knative ( kn ) CLI 1.1. OpenShift Serverless now uses Knative Kafka 1.1. The kn func CLI plugin now uses func 0.23. Init containers support for Knative services is now available as a Technology Preview. Persistent volume claim (PVC) support for Knative services is now available as a Technology Preview. The knative-serving , knative-serving-ingress , knative-eventing and knative-kafka system namespaces now have the knative.openshift.io/part-of: "openshift-serverless" label by default. The Knative Eventing - Kafka Broker/Trigger dashboard has been added, which allows visualizing Kafka broker and trigger metrics in the web console. The Knative Eventing - KafkaSink dashboard has been added, which allows visualizing KafkaSink metrics in the web console. The Knative Eventing - Broker/Trigger dashboard is now called Knative Eventing - Channel-based Broker/Trigger . The knative.openshift.io/part-of: "openshift-serverless" label has substituted the knative.openshift.io/system-namespace label. Naming style in Knative Serving YAML configuration files changed from camel case ( ExampleName ) to hyphen style ( example-name ). 
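For example, a camel-cased key and its hyphenated replacement differ only in spelling; the value and its placement in the configuration stay the same. The following sketch uses the default-external-scheme key that is discussed elsewhere in these release notes, purely to illustrate the renaming pattern:

# previous camel-case style
spec:
  config:
    network:
      defaultExternalScheme: "https"

# new hyphen style
spec:
  config:
    network:
      default-external-scheme: "https"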
Beginning with this release, use the hyphen style notation when creating or editing Knative Serving YAML configuration files. 1.10.2. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. 1.11. Release notes for Red Hat OpenShift Serverless 1.21.0 OpenShift Serverless 1.21.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.11.1. New features OpenShift Serverless now uses Knative Serving 1.0 OpenShift Serverless now uses Knative Eventing 1.0. OpenShift Serverless now uses Kourier 1.0. OpenShift Serverless now uses Knative ( kn ) CLI 1.0. OpenShift Serverless now uses Knative Kafka 1.0. The kn func CLI plugin now uses func 0.21. The Kafka sink is now available as a Technology Preview. The Knative open source project has begun to deprecate camel-cased configuration keys in favor of using kebab-cased keys consistently. As a result, the defaultExternalScheme key, previously mentioned in the OpenShift Serverless 1.18.0 release notes, is now deprecated and replaced by the default-external-scheme key. Usage instructions for the key remain the same. 1.11.2. Fixed issues In OpenShift Serverless 1.20.0, there was an event delivery issue affecting the use of kn event send to send events to a service. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), TypeScript functions created with the http template failed to deploy on the cluster. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), deploying a function using the gcr.io registry failed with an error. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), creating a Springboot function project directory with the kn func create command and then running the kn func build command failed with an error message. This issue is now fixed. In OpenShift Serverless 1.19.0 ( func 0.19), some runtimes were unable to build a function by using podman. This issue is now fixed. 1.11.3. Known issues Currently, the domain mapping controller cannot process the URI of a broker, which contains a path that is currently not supported. This means that, if you want to use a DomainMapping custom resource (CR) to map a custom domain to a broker, you must configure the DomainMapping CR with the broker's ingress service, and append the exact path of the broker to the custom domain: Example DomainMapping CR apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1 The URI for the broker is then <domain-name>/<broker-namespace>/<broker-name> . 1.12. Release notes for Red Hat OpenShift Serverless 1.20.0 OpenShift Serverless 1.20.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.12.1. New features OpenShift Serverless now uses Knative Serving 0.26. OpenShift Serverless now uses Knative Eventing 0.26. OpenShift Serverless now uses Kourier 0.26. OpenShift Serverless now uses Knative ( kn ) CLI 0.26. OpenShift Serverless now uses Knative Kafka 0.26. The kn func CLI plugin now uses func 0.20. The Kafka broker is now available as a Technology Preview. Important The Kafka broker, which is currently in Technology Preview, is not supported on FIPS. The kn event plugin is now available as a Technology Preview. 
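As a quick illustration of the kn event plugin mentioned above, you can compose and send a test event from the command line. This is a sketch only; the event type, field name, and target URL are hypothetical, and the exact flags may differ between plugin versions:

kn event build --type com.example.ping --field message=hello --output json
kn event send --type com.example.ping --field message=hello --to-url http://event-display.default.svc.cluster.local

The second command delivers the event directly to the given URL; later sections of these notes show the same send command targeting a Knative service reference instead of a URL.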
The --min-scale and --max-scale flags for the kn service create command have been deprecated. Use the --scale-min and --scale-max flags instead. 1.12.2. Known issues OpenShift Serverless deploys Knative services with a default address that uses HTTPS. When sending an event to a resource inside the cluster, the sender does not have the cluster certificate authority (CA) configured. This causes event delivery to fail, unless the cluster uses globally accepted certificates. For example, an event delivery to a publicly accessible address works: USD kn event send --to-url https://ce-api.foo.example.com/ On the other hand, this delivery fails if the service uses a public address with an HTTPS certificate issued by a custom CA: USD kn event send --to Service:serving.knative.dev/v1:event-display Sending an event to other addressable objects, such as brokers or channels, is not affected by this issue and works as expected. The Kafka broker currently does not work on a cluster with Federal Information Processing Standards (FIPS) mode enabled. If you create a Springboot function project directory with the kn func create command, subsequent running of the kn func build command fails with this error message: [analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle As a workaround, you can change the builder property to gcr.io/paketo-buildpacks/builder:base in the function configuration file func.yaml . Deploying a function using the gcr.io registry fails with this error message: Error: failed to get credentials: failed to verify credentials: status code: 404 As a workaround, use a different registry than gcr.io , such as quay.io or docker.io . TypeScript functions created with the http template fail to deploy on the cluster. As a workaround, in the func.yaml file, replace the following section: buildEnvs: [] with this: buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build In func version 0.20, some runtimes might be unable to build a function by using podman. You might see an error message similar to the following: ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF The following workaround exists for this issue: Update the podman service by adding --time=0 to the service ExecStart definition: Example service configuration ExecStart=/usr/bin/podman USDLOGGING system service --time=0 Restart the podman service by running the following commands: USD systemctl --user daemon-reload USD systemctl restart --user podman.socket Alternatively, you can expose the podman API by using TCP: USD podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534 1.13. Release notes for Red Hat OpenShift Serverless 1.19.0 OpenShift Serverless 1.19.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.13.1. New features OpenShift Serverless now uses Knative Serving 0.25. OpenShift Serverless now uses Knative Eventing 0.25. OpenShift Serverless now uses Kourier 0.25. OpenShift Serverless now uses Knative ( kn ) CLI 0.25. OpenShift Serverless now uses Knative Kafka 0.25. The kn func CLI plugin now uses func 0.19. The KafkaBinding API is deprecated in OpenShift Serverless 1.19.0 and will be removed in a future release. 
HTTPS redirection is now supported and can be configured either globally for a cluster or per each Knative service. 1.13.2. Fixed issues In releases, the Kafka channel dispatcher waited only for the local commit to succeed before responding, which might have caused lost events in the case of an Apache Kafka node failure. The Kafka channel dispatcher now waits for all in-sync replicas to commit before responding. 1.13.3. Known issues In func version 0.19, some runtimes might be unable to build a function by using podman. You might see an error message similar to the following: ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF The following workaround exists for this issue: Update the podman service by adding --time=0 to the service ExecStart definition: Example service configuration ExecStart=/usr/bin/podman USDLOGGING system service --time=0 Restart the podman service by running the following commands: USD systemctl --user daemon-reload USD systemctl restart --user podman.socket Alternatively, you can expose the podman API by using TCP: USD podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534 1.14. Release notes for Red Hat OpenShift Serverless 1.18.0 OpenShift Serverless 1.18.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.14.1. New features OpenShift Serverless now uses Knative Serving 0.24.0. OpenShift Serverless now uses Knative Eventing 0.24.0. OpenShift Serverless now uses Kourier 0.24.0. OpenShift Serverless now uses Knative ( kn ) CLI 0.24.0. OpenShift Serverless now uses Knative Kafka 0.24.7. The kn func CLI plugin now uses func 0.18.0. In the upcoming OpenShift Serverless 1.19.0 release, the URL scheme of external routes will default to HTTPS for enhanced security. If you do not want this change to apply for your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR): ... spec: config: network: defaultExternalScheme: "http" ... If you want the change to apply in 1.18.0 already, add the following YAML: ... spec: config: network: defaultExternalScheme: "https" ... In the upcoming OpenShift Serverless 1.19.0 release, the default service type by which the Kourier Gateway is exposed will be ClusterIP and not LoadBalancer . If you do not want this change to apply to your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR): ... spec: ingress: kourier: service-type: LoadBalancer ... You can now use emptyDir volumes with OpenShift Serverless. See the OpenShift Serverless documentation about Knative Serving for details. Rust templates are now available when you create a function using kn func . 1.14.2. Fixed issues The prior 1.4 version of Camel-K was not compatible with OpenShift Serverless 1.17.0. The issue in Camel-K has been fixed, and Camel-K version 1.4.1 can be used with OpenShift Serverless 1.17.0. Previously, if you created a new subscription for a Kafka channel, or a new Kafka source, a delay was possible in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reported a ready status. As a result, messages that were sent during the time when the data plane was not reporting a ready status, might not have been delivered to the subscriber or sink. 
In OpenShift Serverless 1.18.0, the issue is fixed and the initial messages are no longer lost. For more information about the issue, see Knowledgebase Article #6343981 . 1.14.3. Known issues Older versions of the Knative kn CLI might use older versions of the Knative Serving and Knative Eventing APIs. For example, version 0.23.2 of the kn CLI uses the v1alpha1 API version. On the other hand, newer releases of OpenShift Serverless might no longer support older API versions. For example, OpenShift Serverless 1.18.0 no longer supports version v1alpha1 of the kafkasources.sources.knative.dev API. Consequently, using an older version of the Knative kn CLI with a newer OpenShift Serverless might produce an error because the kn cannot find the outdated API. For example, version 0.23.2 of the kn CLI does not work with OpenShift Serverless 1.18.0. To avoid issues, use the latest kn CLI version available for your OpenShift Serverless release. For OpenShift Serverless 1.18.0, use Knative kn CLI 0.24.0. | [
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing spec: config: features: new-trigger-filters: enabled",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing",
"oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1",
"kn event send --to-url https://ce-api.foo.example.com/",
"kn event send --to Service:serving.knative.dev/v1:event-display",
"[analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle",
"Error: failed to get credentials: failed to verify credentials: status code: 404",
"buildEnvs: []",
"buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"spec: config: network: defaultExternalScheme: \"http\"",
"spec: config: network: defaultExternalScheme: \"https\"",
"spec: ingress: kourier: service-type: LoadBalancer"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/serverless/serverless-release-notes |
Configuring and using network file services | Configuring and using network file services Red Hat Enterprise Linux 9 A guide to configuring and using network file services in Red Hat Enterprise Linux 9. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_network_file_services/index |
8.196. scsi-target-utils | 8.196. scsi-target-utils 8.196.1. RHBA-2013:1684 - scsi-target-utils bug fix update Updated scsi-target-utils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The scsi-target-utils packages contain a daemon and tools to setup Small Computer System Interface (SCSI) targets. Currently, software Internet SCSI (iSCSI) and iSCSI Extensions for RDMA (iSER) targets are supported. Bug Fixes BZ# 910638 Previously, the tgtadm utility did not check for the presence of the libaio library to enable the asynchronous I/O types of backend storage. Consequently, attempts to add a new iSCSI target device with the "tgtadm --bstype aio" command failed with an "invalid request" error message. This update adds libaio as a runtime dependency. Now, using the "--bstype aio" option with the tgtadm utility no longer fails and attempts to add a new logical unit work as expected. BZ# 813636 Prior to this update, when interruptions were occurring in the network, then reconnection of the TCP protocol did not work properly. As a consequence, memory leaks occurred in the tgtd daemon under these circumstances. This bug has been fixed and the TCP reconnection now works correctly in the described scenario. BZ# 865739 Previously, the tgtd daemon did not report its exported targets properly if configured to report them to an Internet Storage Name Service (iSNS) server. Consequently, running the "iscsiadm -m discoverydb -t isns" command failed. This bug has been fixed and tgtd now reports its exported targets correctly in the described scenario. BZ# 922270 Previously, it was not possible to supply command-line parameters to the tgtd daemon. With this update, it is possible to set the TGTD_OPTIONS variable containing the parameters and use it in the /etc/sysconfig/tgtd file. Users of scsi-target-utils are advised to upgrade to these updated packages, which fix these bugs. All running scsi-target-utils services must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/scsi-target-utils |
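To illustrate the fixes above, the following sketch shows how the TGTD_OPTIONS variable and the aio backing-store type might be used together. The debug flag, target ID, and device path are hypothetical examples rather than recommended values:

# /etc/sysconfig/tgtd
TGTD_OPTIONS="-d 1"

# restart the daemon so the options take effect
service tgtd restart

# add a logical unit that uses the asynchronous I/O backing store
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/sdb --bstype aio

The tgtadm call assumes that target ID 1 already exists; it is shown only to demonstrate the "--bstype aio" option mentioned in BZ# 910638.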
Chapter 31. kubernetes | Chapter 31. kubernetes The namespace for Kubernetes-specific metadata Data type group 31.1. kubernetes.pod_name The name of the pod Data type keyword 31.2. kubernetes.pod_id The Kubernetes ID of the pod Data type keyword 31.3. kubernetes.namespace_name The name of the namespace in Kubernetes Data type keyword 31.4. kubernetes.namespace_id The ID of the namespace in Kubernetes Data type keyword 31.5. kubernetes.host The Kubernetes node name Data type keyword 31.6. kubernetes.container_name The name of the container in Kubernetes Data type keyword 31.7. kubernetes.annotations Annotations associated with the Kubernetes object Data type group 31.8. kubernetes.labels Labels present on the original Kubernetes Pod Data type group 31.9. kubernetes.event The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows type Event in Event v1 core . Data type group 31.9.1. kubernetes.event.verb The type of event, ADDED , MODIFIED , or DELETED Data type keyword Example value ADDED 31.9.2. kubernetes.event.metadata Information related to the location and time of the event creation Data type group 31.9.2.1. kubernetes.event.metadata.name The name of the object that triggered the event creation Data type keyword Example value java-mainclass-1.14d888a4cfc24890 31.9.2.2. kubernetes.event.metadata.namespace The name of the namespace where the event originally occurred. Note that it differs from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 31.9.2.3. kubernetes.event.metadata.selfLink A link to the event Data type keyword Example value /api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890 31.9.2.4. kubernetes.event.metadata.uid The unique ID of the event Data type keyword Example value d828ac69-7b58-11e7-9cf5-5254002f560c 31.9.2.5. kubernetes.event.metadata.resourceVersion A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed. Data type integer Example value 311987 31.9.3. kubernetes.event.involvedObject The object that the event is about. Data type group 31.9.3.1. kubernetes.event.involvedObject.kind The type of object Data type keyword Example value ReplicationController 31.9.3.2. kubernetes.event.involvedObject.namespace The namespace name of the involved object. Note that it may differ from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 31.9.3.3. kubernetes.event.involvedObject.name The name of the object that triggered the event Data type keyword Example value java-mainclass-1 31.9.3.4. kubernetes.event.involvedObject.uid The unique ID of the object Data type keyword Example value e6bff941-76a8-11e7-8193-5254002f560c 31.9.3.5. kubernetes.event.involvedObject.apiVersion The version of kubernetes master API Data type keyword Example value v1 31.9.3.6. kubernetes.event.involvedObject.resourceVersion A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed. Data type keyword Example value 308882 31.9.4. kubernetes.event.reason A short machine-understandable string that gives the reason for generating this event Data type keyword Example value SuccessfulCreate 31.9.5. 
kubernetes.event.source_component The component that reported this event Data type keyword Example value replication-controller 31.9.6. kubernetes.event.firstTimestamp The time at which the event was first recorded Data type date Example value 2017-08-07 10:11:57.000000000 Z 31.9.7. kubernetes.event.count The number of times this event has occurred Data type integer Example value 1 31.9.8. kubernetes.event.type The type of event, Normal or Warning . New types could be added in the future. Data type keyword Example value Normal | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields |
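Putting the field reference together, a single event record forwarded by the event router could carry metadata similar to the following sketch. The values shown here are illustrative only and do not come from a real cluster:

{
  "kubernetes": {
    "namespace_name": "openshift-logging",
    "pod_name": "eventrouter-6f9c8d7b5-abcde",
    "container_name": "eventrouter",
    "event": {
      "verb": "ADDED",
      "reason": "SuccessfulCreate",
      "type": "Normal",
      "count": 1,
      "involvedObject": {
        "kind": "ReplicationController",
        "namespace": "default",
        "name": "java-mainclass-1"
      }
    }
  }
}

Note that kubernetes.namespace_name identifies where the eventrouter application runs, while kubernetes.event.involvedObject.namespace identifies where the event originated, as described above.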
Chapter 5. Configuring the hostname | Chapter 5. Configuring the hostname 5.1. Server Endpoints Red Hat build of Keycloak exposes different endpoints to talk with applications as well as to allow accessing the administration console. These endpoints can be categorized into three main groups: Frontend Backend Administration Console The base URL for each group has an important impact on how tokens are issued and validated, on how links are created for actions that require the user to be redirected to Red Hat build of Keycloak (for example, when resetting password through email links), and, most importantly, how applications will discover these endpoints when fetching the OpenID Connect Discovery Document from realms/{realm-name}/.well-known/openid-configuration . 5.1.1. Frontend The frontend endpoints are those accessible through a public domain and usually related to authentication/authorization flows that happen through the front-channel. For instance, when an SPA wants to authenticate their users it redirects them to the authorization_endpoint so that users can authenticate using their browsers through the front-channel. By default, when the hostname settings are not set, the base URL for these endpoints is based on the incoming request so that the HTTP scheme, host, port, and path, are the same from the request. The default behavior also has a direct impact on how the server is going to issue tokens given that the issuer is also based on the URL set to the frontend endpoints. If the hostname settings are not set, the token issuer will also be based on the incoming request and also lack consistency if the client is requesting tokens using different URLs. When deploying to production you usually want a consistent URL for the frontend endpoints and the token issuer regardless of how the request is constructed. In order to achieve this consistency, you can set either the hostname or the hostname-url options. Most of the time, it should be enough to set the hostname option in order to change only the host of the frontend URLs: bin/kc.[sh|bat] start --hostname=<host> When using the hostname option the server is going to resolve the HTTP scheme, port, and path, automatically so that: https scheme is used unless you set hostname-strict-https=false if the proxy option is set, the proxy will use the default ports (i.e.: 80 and 443). If the proxy uses a different port, it needs to be specified via the hostname-port configuration option However, if you want to set not only the host but also a scheme, port, and path, you can set the hostname-url option: bin/kc.[sh|bat] start --hostname-url=<scheme>://<host>:<port>/<path> This option gives you more flexibility as you can set the different parts of the URL from a single option. Note that the hostname and hostname-url are mutually exclusive. Note By hostname and proxy configuration options you affect only the static resources URLs, redirect URIs, OIDC well-known endpoints, etc. In order to change, where/on which port the server actually listens on, you need to use the http/tls configuration options (e.g. http-host , https-port , etc.). For more details, see Configuring TLS and All configuration . 5.1.2. Backend The backend endpoints are those accessible through a public domain or through a private network. They are used for a direct communication between the server and clients without any intermediary but plain HTTP requests. 
For instance, after the user is authenticated an SPA wants to exchange the code sent by the server with a set of tokens by sending a token request to token_endpoint . By default, the URLs for backend endpoints are also based on the incoming request. To override this behavior, set the hostname-strict-backchannel configuration option by entering this command: bin/kc.[sh|bat] start --hostname=<value> --hostname-strict-backchannel=true By setting the hostname-strict-backchannel option, the URLs for the backend endpoints are going to be exactly the same as the frontend endpoints. When all applications connected to Red Hat build of Keycloak communicate through the public URL, set hostname-strict-backchannel to true . Otherwise, leave this parameter as false to allow client-server communication through a private network. 5.1.3. Administration Console The server exposes the administration console and static resources using a specific URL. By default, the URLs for the administration console are also based on the incoming request. However, you can set a specific host or base URL if you want to restrict access to the administration console using a specific URL. Similarly to how you set the frontend URLs, you can use the hostname-admin and hostname-admin-url options to achieve that. Note that if HTTPS is enabled ( http-enabled configuration option is set to false, which is the default setting for the production mode), the Red Hat build of Keycloak server automatically assumes you want to use HTTPS URLs. The admin console then tries to contact Red Hat build of Keycloak over HTTPS and HTTPS URLs are also used for its configured redirect/web origin URLs. It is not recommended for production, but you can use HTTP URL as hostname-admin-url to override this behaviour. Most of the time, it should be enough to set the hostname-admin option in order to change only the host of the administration console URLs: bin/kc.[sh|bat] start --hostname-admin=<host> However, if you want to set not only the host but also a scheme, port, and path, you can set the hostname-admin-url option: bin/kc.[sh|bat] start --hostname-admin-url=<scheme>://<host>:<port>/<path> Note that the hostname-admin and hostname-admin-url are mutually exclusive. To reduce attack surface, the administration endpoints for Red Hat build of Keycloak and the Admin Console should not be publicly accessible. Therefore, you can secure them by using a reverse proxy. For more information about which paths to expose using a reverse proxy, see Using a reverse proxy . 5.2. Example Scenarios The following are more example scenarios and the corresponding commands for setting up a hostname. Note that the start command requires setting up TLS. The corresponding options are not shown for example purposes. For more details, see Configuring TLS . 5.2.1. Exposing the server behind a TLS termination proxy In this example, the server is running behind a TLS termination proxy and publicly available from https://mykeycloak . Configuration: bin/kc.[sh|bat] start --hostname=mykeycloak --proxy=edge 5.2.2. Exposing the server without a proxy In this example, the server is running without a proxy and exposed using a URL using HTTPS. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname-url=https://mykeycloak It is highly recommended using a TLS termination proxy in front of the server for security and availability reasons. For more details, see Using a reverse proxy . 5.2.3. 
Forcing backend endpoints to use the same URL the server is exposed In this example, backend endpoints are exposed using the same URL used by the server so that clients always fetch the same URL regardless of the origin of the request. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-strict-backchannel=true 5.2.4. Exposing the server using a port other than the default ports In this example, the server is accessible using a port other than the default ports. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname-url=https://mykeycloak:8989 5.2.5. Exposing Red Hat build of Keycloak behind a TLS reencrypt proxy using different ports In this example, the server is running behind a proxy and both the server and the proxy are using their own certificates, so the communication between Red Hat build of Keycloak and the proxy is encrypted. Because we want the proxy to use its own certificate, the proxy mode reencrypt will be used. We need to keep in mind that the proxy configuration options (as well as hostname configuration options) are not changing the ports on which the server actually is listening on (it changes only the ports of static resources like JavaScript and CSS links, OIDC well-known endpoints, redirect URIs, etc.). Therefore, we need to use HTTP configuration options to change the Red Hat build of Keycloak server to internally listen on a different port, e.g. 8543. The proxy will be listening on the port 8443 (the port visible while accessing the console via a browser). The example hostname my-keycloak.org will be used for the server and similarly the admin console will be accessible via the admin.my-keycloak.org subdomain. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --proxy=reencrypt --https-port=8543 --hostname-url=https://my-keycloak.org:8443 --hostname-admin-url=https://admin.my-keycloak.org:8443 Note: there is currently no difference between the passthrough and reencrypt modes. For now, this is meant for future-proof configuration compatibility. The only difference is that when the edge proxy mode is used, HTTP is implicitly enabled (again as mentioned above, this does not affect the server behaviour). Warning Usage any of the proxy modes makes Red Hat build of Keycloak rely on Forwarded and X-Forwarded-* headers. Misconfiguration may leave Red Hat build of Keycloak exposed to security issues. For more details, see Using a reverse proxy . 5.3. Troubleshooting To troubleshoot the hostname configuration, you can use a dedicated debug tool which can be enabled as: Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true Then after Red Hat build of Keycloak started properly, open your browser and go to: http://mykeycloak:8080/realms/<your-realm>/hostname-debug 5.4. Relevant options Table 5.1. By default, this endpoint is disabled (--hostname-debug=false) Value hostname Hostname for the Keycloak server. CLI: --hostname Env: KC_HOSTNAME hostname-admin The hostname for accessing the administration console. Use this option if you are exposing the administration console using a hostname other than the value set to the hostname option. 
CLI: --hostname-admin Env: KC_HOSTNAME_ADMIN hostname-admin-url Set the base URL for accessing the administration console, including scheme, host, port and path CLI: --hostname-admin-url Env: KC_HOSTNAME_ADMIN_URL hostname-debug Toggle the hostname debug page that is accessible at /realms/master/hostname-debug CLI: --hostname-debug Env: KC_HOSTNAME_DEBUG true , false (default) hostname-path This should be set if proxy uses a different context-path for Keycloak. CLI: --hostname-path Env: KC_HOSTNAME_PATH hostname-port The port used by the proxy when exposing the hostname. Set this option if the proxy uses a port other than the default HTTP and HTTPS ports. CLI: --hostname-port Env: KC_HOSTNAME_PORT -1 (default) hostname-strict Disables dynamically resolving the hostname from request headers. Should always be set to true in production, unless proxy verifies the Host header. CLI: --hostname-strict Env: KC_HOSTNAME_STRICT true (default), false hostname-strict-backchannel By default backchannel URLs are dynamically resolved from request headers to allow internal and external applications. If all applications use the public URL this option should be enabled. CLI: --hostname-strict-backchannel Env: KC_HOSTNAME_STRICT_BACKCHANNEL true , false (default) hostname-url Set the base URL for frontend URLs, including scheme, host, port and path. CLI: --hostname-url Env: KC_HOSTNAME_URL proxy The proxy address forwarding mode if the server is behind a reverse proxy. CLI: --proxy Env: KC_PROXY none (default), edge , reencrypt , passthrough | [
"bin/kc.[sh|bat] start --hostname=<host>",
"bin/kc.[sh|bat] start --hostname-url=<scheme>://<host>:<port>/<path>",
"bin/kc.[sh|bat] start --hostname=<value> --hostname-strict-backchannel=true",
"bin/kc.[sh|bat] start --hostname-admin=<host>",
"bin/kc.[sh|bat] start --hostname-admin-url=<scheme>://<host>:<port>/<path>",
"bin/kc.[sh|bat] start --hostname=mykeycloak --proxy=edge",
"bin/kc.[sh|bat] start --hostname-url=https://mykeycloak",
"bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-strict-backchannel=true",
"bin/kc.[sh|bat] start --hostname-url=https://mykeycloak:8989",
"bin/kc.[sh|bat] start --proxy=reencrypt --https-port=8543 --hostname-url=https://my-keycloak.org:8443 --hostname-admin-url=https://admin.my-keycloak.org:8443",
"bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/hostname- |
Chapter 2. Preparing your environment for managing IdM using Ansible playbooks | Chapter 2. Preparing your environment for managing IdM using Ansible playbooks As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Create a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. Using this practice, you can find all your playbooks in one place and you can run your playbooks without invoking root privileges. Note You only need root privileges on the managed nodes to execute the ipaserver , ipareplica , ipaclient and ipabackup ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: These commands require that you enter the IdM admin password. Additional resources Installing an Identity Management server using an Ansible playbook How to build your inventory | [
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_replication_in_identity_management/preparing-your-environment-for-managing-idm-using-ansible-playbooks_managing-replication-in-idm |
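With the ansible.cfg and inventory files above in place, a playbook stored in ~/MyPlaybooks can be run directly against the ipaserver host group. The following is a minimal sketch only: it assumes the ipauser module provided by the ansible-freeipa package is available on the control node, and the user details and the ipaadmin_password variable are illustrative values, not part of the original procedure.

---
- name: Ensure a test user exists in IdM
  hosts: ipaserver
  become: false

  tasks:
  - name: Ensure user testuser is present      # ipauser ships with ansible-freeipa
    ipauser:
      ipaadmin_password: "{{ ipaadmin_password }}"   # assumed to be supplied at run time
      name: testuser
      first: Test
      last: User
      state: present

Such a playbook is typically run from within ~/MyPlaybooks with ansible-playbook <playbook>.yml, which picks up the inventory file through the ansible.cfg shown above.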
Chapter 151. XPath | Chapter 151. XPath Camel supports XPath to allow an Expression or Predicate to be used in the DSL . For example, you could use XPath to create a predicate in a Message Filter or as an expression for a Recipient List . 151.1. Dependencies When using xpath with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xpath-starter</artifactId> </dependency> 151.2. XPath Language options The XPath language supports 10 options, which are listed below. Name Default Java Type Description documentType String Name of class for document type The default value is org.w3c.dom.Document. resultType Enum Sets the class name of the result type (type from output) The default result type is NodeSet. Enum values: NUMBER STRING BOOLEAN NODESET NODE saxon Boolean Whether to use Saxon. factoryRef String References to a custom XPathFactory to lookup in the registry. objectModel String The XPath object model to use. logNamespaces Boolean Whether to log namespaces which can assist during troubleshooting. headerName String Name of header to use as input, instead of the message body. threadSafety Boolean Whether to enable thread-safety for the returned result of the xpath expression. This applies to when using NODESET as the result type, and the returned set has multiple elements. In this situation there can be thread-safety issues if you process the NODESET concurrently such as from a Camel Splitter EIP in parallel processing mode. This option prevents concurrency issues by doing defensive copies of the nodes. It is recommended to turn this option on if you are using camel-saxon or Saxon in your application. Saxon has thread-safety issues which can be prevented by turning this option on. preCompile Boolean Whether to enable pre-compiling the xpath expression during initialization phase. pre-compile is enabled by default. This can be used to turn off, for example in cases the compilation phase is desired at the starting phase, such as if the application is ahead of time compiled (for example with camel-quarkus) which would then load the xpath factory of the built operating system, and not a JVM runtime. trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 151.3. Namespaces You can easily use namespaces with XPath expressions using the Namespaces helper class. 151.4. Variables Variables in XPath is defined in different namespaces. The default namespace is http://camel.apache.org/schema/spring . Namespace URI Local part Type Description http://camel.apache.org/xml/in/ in Message the message http://camel.apache.org/xml/out/ out Message deprecated the output message (do not use) http://camel.apache.org/xml/function/ functions Object Additional functions http://camel.apache.org/xml/variables/environment-variables env Object OS environment variables http://camel.apache.org/xml/variables/system-properties system Object Java System properties http://camel.apache.org/xml/variables/exchange-property Object the exchange property Camel will resolve variables according to either: namespace given no namespace given 151.4.1. Namespace given If the namespace is given then Camel is instructed exactly what to return. However, when resolving Camel will try to resolve a header with the given local part first, and return it. If the local part has the value body then the body is returned instead. 151.4.2. 
No namespace given If there is no namespace given then Camel resolves only based on the local part. Camel will try to resolve a variable in the following steps: from variables that has been set using the variable(name, value) fluent builder from message.in.header if there is a header with the given key from exchange.properties if there is a property with the given key 151.5. Functions Camel adds the following XPath functions that can be used to access the exchange: Function Argument Type Description in:body none Object Will return the message body. in:header the header name Object Will return the message header. out:body none Object deprecated Will return the out message body. out:header the header name Object deprecated Will return the out message header. function:properties key for property String To use a . function:simple simple expression Object To evaluate a language. Note function:properties and function:simple is not supported when the return type is a NodeSet , such as when using with a Split EIP. Here's an example showing some of these functions in use. 151.5.1. Functions example If you prefer to configure your routes in your Spring XML file then you can use XPath expressions as follows <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring" xmlns:foo="http://example.com/person"> <route> <from uri="activemq:MyQueue"/> <filter> <xpath>/foo:person[@name='James']</xpath> <to uri="mqseries:SomeOtherQueue"/> </filter> </route> </camelContext> </beans> Notice how we can reuse the namespace prefixes, foo in this case, in the XPath expression for easier namespace based XPath expressions. 151.6. Stream based message bodies If the message body is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once . So often when you use XPath as Message Filter or Content Based Router then you need to access the data multiple times, and you should use Stream Caching or convert the message body to a String prior which is safe to be re-read multiple times. from("queue:foo"). filter().xpath("//foo")). to("queue:bar") from("queue:foo"). choice().xpath("//foo")).to("queue:bar"). otherwise().to("queue:others"); 151.7. Setting result type The XPath expression will return a result type using native XML objects such as org.w3c.dom.NodeList . However, many times you want a result type to be a String . To do this you have to instruct the XPath which result type to use. In Java DSL: xpath("/foo:person/@id", String.class) In XML DSL you use the resultType attribute to provide the fully qualified classname. <xpath resultType="java.lang.String">/foo:person/@id</xpath> Note Classes from java.lang can omit the FQN name, so you can use resultType="String" Using @XPath annotation: @XPath(value = "concat('foo-',//order/name/)", resultType = String.class) String name) Where we use the xpath function concat to prefix the order name with foo- . In this case we have to specify that we want a String as result type, so the concat function works. 151.8. Using XPath on Headers Some users may have XML stored in a header. 
To apply an XPath to a header's value you can do this by defining the 'headerName' attribute. <xpath headerName="invoiceDetails">/invoice/@orderType = 'premium'</xpath> And in Java DSL you specify the headerName as the 2nd parameter as shown: xpath("/invoice/@orderType = 'premium'", "invoiceDetails") 151.9. Example Here is a simple example using an XPath expression as a predicate in a Message Filter : from("direct:start") .filter().xpath("/person[@name='James']") .to("mock:result"); And in XML <route> <from uri="direct:start"/> <filter> <xpath>/person[@name='James']</xpath> <to uri="mock:result"/> </filter> </route> 151.10. Using namespaces If you have a standard set of namespaces you wish to work with and wish to share them across many XPath expressions you can use the org.apache.camel.support.builder.Namespaces when using Java DSL as shown: Namespaces ns = new Namespaces("c", "http://acme.com/cheese"); from("direct:start") .filter(xpath("/c:person[@name='James']", ns)) .to("mock:result"); Notice how the namespaces are provided to xpath with the ns variable that are passed in as the 2nd parameter. Each namespace is a key=value pair, where the prefix is the key. In the XPath expression then the namespace is used by its prefix, eg: /c:person[@name='James'] The namespace builder supports adding multiple namespaces as shown: Namespaces ns = new Namespaces("c", "http://acme.com/cheese") .add("w", "http://acme.com/wine") .add("b", "http://acme.com/beer"); When using namespaces in XML DSL then its different, as you setup the namespaces in the XML root tag (or one of the camelContext , routes , route tags). In the XML example below we use Spring XML where the namespace is declared in the root tag beans , in the line with xmlns:foo="http://example.com/person" : <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:foo="http://example.com/person" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd "> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <filter> <xpath logNamespaces="true">/foo:person[@name='James']</xpath> <to uri="mock:result"/> </filter> </route> </camelContext> </beans> This namespace uses foo as prefix, so the <xpath> expression uses /foo: to use this namespace. 151.11. Using @XPath Annotation for Bean Integration You can use Bean Integration to invoke a method on a bean and use various languages such as @XPath to extract a value from the message and bind it to a method parameter. Note The default @XPath annotation has SOAP and XML namespaces available. public class Foo { @Consume(uri = "activemq:my.queue") public void doSomething(@XPath("/person/@name") String name, String xml) { // process the inbound message here } } 151.12. Using XPathBuilder without an Exchange You can now use the org.apache.camel.language.xpath.XPathBuilder without the need for an Exchange . This comes handy if you want to use it as a helper to do custom XPath evaluations. It requires that you pass in a CamelContext since a lot of the moving parts inside the XPathBuilder requires access to the Camel Type Converter and hence why CamelContext is needed. 
For example, you can do something like this: boolean matches = XPathBuilder.xpath("/foo/bar/@xyz").matches(context, "<foo><bar xyz='cheese'/></foo>")); This will match the given predicate. You can also evaluate as shown in the following three examples: String name = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>cheese</bar></foo>", String.class); Integer number = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>123</bar></foo>", Integer.class); Boolean bool = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>true</bar></foo>", Boolean.class); Evaluating with a String result is a common requirement and make this simpler: String name = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>cheese</bar></foo>"); 151.13. Using Saxon with XPathBuilder You need to add camel-saxon as dependency to your project. It's now easier to use Saxon with the XPathBuilder which can be done in several ways as shown below Using a custom XPathFactory Using ObjectModel 151.13.1. Setting a custom XPathFactory using System Property Camel now supports reading the JVM system property javax.xml.xpath.XPathFactory that can be used to set a custom XPathFactory to use. This unit test shows how this can be done to use Saxon instead: Camel will log at INFO level if it uses a non default XPathFactory such as: XPathBuilder INFO Using system property javax.xml.xpath.XPathFactory:http://saxon.sf.net/jaxp/xpath/om with value: net.sf.saxon.xpath.XPathFactoryImpl when creating XPathFactory To use Apache Xerces you can configure the system property -Djavax.xml.xpath.XPathFactory=org.apache.xpath.jaxp.XPathFactoryImpl 151.13.2. Enabling Saxon from XML DSL Similarly to Java DSL, to enable Saxon from XML DSL you have three options: Referring to a custom factory: <xpath factoryRef="saxonFactory" resultType="java.lang.String">current-dateTime()</xpath> And declare a bean with the factory: <bean id="saxonFactory" class="net.sf.saxon.xpath.XPathFactoryImpl"/> Specifying the object model: <xpath objectModel="http://saxon.sf.net/jaxp/xpath/om" resultType="java.lang.String">current-dateTime()</xpath> And the recommended approach is to set saxon=true as shown: <xpath saxon="true" resultType="java.lang.String">current-dateTime()</xpath> 151.14. Namespace auditing to aid debugging Many XPath-related issues that users frequently face are linked to the usage of namespaces. You may have some misalignment between the namespaces present in your message, and those that your XPath expression is aware of or referencing. XPath predicates or expressions that are unable to locate the XML elements and attributes due to namespaces issues may simply look like they are not working , when in reality all there is to it is a lack of namespace definition. Namespaces in XML are completely necessary, and while we would love to simplify their usage by implementing some magic or voodoo to wire namespaces automatically, truth is that any action down this path would disagree with the standards and would greatly hinder interoperability. Therefore, the utmost we can do is assist you in debugging such issues by adding two new features to the XPath Expression Language and are thus accessible from both predicates and expressions. 151.14.1. Logging the Namespace Context of your XPath expression/predicate Every time a new XPath expression is created in the internal pool, Camel will log the namespace context of the expression under the org.apache.camel.language.xpath.XPathBuilder logger. 
Since Camel represents Namespace Contexts in a hierarchical fashion (parent-child relationships), the entire tree is output in a recursive manner with the following format: Any of these options can be used to activate this logging: Enable TRACE logging on the org.apache.camel.language.xpath.XPathBuilder logger, or some parent logger such as org.apache.camel or the root logger Enable the logNamespaces option as indicated in the following section, in which case the logging will occur on the INFO level 151.14.2. Auditing namespaces Camel is able to discover and dump all namespaces present on every incoming message before evaluating an XPath expression, providing all the richness of information you need to help you analyse and pinpoint possible namespace issues. To achieve this, it in turn internally uses another specially tailored XPath expression to extract all namespace mappings that appear in the message, displaying the prefix and the full namespace URI(s) for each individual mapping. Some points to take into account: The implicit XML namespace ( xmlns:xml="http://www.w3.org/XML/1998/namespace" ) is suppressed from the output because it adds no value Default namespaces are listed under the DEFAULT keyword in the output Keep in mind that namespaces can be remapped under different scopes. Think of a top-level 'a' prefix which in inner elements can be assigned a different namespace, or the default namespace changing in inner scopes. For each discovered prefix, all associated URIs are listed. You can enable this option in Java DSL and XML DSL: Java DSL: XPathBuilder.xpath("/foo:person/@id", String.class).logNamespaces() XML DSL: <xpath logNamespaces="true" resultType="String">/foo:person/@id</xpath> The result of the auditing will be appeared at the INFO level under the org.apache.camel.language.xpath.XPathBuilder logger and will look like the following: 2012-01-16 13:23:45,878 [stSaxonWithFlag] INFO XPathBuilder - Namespaces discovered in message: {xmlns:a=[http://apache.org/camel], DEFAULT=[http://apache.org/default], xmlns:b=[http://apache.org/camelA, http://apache.org/camelB]} 151.15. Loading script from external resource You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , eg to refer to a file on the classpath you can do: .setHeader("myHeader").xpath("resource:classpath:myxpath.txt", String.class) 151.16. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. Name Description Default Type camel.language.xpath.document-type Name of class for document type The default value is org.w3c.dom.Document. String camel.language.xpath.enabled Whether to enable auto configuration of the xpath language. This is enabled by default. Boolean camel.language.xpath.factory-ref References to a custom XPathFactory to lookup in the registry. String camel.language.xpath.log-namespaces Whether to log namespaces which can assist during troubleshooting. false Boolean camel.language.xpath.object-model The XPath object model to use. String camel.language.xpath.pre-compile Whether to enable pre-compiling the xpath expression during initialization phase. pre-compile is enabled by default. 
This can be used to turn off, for example in cases the compilation phase is desired at the starting phase, such as if the application is ahead of time compiled (for example with camel-quarkus) which would then load the xpath factory of the built operating system, and not a JVM runtime. true Boolean camel.language.xpath.saxon Whether to use Saxon. false Boolean camel.language.xpath.thread-safety Whether to enable thread-safety for the returned result of the xpath expression. This applies to when using NODESET as the result type, and the returned set has multiple elements. In this situation there can be thread-safety issues if you process the NODESET concurrently such as from a Camel Splitter EIP in parallel processing mode. This option prevents concurrency issues by doing defensive copies of the nodes. It is recommended to turn this option on if you are using camel-saxon or Saxon in your application. Saxon has thread-safety issues which can be prevented by turning this option on. false Boolean camel.language.xpath.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xpath-starter</artifactId> </dependency>",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\" xmlns:foo=\"http://example.com/person\"> <route> <from uri=\"activemq:MyQueue\"/> <filter> <xpath>/foo:person[@name='James']</xpath> <to uri=\"mqseries:SomeOtherQueue\"/> </filter> </route> </camelContext> </beans>",
"from(\"queue:foo\"). filter().xpath(\"//foo\")). to(\"queue:bar\")",
"from(\"queue:foo\"). choice().xpath(\"//foo\")).to(\"queue:bar\"). otherwise().to(\"queue:others\");",
"xpath(\"/foo:person/@id\", String.class)",
"<xpath resultType=\"java.lang.String\">/foo:person/@id</xpath>",
"@XPath(value = \"concat('foo-',//order/name/)\", resultType = String.class) String name)",
"<xpath headerName=\"invoiceDetails\">/invoice/@orderType = 'premium'</xpath>",
"xpath(\"/invoice/@orderType = 'premium'\", \"invoiceDetails\")",
"from(\"direct:start\") .filter().xpath(\"/person[@name='James']\") .to(\"mock:result\");",
"<route> <from uri=\"direct:start\"/> <filter> <xpath>/person[@name='James']</xpath> <to uri=\"mock:result\"/> </filter> </route>",
"Namespaces ns = new Namespaces(\"c\", \"http://acme.com/cheese\"); from(\"direct:start\") .filter(xpath(\"/c:person[@name='James']\", ns)) .to(\"mock:result\");",
"/c:person[@name='James']",
"Namespaces ns = new Namespaces(\"c\", \"http://acme.com/cheese\") .add(\"w\", \"http://acme.com/wine\") .add(\"b\", \"http://acme.com/beer\");",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:foo=\"http://example.com/person\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd \"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <filter> <xpath logNamespaces=\"true\">/foo:person[@name='James']</xpath> <to uri=\"mock:result\"/> </filter> </route> </camelContext> </beans>",
"public class Foo { @Consume(uri = \"activemq:my.queue\") public void doSomething(@XPath(\"/person/@name\") String name, String xml) { // process the inbound message here } }",
"boolean matches = XPathBuilder.xpath(\"/foo/bar/@xyz\").matches(context, \"<foo><bar xyz='cheese'/></foo>\"));",
"String name = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>cheese</bar></foo>\", String.class); Integer number = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>123</bar></foo>\", Integer.class); Boolean bool = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>true</bar></foo>\", Boolean.class);",
"String name = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>cheese</bar></foo>\");",
"XPathBuilder INFO Using system property javax.xml.xpath.XPathFactory:http://saxon.sf.net/jaxp/xpath/om with value: net.sf.saxon.xpath.XPathFactoryImpl when creating XPathFactory",
"-Djavax.xml.xpath.XPathFactory=org.apache.xpath.jaxp.XPathFactoryImpl",
"<xpath factoryRef=\"saxonFactory\" resultType=\"java.lang.String\">current-dateTime()</xpath>",
"<bean id=\"saxonFactory\" class=\"net.sf.saxon.xpath.XPathFactoryImpl\"/>",
"<xpath objectModel=\"http://saxon.sf.net/jaxp/xpath/om\" resultType=\"java.lang.String\">current-dateTime()</xpath>",
"<xpath saxon=\"true\" resultType=\"java.lang.String\">current-dateTime()</xpath>",
"[me: {prefix -> namespace}, {prefix -> namespace}], [parent: [me: {prefix -> namespace}, {prefix -> namespace}], [parent: [me: {prefix -> namespace}]]]",
"XPathBuilder.xpath(\"/foo:person/@id\", String.class).logNamespaces()",
"<xpath logNamespaces=\"true\" resultType=\"String\">/foo:person/@id</xpath>",
"2012-01-16 13:23:45,878 [stSaxonWithFlag] INFO XPathBuilder - Namespaces discovered in message: {xmlns:a=[http://apache.org/camel], DEFAULT=[http://apache.org/default], xmlns:b=[http://apache.org/camelA, http://apache.org/camelB]}",
".setHeader(\"myHeader\").xpath(\"resource:classpath:myxpath.txt\", String.class)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-xpath-language-starter |
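To make the Spring Boot auto-configuration options in section 151.16 more concrete, the snippet below sketches how a few of them could be set in an application.yaml file. The property names are taken from the table above; the chosen values are illustrative assumptions only, and enabling Saxon still requires the camel-saxon dependency as described earlier.

camel:
  language:
    xpath:
      saxon: true            # use Saxon instead of the default JAXP XPath engine
      log-namespaces: true   # log discovered namespaces to assist troubleshooting
      thread-safety: true    # defensive copies for NODESET results processed concurrently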
Chapter 4. Configuring OAuth clients | Chapter 4. Configuring OAuth clients Several OAuth clients are created by default in OpenShift Container Platform. You can also register and configure additional OAuth clients. 4.1. Default OAuth clients The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host 4.2. Registering an additional OAuth client If you need an additional OAuth client to manage authentication for your OpenShift Container Platform cluster, you can register one. Procedure To register additional OAuth clients: USD oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: "..." 2 redirectURIs: - "http://www.example.com/" 3 grantMethod: prompt 4 ') 1 The name of the OAuth client is used as the client_id parameter when making requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token . 2 The secret is used as the client_secret parameter when making requests to <namespace_route>/oauth/token . 3 The redirect_uri parameter specified in requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token must be equal to or prefixed by one of the URIs listed in the redirectURIs parameter value. 4 The grantMethod is used to determine what action to take when this client requests tokens and has not yet been granted access by the user. Specify auto to automatically approve the grant and retry the request, or prompt to prompt the user to approve or deny the grant. 4.3. Configuring token inactivity timeout for an OAuth client You can configure OAuth clients to expire OAuth tokens after a set period of inactivity. By default, no token inactivity timeout is set. Note If the token inactivity timeout is also configured in the internal OAuth server configuration, the timeout that is set in the OAuth client overrides that value. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an identity provider (IDP). Procedure Update the OAuthClient configuration to set a token inactivity timeout. Edit the OAuthClient object: USD oc edit oauthclient <oauth_client> 1 1 Replace <oauth_client> with the OAuth client to configure, for example, console . Add the accessTokenInactivityTimeoutSeconds field and set your timeout value: apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: ... accessTokenInactivityTimeoutSeconds: 600 1 1 The minimum allowed timeout value in seconds is 300 . Save the file to apply the changes. Verification Log in to the cluster with an identity from your IDP. Be sure to use the OAuth client that you just configured. Perform an action and verify that it was successful. Wait longer than the configured timeout without using the identity. In this procedure's example, wait longer than 600 seconds. Try to perform an action from the same identity's session. This attempt should fail because the token should have expired due to inactivity longer than the configured timeout. 4.4. Additional resources OAuthClient [oauth.openshift.io/v1 ] | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host",
"oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: demo 1 secret: \"...\" 2 redirectURIs: - \"http://www.example.com/\" 3 grantMethod: prompt 4 ')",
"oc edit oauthclient <oauth_client> 1",
"apiVersion: oauth.openshift.io/v1 grantMethod: auto kind: OAuthClient metadata: accessTokenInactivityTimeoutSeconds: 600 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/configuring-oauth-clients |
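Combining the registration and timeout procedures above, a complete OAuthClient object could look like the following sketch. The client name, secret, redirect URI, and timeout value are illustrative only; the fields shown are the ones described in this chapter.

kind: OAuthClient
apiVersion: oauth.openshift.io/v1
metadata:
  name: demo                                # used as the client_id parameter
secret: "demo-secret"                       # used as the client_secret parameter
redirectURIs:
  - "http://www.example.com/"               # requested redirect_uri must equal or be prefixed by this
grantMethod: prompt                         # or auto
accessTokenInactivityTimeoutSeconds: 600    # minimum allowed value is 300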
Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] | Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the desired quota status object Status defines the actual enforced quota and its current usage 3.1.1. .spec Description Spec defines the desired quota Type object Required quota selector Property Type Description quota object Quota defines the desired quota selector object Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. 3.1.2. .spec.quota Description Quota defines the desired quota Type object Property Type Description hard integer-or-string hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector object scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 3.1.3. .spec.quota.scopeSelector Description scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. Type object Property Type Description matchExpressions array A list of scope selector requirements by scope of the resources. matchExpressions[] object A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 3.1.4. .spec.quota.scopeSelector.matchExpressions Description A list of scope selector requirements by scope of the resources. Type array 3.1.5. .spec.quota.scopeSelector.matchExpressions[] Description A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 
Type object Required operator scopeName Property Type Description operator string Represents a scope's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. scopeName string The name of the scope that the selector applies to. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.6. .spec.selector Description Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. Type object Property Type Description annotations undefined (string) AnnotationSelector is used to select projects by annotation. labels `` LabelSelector is used to select projects by label. 3.1.7. .status Description Status defines the actual enforced quota and its current usage Type object Required total Property Type Description namespaces `` Namespaces slices the usage by project. This division allows for quick resolution of deletion reconciliation inside of a single project without requiring a recalculation across all projects. This can be used to pull the deltas for a given project. total object Total defines the actual enforced quota and its current usage across all projects 3.1.8. .status.total Description Total defines the actual enforced quota and its current usage across all projects Type object Property Type Description hard integer-or-string Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used integer-or-string Used is the current observed total usage of the resource in the namespace. 3.2. API endpoints The following API endpoints are available: /apis/quota.openshift.io/v1/clusterresourcequotas DELETE : delete collection of ClusterResourceQuota GET : list objects of kind ClusterResourceQuota POST : create a ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas GET : watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} DELETE : delete a ClusterResourceQuota GET : read the specified ClusterResourceQuota PATCH : partially update the specified ClusterResourceQuota PUT : replace the specified ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} GET : watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status GET : read status of the specified ClusterResourceQuota PATCH : partially update status of the specified ClusterResourceQuota PUT : replace status of the specified ClusterResourceQuota 3.2.1. /apis/quota.openshift.io/v1/clusterresourcequotas HTTP method DELETE Description delete collection of ClusterResourceQuota Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterResourceQuota Table 3.2. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuotaList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterResourceQuota Table 3.3. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 202 - Accepted ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.2. /apis/quota.openshift.io/v1/watch/clusterresourcequotas HTTP method GET Description watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method DELETE Description delete a ClusterResourceQuota Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterResourceQuota Table 3.10. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterResourceQuota Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterResourceQuota Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.4. /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} Table 3.16. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method GET Description watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status Table 3.18. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method GET Description read status of the specified ClusterResourceQuota Table 3.19. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterResourceQuota Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterResourceQuota Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/schedule_and_quota_apis/clusterresourcequota-quota-openshift-io-v1 |
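As a worked counterpart to the schema above, the following sketch shows a ClusterResourceQuota that selects projects by label and enforces hard limits across all of them. The quota name, label key, and limit values are illustrative assumptions only.

apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: blue-team-quota            # illustrative name
spec:
  selector:
    labels:                        # LabelSelector, as described in .spec.selector
      matchLabels:
        team: blue                 # applies to every project labelled team=blue
  quota:
    hard:                          # enforced across all selected projects combined
      pods: "10"
      secrets: "20"

The aggregated usage then appears under .status.total, with a per-project breakdown under .status.namespaces.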
Chapter 39. AdditionalVolume schema reference | Chapter 39. AdditionalVolume schema reference Used in: PodTemplate Property Property type Description name string Name to use for the volume. Required. secret SecretVolumeSource Secret to use to populate the volume. configMap ConfigMapVolumeSource ConfigMap to use to populate the volume. emptyDir EmptyDirVolumeSource EmptyDir to use to populate the volume. persistentVolumeClaim PersistentVolumeClaimVolumeSource PersistentVolumeClaim object to use to populate the volume. csi CSIVolumeSource CSIVolumeSource object to use to populate the volume. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-AdditionalVolume-reference |
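The schema is easiest to read in context. The sketch below assumes an AdditionalVolume entry declared in a pod template's volumes list and mounted through a container template's volumeMounts, as those templates are described elsewhere in this reference; the volume name, Secret name, and mount path are illustrative only.

# Excerpt from a Kafka custom resource (illustrative names and paths)
spec:
  kafka:
    template:
      pod:
        volumes:
          - name: extra-secret-volume        # AdditionalVolume: a name plus one source, here a Secret
            secret:
              secretName: my-extra-secret    # assumed existing Secret
      kafkaContainer:
        volumeMounts:
          - name: extra-secret-volume
            mountPath: /mnt/extra-secret     # additional volumes are typically mounted under /mnt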
Managing and allocating storage resources | Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.16 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 3, Block pools provides you with information on how to create, update and delete block pools. Chapter 4, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 6, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 8, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 9, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 10, Reclaiming space on target volumes shows you how to reclaim the actual available storage space. Chapter 12, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 13, Volume cloning shows you how to create volume clones. Chapter 14, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. Chapter 2. Storage classes The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications. Note Custom storage classes are not supported for external mode OpenShift Data Foundation clusters. 2.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. 
Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 2.2. Storage class for persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class. Note For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 2.2.1. Access configuration for Key Management System (KMS) Based on your use case, you need to configure access to KMS using one of the following ways: Using vaulttokens : allows users to authenticate using a token Using Thales CipherTrust Manager : uses Key Management Interoperability Protocol (KMIP) Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 2.2.1.1. 
Configuring access to KMS using vaulttokens Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Procedure Create a secret in the tenant's namespace. In the OpenShift Container Platform web console, navigate to Workloads -> Secrets . Click Create -> Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. 2.2.1.2. Configuring access to KMS using Thales CipherTrust Manager Prerequisites Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both meta-data and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Procedure To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below: Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 2.2.1.3. Configuring access to KMS using vaulttenantsa Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown in the sketch after this procedure: Procedure You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault . The following instructions create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault . Apply the following YAML to your OpenShift cluster: Create a secret for serviceaccount token and CA certificate. Get the token and the CA certificate from the secret. Retrieve the OpenShift cluster endpoint.
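A minimal sketch of the objects and commands referenced in the vaulttenantsa steps above. The tenant service account name ceph-csi-vault-sa is the default named in this section; the rbd-csi-vault-token-review name, the exact RBAC rules, and the secret name are assumptions and should be adapted to your cluster.

# Service account in the tenant namespace (default name used by OpenShift Data Foundation)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ceph-csi-vault-sa
---
# Service account, ClusterRole, and ClusterRoleBinding that allow token reviews
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-vault-token-review
  namespace: openshift-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-csi-vault-token-review
rules:
  - apiGroups: ["authentication.k8s.io"]
    resources: ["tokenreviews"]
    verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rbd-csi-vault-token-review
subjects:
  - kind: ServiceAccount
    name: rbd-csi-vault-token-review
    namespace: openshift-storage
roleRef:
  kind: ClusterRole
  name: rbd-csi-vault-token-review
  apiGroup: rbac.authorization.k8s.io
---
# Long-lived token secret for the service account
apiVersion: v1
kind: Secret
metadata:
  name: rbd-csi-vault-token-review-token
  namespace: openshift-storage
  annotations:
    kubernetes.io/service-account.name: rbd-csi-vault-token-review
type: kubernetes.io/service-account-token

# Get the token and CA certificate from the secret, and the cluster endpoint
SA_TOKEN=$(oc get secret rbd-csi-vault-token-review-token -n openshift-storage -o jsonpath='{.data.token}' | base64 --decode)
SA_CA_CRT=$(oc get secret rbd-csi-vault-token-review-token -n openshift-storage -o jsonpath='{.data.ca\.crt}' | base64 --decode)
K8S_HOST=$(oc config view --minify --flatten -o jsonpath='{.clusters[0].cluster.server}')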
Use the information collected in the previous steps to set up the kubernetes authentication method in Vault as shown: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . Sample YAML To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. The sample yaml given below can be used to update or create the csi-kms-connection-details ConfigMap: encryptionKMSType Set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress The hostname or IP address of the vault server with the port number. vaultTLSServerName (Optional) The vault TLS server name vaultAuthPath (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath The backend path in Vault where the encryption keys will be stored vaultCAFromSecret The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly.
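A sketch of the csi-kms-connection-details ConfigMap described above. The connection name 1-vault and all address and secret values are placeholders; only the keys correspond to the parameters listed in this section.

apiVersion: v1
kind: ConfigMap
metadata:
  name: csi-kms-connection-details
  namespace: openshift-storage
data:
  1-vault: |-
    {
      "encryptionKMSType": "vaulttenantsa",
      "vaultAddress": "https://vault.example.com:8200",
      "vaultTLSServerName": "vault.example.com",
      "vaultAuthPath": "/v1/auth/kubernetes/login",
      "vaultAuthNamespace": "",
      "vaultNamespace": "",
      "vaultBackendPath": "secret/",
      "vaultCAFromSecret": "vault-ca-secret",
      "vaultClientCertFromSecret": "vault-client-cert",
      "vaultClientCertKeyFromSecret": "vault-client-key",
      "tenantSAName": "ceph-csi-vault-sa"
    }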
2.2.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault [Technology preview]: Ensure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Provider and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault (Technology preview) (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload the Certificate file in .PEM format. The certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab.
Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 2.2.2.1. Overriding Vault connection details using tenant ConfigMap The Vault connection details can be reconfigured per tenant by creating a ConfigMap in the OpenShift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace will override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads -> ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overridden for the given tenant namespace can be specified under the data section as shown below: After the yaml is edited, click on Create . 2.3. Storage class with single replica You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management on the application level. Warning Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication. If any OSDs are lost, this feature requires very disruptive steps to recover. All applications can lose their data, and must be recreated in case of a failed OSD. Procedure Enable the single replica feature using the following command: Verify storagecluster is in Ready state: Example output: New cephblockpools are created for each failure domain. Verify cephblockpools are in Ready state: Example output: Verify new storage classes have been created: Example output: New OSD pods are created; 3 osd-prepare pods and 3 additional pods. Verify new OSD pods are in Running state: Example output: 2.3.1. Recovering after OSD lost from single replica When using replica 1, a storage class with a single replica, data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after OSD loss. Procedure Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the domain where the failing OSD is. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the next steps. If you do not know where the failing OSD is, skip to step 2. Example output: Copy the corresponding failure domain name for use in the next steps, then skip to step 4.
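A sketch of the command and output referred to in the step above. The pool names shown are illustrative; replica-1 pools typically follow an ocs-storagecluster-cephblockpool-<failure-domain> naming pattern, which may differ in your cluster.

# List the CephBlockPools and note the replica-1 pool for the failing failure domain
oc get cephblockpool -n openshift-storage

# Illustrative output (names and failure domains are assumptions):
# NAME                                          PHASE
# ocs-storagecluster-cephblockpool              Ready
# ocs-storagecluster-cephblockpool-us-east-1a   Ready
# ocs-storagecluster-cephblockpool-us-east-1b   Ready
# ocs-storagecluster-cephblockpool-us-east-1c   Ready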
Find the OSD pod that is in Error state or CrashLoopBackoff state to find the failing OSD: Identify the replica-1 pool that had the failed OSD. Identify the node where the failed OSD was running: Identify the failureDomainLabel for the node where the failed OSD was running: The output shows the replica-1 pool name whose OSD is failing, for example: where $failure_domain_value is the failureDomainName. Delete the replica-1 pool. Connect to the toolbox pod: Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example: Replace replica1-pool-name with the failure domain name identified earlier. Purge the failing OSD by following the steps in section "Replacing operational or failed storage devices" based on your platform in the Replacing devices guide. Restart the rook-ceph operator: Recreate any affected applications in that availability zone to start using the new pool with the same name. Chapter 3. Block pools The OpenShift Data Foundation operator installs a default set of storage pools depending on the platform in use. These default storage pools are owned and controlled by the operator and they cannot be deleted or modified. Note Multiple block pools are not supported for external mode OpenShift Data Foundation clusters. 3.1. Managing block pools in internal mode With OpenShift Container Platform, you can create multiple custom storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. 3.1.1. Creating a block pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the BlockPools tab. Click Create Block Pool . Enter Pool name . Note Using 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Select Data protection policy as either 2-way Replication or 3-way Replication . Optional: Select Enable compression checkbox if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Create . 3.1.2. Updating an existing pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click BlockPools . Click the Action Menu (...) at the end of the pool you want to update. Click Edit Block Pool . Modify the form details as follows: Note Using 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Change the Data protection policy to either 2-way Replication or 3-way Replication. Enable or disable the compression option. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Save . 3.1.3.
Deleting a pool Use this procedure to delete a pool in OpenShift Data Foundation. Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the BlockPools tab. Click the Action Menu (...) at the end of the pool you want to delete. Click Delete Block Pool . Click Delete to confirm the removal of the Pool. Note A pool cannot be deleted when it is bound to a PVC. You must detach all the resources before performing this activity. Chapter 4. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 4.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to OpenShift Data Foundation Persistent Volume for vSphere and Bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim .
From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration -> Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) -> Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 4.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as backend storage for the OCP image registry, follow the steps mentioned in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create ObjectBucketClaim by following the steps in Creating Object Bucket Claim . Create an image-registry-private-configuration-user secret. Go to the OpenShift web console. Click ObjectBucketClaim -> ObjectBucketClaim Data . In the ObjectBucketClaim data , look for MCG access key and MCG secret key in the openshift-image-registry namespace . Create the secret using the following command: Change the status of managementState of Image Registry Operator to Managed . Edit the spec.storage section of Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console OR you can also get the information on regionEndpoint and unique-bucket-name from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storageclass is ceph-rgw storageclass and the endpoint points to the internal SVC from the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output (Optional) You can also run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output 4.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack.
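The procedure in this section defines a cluster-monitoring-config ConfigMap. A sketch of what it might look like, assuming the ocs-storagecluster-ceph-rbd storage class and the example retention and size values used in the text; adjust the values to your environment.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h                 # example value
      volumeClaimTemplate:
        metadata:
          name: ocs-prometheus-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 40Gi          # example value
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: ocs-alertmanager-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 40Gi          # example value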
Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads -> Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 4.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads -> Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 4.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 4.3. Persistent Volume Claims attached to prometheus-k8s-* pod 4.4. Overprovision level policy control Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota . For more information see, OpenShift ClusterResourceQuota . 
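The overprovision control described above is configured through a label on the application namespace and an overprovisionControl entry in the StorageCluster spec. The following is a hedged sketch using the example values that appear in the procedure below; verify the exact field layout against the StorageCluster CRD in your cluster (oc explain storagecluster.spec).

# Label the application namespace (example values from the procedure below)
oc label namespace quota-rbd storagequota=storagequota1

# Fragment of the StorageCluster spec with an overprovision control entry
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  overprovisionControl:
    - capacity: 27Ti                                   # <desired_quota_limit>
      storageClassName: ocs-storagecluster-ceph-rbd    # <storage_class_name>
      quotaName: quota1                                # <desired_quota_name>
      selector:
        labels:
          matchLabels:
            storagequota: storagequota1                # <desired_label>

# Verify that the resulting quota exists
oc get clusterresourcequota -A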
With overprovision control, a ClusterResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec : <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Save the modified storagecluster . Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the previous step, for example, quota1 . 4.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 4.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example, you can specify that each data node in the cluster is bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage; a sketch of such a configuration follows at the end of this section. Each primary shard will be backed by a single replica. A copy of each shard is replicated across the nodes and is always available, and the copy can be recovered if at least two nodes exist, due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging .
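A sketch of the storage settings described above, shown as a fragment of a ClusterLogging custom resource. The instance name and node count are assumptions; the storage class name and size follow the example in the text.

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                       # assumption; size for your environment
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G                       # 200GiB per data node, as described above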
Note Omission of the storage block will result in a deployment backed by default storage. For more information, see Configuring cluster logging . 4.5.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add a toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 4.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workloads -> Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid a PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the default index data retention to 5 days. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. Chapter 5. Creating Multus networks OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods.
To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 5.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Requirements for Multus configuration . You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation. This is the reason you must create the NAD before you create the Storage Cluster. Note Network attachment definitions can only use the whereabouts IP address management (IPAM), and must specify the range field. ipRanges and plugin chaining are not supported. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ). Chapter 6. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads -> Deployments .
In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads -> Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads -> Deployments . Click Workloads -> Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.15 is installed and running on the OpenShift Container Platform version 4.16 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. 
Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using the following command: Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads -> Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) -> Edit Secret Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. On the Overview -> Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads -> Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. 
If you added the Ceph Object Gateway for object storage: Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. Click the Object tab and confirm Object Service and Data resiliency have a green tick indicating it is healthy. Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 8.3, "Manual creation of infrastructure nodes" section for more information. 8.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services (see the sketch after Section 8.2 below): 8.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods.
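Hedged sketches of the label and taint described in Section 8.1, of the matching entries in a Machine Set template (Section 8.2), and of the equivalent manual commands used when the Machine API is not supported (Section 8.3). Object names are illustrative, and the cluster.ocs.openshift.io/openshift-storage label is an assumption commonly used alongside the taint rather than a requirement stated above.

# Labels and taint expected on an OpenShift Data Foundation infrastructure node
apiVersion: v1
kind: Node
metadata:
  name: example-infra-node                           # illustrative name
  labels:
    node-role.kubernetes.io/worker: ""
    node-role.kubernetes.io/infra: ""
    cluster.ocs.openshift.io/openshift-storage: ""   # assumption, see lead-in
spec:
  taints:
    - key: node.ocs.openshift.io/storage
      value: "true"
      effect: NoSchedule

# Fragment of a Machine Set template carrying the same labels and taint
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-infra-machineset                     # illustrative name
spec:
  template:
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      taints:
        - key: node.ocs.openshift.io/storage
          value: "true"
          effect: NoSchedule

# Equivalent manual commands when the Machine API is not supported (Section 8.3)
oc label node <node-name> node-role.kubernetes.io/infra=""
oc adm taint node <node-name> node.ocs.openshift.io/storage="true":NoSchedule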
For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 8.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 8.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute -> Nodes , and then select the node which has to be tainted. In the Details page click on Edit taints . Enter the values in the Key <node.ocs.openshift.io/storage>, Value <true> and in the Effect <NoSchedule> field. Click Save. Verification steps Follow the steps to verify that the node has been tainted successfully: Navigate to Compute -> Nodes . Select the node to verify its status, and then click on the YAML tab. In the specs section check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . Chapter 9. Managing Persistent Volume Claims 9.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod.
Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) -> Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save . For example: Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 9.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage -> Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 9.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Overview -> Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage -> Persistent Volume Claims Search for the required PVC using the Filter textbox. Click on the PVC name and navigate to Events Address the events as required or as directed. 9.4. Expanding Persistent Volume Claims OpenShift Data Foundation 4.6 onwards has the ability to expand Persistent Volume Claims providing more flexibility in the management of persistent storage resources. Expansion is supported for the following Persistent Volumes: PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph File System (CephFS) for volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Block . PVC with ReadWriteOncePod (RWOP) that is based on Ceph File System (CephFS) or Network File System (NFS) for volume mode Filesystem . PVC with ReadWriteOncePod (RWOP) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . With RWOP access mode, you mount the volume as read-write by a single pod on a single node. Note PVC expansion is not supported for OSD, MON and encrypted PVCs. Prerequisites Administrator access to OpenShift Web Console. Procedure In OpenShift Web Console, navigate to Storage -> Persistent Volume Claims . Click the Action Menu (...) to the Persistent Volume Claim you want to expand. Click Expand PVC : Select the new size of the Persistent Volume Claim, then click Expand : To verify the expansion, navigate to the PVC's details page and verify the Capacity field has the correct size requested. 
Note When expanding PVCs based on Ceph RADOS Block Devices (RBDs), if the PVC is not already attached to a pod the Condition type is FileSystemResizePending in the PVC's details page. Once the volume is mounted, filesystem resize succeeds and the new size is reflected in the Capacity field. 9.5. Dynamic provisioning 9.5.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning or both provisioning types. 9.5.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block . Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem . Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem . Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS,NFS and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file. 9.5.3. 
Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plug-in name Notes OpenStack Cinder kubernetes.io/cinder AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. AWS Elastic File System (EFS) Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Red Hat Virtualization csi.ovirt.org Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. Chapter 10. Reclaiming space on target volumes Deleted files or chunks of zero data sometimes take up storage space on the Ceph cluster, resulting in inaccurate reporting of the available storage space. The reclaim space operation removes such discrepancies by executing the following operations on the target volume: fstrim - This operation is executed on volumes that are in Filesystem mode and only if the volume is mounted to a pod at the time of execution of the reclaim space operation. rbd sparsify - This operation is executed when the volume is not attached to any pods and reclaims the space occupied by chunks of 4M-sized zeroed data. Note The reclaim space operation is supported only by Ceph RBD volumes. The reclaim space operation involves a performance penalty when it is being executed. You can use one of the following methods to reclaim the space: Enabling reclaim space operation using Annotating PersistentVolumeClaims (Recommended method to use for enabling reclaim space operation) Enabling reclaim space operation using ReclaimSpaceJob Enabling reclaim space operation using ReclaimSpaceCronJob 10.1. Enabling reclaim space operation using Annotating PersistentVolumeClaims Use this procedure to annotate PersistentVolumeClaims so that they can invoke the reclaim space operation automatically based on a given schedule. Note The schedule value is in the same format as Kubernetes CronJobs , and it sets the time and/or interval of the recurring operation request. The recommended schedule interval is @weekly . If the schedule interval value is empty or in an invalid format, then the default schedule value is set to @weekly . Do not schedule multiple ReclaimSpace operations @weekly or at the same time. The minimum supported interval between each scheduled operation is at least 24 hours. For example, @daily (At 00:00 every day) or 0 3 * * * (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when the workload input/output is expected to be low. ReclaimSpaceCronJob is recreated when the schedule is modified.
It is automatically deleted when the annotation is removed. Procedure Get the persistent volume claim (PVC) details. Add the annotation reclaimspace.csiaddons.openshift.io/schedule=@monthly to the PVC to create a reclaimspacecronjob . Verify that the reclaimspacecronjob is created in the format, "<pvc-name>-xxxxxxx" . Modify the schedule to run this job automatically. Verify that the schedule for the reclaimspacecronjob has been modified. 10.2. Enabling reclaim space operation using ReclaimSpaceJob ReclaimSpaceJob is a namespaced custom resource (CR) designed to invoke the reclaim space operation on the target volume. This is a one-time method that immediately starts the reclaim space operation. You have to repeat the creation of the ReclaimSpaceJob CR to repeat the reclaim space operation when required. Note The recommended interval between reclaim space operations is weekly . Ensure that the minimum interval between each operation is at least 24 hours . Schedule the reclaim space operation during off-peak, maintenance window, or when the workload input/output is expected to be low. Procedure Create and apply the following custom resource for the reclaim space operation: where, target Indicates the volume target on which the operation is performed. persistentVolumeClaim Name of the PersistentVolumeClaim . backOffLimit Specifies the maximum number of retries before marking the reclaim space operation as failed . The default value is 6 . The allowed maximum and minimum values are 60 and 0 respectively. retryDeadlineSeconds Specifies the duration in seconds within which the operation might retry, relative to the start time. The value must be a positive integer. The default value is 600 seconds and the allowed maximum value is 1800 seconds. timeout Specifies the timeout in seconds for the grpc request sent to the CSI driver. If the timeout value is not specified, it defaults to the value of the global reclaimspace timeout. The minimum allowed value for timeout is 60. Delete the custom resource after completion of the operation. 10.3. Enabling reclaim space operation using ReclaimSpaceCronJob ReclaimSpaceCronJob invokes the reclaim space operation based on the given schedule such as daily, weekly, and so on. You have to create ReclaimSpaceCronJob only once for a persistent volume claim. The CSI-addons controller creates a ReclaimSpaceJob at the requested time and interval with the schedule attribute. Note The recommended schedule interval is @weekly . The minimum interval between each scheduled operation should be at least 24 hours. For example, @daily (At 00:00 every day) or "0 3 * * *" (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when workload input/output is expected to be low. Procedure Create and apply the following custom resource for the reclaim space operation, where: concurrencyPolicy Describes the changes when a new ReclaimSpaceJob is scheduled by the ReclaimSpaceCronJob , while a ReclaimSpaceJob is still running. The default Forbid prevents starting a new job, whereas Replace can be used to delete the running job, potentially in a failure state, and create a new one. failedJobsHistoryLimit Specifies the number of failed ReclaimSpaceJobs that are kept for troubleshooting. jobTemplate Specifies the ReclaimSpaceJob.spec structure that describes the details of the requested ReclaimSpaceJob operation. successfulJobsHistoryLimit Specifies the number of successful ReclaimSpaceJob operations.
schedule Specifies the time and/or interval of the recurring operation request. It is in the same format as Kubernetes CronJobs . Delete the ReclaimSpaceCronJob custom resource when execution of the reclaim space operation is no longer needed or when the target PVC is deleted. 10.4. Customising timeouts required for Reclaim Space Operation Depending on the RBD volume size and its data pattern, the Reclaim Space Operation might fail with the context deadline exceeded error. You can avoid this by increasing the timeout value. The following example shows the failed status by inspecting -o yaml of the corresponding ReclaimSpaceJob : Example You can also set custom timeouts at the global level by creating the following configmap : Example Restart the csi-addons operator pod. All Reclaim Space Operations started after the above configmap creation use the customized timeout. Chapter 11. Finding and cleaning stale subvolumes (Technology Preview) Sometimes stale subvolumes do not have a corresponding k8s reference attached. These subvolumes are of no use and can be deleted. You can find and delete stale subvolumes using the ODF CLI tool. Important Deleting stale subvolumes using the ODF CLI tool is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . Prerequisites Download the ODF CLI tool from the customer portal . Procedure Find the stale subvolumes by using the --stale flag with the subvolumes command: Example output: Delete the stale subvolumes: Replace <subvolumes> with a comma-separated list of subvolumes from the output of the first command. The subvolumes must be of the same filesystem and subvolumegroup. Replace <filesystem> and <subvolumegroup> with the filesystem and subvolumegroup from the output of the first command. For example: Example output: Chapter 12. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. Volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes and they cannot be deleted or modified. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note Persistent Volume encryption now supports volume snapshots. 12.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure that you stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it.
For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) -> Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 12.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore a volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations.
Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 12.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class that is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . From Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed. Chapter 13. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 13.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) -> Clone PVC . Click on the PVC that you want to clone and click Actions -> Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone. The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC. Click Clone . You are redirected to the new PVC details page.
Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. Chapter 14. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Note Ensure that all non-string values in the Tolerations value field have double quotation marks. For example, the value true , which is of type boolean, and 1 , which is of type int, must be input as "true" and "1". Restart the rook-ceph-operator if the csi-cephfsplugin- * and csi-rbdplugin- * pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin- * and csi-rbdplugin- * pods are running on the infra nodes. Chapter 15. Creating exports using NFS This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster. Follow the instructions below to create exports and access them externally from the OpenShift Cluster: Section 15.1, "Enabling the NFS feature" Section 15.2, "Creating NFS exports" Section 15.3, "Consuming NFS exports in-cluster" Section 15.4, "Consuming NFS exports externally from the OpenShift cluster" 15.1. Enabling the NFS feature To use the NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface. Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. The OpenShift Data Foundation installation includes a CephFilesystem. Procedure Run the following command to enable the NFS feature from the CLI: Verification steps NFS installation and configuration is complete when the following conditions are met: The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready . Check if all the csi-nfsplugin-* pods are running: Output has multiple pods. For example: 15.2. Creating NFS exports NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs in two ways: Create an NFS PVC using a YAML file. The following is an example PVC. Note volumeMode: Block will not work for NFS volumes. <desired_name> Specify a name for the PVC, for example, my-nfs-export . The export is created once the PVC reaches the Bound state. Create NFS PVCs from the OpenShift Container Platform web console. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster. Procedure In the OpenShift Web Console, click Storage -> Persistent Volume Claims Set the Project to openshift-storage . Click Create PersistentVolumeClaim . Specify Storage Class , ocs-storagecluster-ceph-nfs . Specify the PVC Name , for example, my-nfs-export . Select the required Access Mode . Specify a Size as per application requirement. Select Volume mode as Filesystem .
Note: Block mode is not supported for NFS PVCs. Click Create and wait until the PVC is in Bound status. 15.3. Consuming NFS exports in-cluster Kubernetes application pods can consume NFS exports by mounting a previously created PVC. You can mount the PVC in one of two ways: Using a YAML: Below is an example pod that uses the example PVC created in Section 15.2, "Creating NFS exports" : <pvc_name> Specify the PVC you have previously created, for example, my-nfs-export . Using the OpenShift Container Platform web console. Procedure On the OpenShift Container Platform web console, navigate to Workloads -> Pods . Click Create Pod to create a new application pod. Under the metadata section, add a name. For example, nfs-export-example , with namespace as openshift-storage . Under the spec: section, add containers: section with image and volumeMounts sections: For example: Under the spec: section, add volumes: section to add the NFS PVC as a volume for the application pod: For example: 15.4. Consuming NFS exports externally from the OpenShift cluster NFS clients outside of the OpenShift cluster can mount NFS exports created by a previously-created PVC. Procedure After the nfs flag is enabled, a single-server CephNFS is deployed by Rook. You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the next step: For example: Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation. Replace <my-nfs> with the value you got in step 1. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step. Get the share path from the PV. Get the name of the PV associated with the NFS export's PVC: Replace <pvc_name> with your own PVC name. For example: Use the PV name obtained previously to get the NFS export's share path: Get an ingress address for the NFS server. A service's ingress status may have multiple addresses. Choose the one you want to use for external clients. In the example below, there is only a single address: the host name ingress-id.somedomain.com . Connect the external client using the share path and ingress address from the previous steps. The following example mounts the export to the client's directory path /export/mount/path : If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server. Chapter 16. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation, cdi.kubevirt.io/clone-strategy=copy for any of the encrypted RBD storage classes that were previously created before updating to the OpenShift Data Foundation version 4.14. This enables customer data integration (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created.
When cloning an encrypted volume to a new namespace, such as when provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 17. Enabling faster client IO or recovery IO during OSD backfill During a maintenance window, you may want to favor either client IO or recovery IO. Favoring recovery IO over client IO will significantly reduce OSD recovery time. The valid recovery profile options are balanced , high_client_ops , and high_recovery_ops . Set the recovery profile using the following procedure. Prerequisites Download the odf-cli tool from the customer portal . Procedure Check the current recovery profile: Modify the recovery profile: Replace option with either balanced , high_client_ops , or high_recovery_ops . Verify the updated recovery profile: Chapter 18. Setting Ceph OSD full thresholds You can set Ceph OSD full thresholds using the ODF CLI tool or by updating the StorageCluster CR. 18.1. Setting Ceph OSD full thresholds using the ODF CLI tool You can set Ceph OSD full thresholds temporarily by using the ODF CLI tool. This is necessary in cases when the cluster gets into a full state and the thresholds need to be immediately increased. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Use the set command to adjust Ceph full thresholds. The set command supports the subcommands full , backfillfull , and nearfull . See the following examples for how to use each subcommand. full This subcommand allows updating the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, set the Ceph OSD full ratio to 0.9 and then add capacity: For instructions to add capacity for your specific use case, see the Scaling storage guide . If OSDs continue to be stuck or pending , or do not come up at all: Stop all IOs. Increase the full ratio to 0.92 : Wait for the cluster rebalance to happen. Once the cluster rebalance is complete, change the full ratio back to its original value of 0.85: backfillfull This subcommand allows updating the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull This subcommand allows updating the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 : 18.2. Setting Ceph OSD full thresholds by updating the StorageCluster CR You can set Ceph OSD full thresholds by updating the StorageCluster CR. Use this procedure if you want to override the default settings. Procedure You can update the StorageCluster CR to change the settings for full , backfillfull , and nearfull .
full Use the following command to update the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, to set the Ceph OSD full ratio to 0.9 : backfillfull Use the following command to set the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull Use the following command to set the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 : | [
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF",
"apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rbd-csi-vault-token-review-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: \"rbd-csi-vault-token-review\" type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>",
"apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details",
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }",
"--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephNonResilientPools/enable\", \"value\": true }]'",
"oc get storagecluster",
"NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 10m Ready 2024-02-05T13:56:15Z 4.15.0",
"oc get cephblockpools",
"NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-east-1a Ready ocs-storagecluster-cephblockpool-us-east-1b Ready ocs-storagecluster-cephblockpool-us-east-1c Ready",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 104m gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m gp3-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m ocs-storagecluster-ceph-non-resilient-rbd openshift-storage.rbd.csi.ceph.com Delete WaitForFirstConsumer true 46m ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate true 52m ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 52m openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 50m",
"oc get pods | grep osd",
"rook-ceph-osd-0-6dc76777bc-snhnm 2/2 Running 0 9m50s rook-ceph-osd-1-768bdfdc4-h5n7k 2/2 Running 0 9m48s rook-ceph-osd-2-69878645c4-bkdlq 2/2 Running 0 9m37s rook-ceph-osd-3-64c44d7d76-zfxq9 2/2 Running 0 5m23s rook-ceph-osd-4-654445b78f-nsgjb 2/2 Running 0 5m23s rook-ceph-osd-5-5775949f57-vz6jp 2/2 Running 0 5m22s rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv 0/1 Completed 0 10m",
"oc get cephblockpools",
"NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-south-1 Ready ocs-storagecluster-cephblockpool-us-south-2 Ready ocs-storagecluster-cephblockpool-us-south-3 Ready",
"oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\\|Error'",
"failed_osd_id=0 #replace with the ID of the failed OSD",
"failure_domain_label=USD(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel |head -1 |awk -F':' '{print USD2}')",
"failure_domain_value=USD\"(oc get pods USDfailed_osd_id -oyaml |grep topology-location-zone |awk '{print USD2}')\"",
"replica1-pool-name= \"ocs-storagecluster-cephblockpool-USDfailure_domain_value\"",
"toolbox=USD(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}') rsh USDtoolbox -n openshift-storage",
"ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it",
"oc delete pod -l rook-ceph-operator -n openshift-storage",
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'",
"oc describe noobaa",
"oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d",
"oc describe pod <image-registry-name>",
"oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>",
"oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>",
"apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]",
"oc get clusterresourcequota -A oc describe clusterresourcequota -A",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.200.0/24\", \"routes\": [ {\"dst\": \"NODE_IP_CIDR\"} ] } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'",
"get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix",
"caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]",
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"oc get pvc data-pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO ocs-storagecluster-ceph-rbd 20h",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@monthly\"",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @monthly 3s",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@weekly\" --overwrite=true",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 @weekly 3s",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceJob metadata: name: sample-1 spec: target: persistentVolumeClaim: pvc-1 timeout: 360",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceCronJob metadata: name: reclaimspacecronjob-sample spec: jobTemplate: spec: target: persistentVolumeClaim: data-pvc timeout: 360 schedule: '@weekly' concurrencyPolicy: Forbid",
"Status: Completion Time: 2023-03-08T18:56:18Z Conditions: Last Transition Time: 2023-03-08T18:56:18Z Message: Failed to make controller request: context deadline exceeded Observed Generation: 1 Reason: failed Status: True Type: Failed Message: Maximum retry limit reached Result: Failed Retries: 6 Start Time: 2023-03-08T18:33:55Z",
"apiVersion: v1 kind: ConfigMap metadata: name: csi-addons-config namespace: openshift-storage data: \"reclaim-space-timeout\": \"6m\"",
"delete po -n openshift-storage -l \"app.kubernetes.io/name=csi-addons\"",
"odf subvolume ls --stale",
"Filesystem Subvolume Subvolumegroup State ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale",
"odf subvolume delete <subvolumes> <filesystem> <subvolumegroup>",
"odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi",
"Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted",
"oc edit configmap rook-ceph-operator-config -n openshift-storage",
"oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml",
"apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] kind: ConfigMap metadata: [...]",
"oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>",
"oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted",
"oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'",
"-n openshift-storage describe cephnfs ocs-storagecluster-cephnfs",
"-n openshift-storage get pod | grep csi-nfsplugin",
"csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export",
"oc get pods -n openshift-storage | grep rook-ceph-nfs",
"oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs",
"oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs",
"apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a",
"oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215",
"oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]",
"mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path",
"odf get recovery-profile",
"odf set recovery-profile <option>",
"odf get recovery-profile",
"odf set full 0.9",
"odf set full 0.92",
"odf set full 0.85",
"odf set backfillfull 0.85",
"odf set nearfull 0.8",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/fullRatio\", \"value\": 0.90 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/backfillFullRatio\", \"value\": 0.85 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/nearFullRatio\", \"value\": 0.8 }]'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html-single/managing_and_allocating_storage_resources/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/deploying_and_upgrading_amq_streams_on_openshift/making-open-source-more-inclusive |
Chapter 60. Salesforce Source | Chapter 60. Salesforce Source Receive updates from Salesforce. 60.1. Configuration Options The following table summarizes the configuration options available for the salesforce-source Kamelet: Property Name Description Type Default Example clientId * Consumer Key The Salesforce application consumer key string clientSecret * Consumer Secret The Salesforce application consumer secret string password * Password The Salesforce user password string query * Query The query to execute on Salesforce string "SELECT Id, Name, Email, Phone FROM Contact" topicName * Topic Name The name of the topic/channel to use string "ContactTopic" userName * Username The Salesforce username string loginUrl Login URL The Salesforce instance login URL string "https://login.salesforce.com" Note Fields marked with an asterisk (*) are mandatory. 60.2. Dependencies At runtime, the salesforce-source Kamelet relies upon the presence of the following dependencies: camel:jackson camel:salesforce mvn:org.apache.camel.k:camel-k-kamelet-reify camel:kamelet 60.3. Usage This section describes how you can use the salesforce-source . 60.3.1. Knative Source You can use the salesforce-source Kamelet as a Knative source by binding it to a Knative object. salesforce-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-source properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" query: "SELECT Id, Name, Email, Phone FROM Contact" topicName: "ContactTopic" userName: "The Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 60.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 60.3.1.2. Procedure for using the cluster CLI Save the salesforce-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f salesforce-source-binding.yaml 60.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 60.3.2. Kafka Source You can use the salesforce-source Kamelet as a Kafka source by binding it to a Kafka topic. salesforce-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-source properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" query: "SELECT Id, Name, Email, Phone FROM Contact" topicName: "ContactTopic" userName: "The Username" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 60.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 60.3.2.2. 
Procedure for using the cluster CLI Save the salesforce-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f salesforce-source-binding.yaml 60.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind salesforce-source -p "source.clientId=The Consumer Key" -p "source.clientSecret=The Consumer Secret" -p "source.password=The Password" -p "source.query=SELECT Id, Name, Email, Phone FROM Contact" -p "source.topicName=ContactTopic" -p "source.userName=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 60.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/salesforce-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-source properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" query: \"SELECT Id, Name, Email, Phone FROM Contact\" topicName: \"ContactTopic\" userName: \"The Username\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f salesforce-source-binding.yaml",
"kamel bind salesforce-source -p \"source.clientId=The Consumer Key\" -p \"source.clientSecret=The Consumer Secret\" -p \"source.password=The Password\" -p \"source.query=SELECT Id, Name, Email, Phone FROM Contact\" -p \"source.topicName=ContactTopic\" -p \"source.userName=The Username\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-source properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" query: \"SELECT Id, Name, Email, Phone FROM Contact\" topicName: \"ContactTopic\" userName: \"The Username\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f salesforce-source-binding.yaml",
"kamel bind salesforce-source -p \"source.clientId=The Consumer Key\" -p \"source.clientSecret=The Consumer Secret\" -p \"source.password=The Password\" -p \"source.query=SELECT Id, Name, Email, Phone FROM Contact\" -p \"source.topicName=ContactTopic\" -p \"source.userName=The Username\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/salesforce-source |
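After you apply either binding, you can optionally confirm that it is running. The following is a minimal verification sketch, not part of the Kamelet reference above: it assumes the binding name salesforce-source-binding from the examples, that the kamel CLI is installed, and that the integration created from the binding shares its name.

oc get kameletbinding salesforce-source-binding   # check that the binding was created in the current namespace
kamel logs salesforce-source-binding              # follow the logs of the integration created from the binding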
Chapter 7. Uninstalling a cluster on Azure Stack Hub | Chapter 7. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure_stack_hub/uninstalling-cluster-azure-stack-hub |
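For example, assuming the installation files were kept in a local directory named azurestack-cluster (an illustrative name, not taken from the chapter above), a complete removal might look like the following:

./openshift-install destroy cluster --dir azurestack-cluster --log-level info   # destroy the cluster resources on Azure Stack Hub
rm -rf azurestack-cluster ./openshift-install                                   # optional: remove the installation files and the installer binary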
Chapter 30. Submitting your Helm chart for certification | Chapter 30. Submitting your Helm chart for certification After configuring and setting up your Helm chart component on the Red Hat Partner Connect , submit your Helm charts for certification by creating a pull request to the Red Hat's OpenShift Helm chart repository . In the pull request, you can either include your chart or the report generated by the chart-verifier tool or both. Based on the content of your pull request, the chart will be certified, and the chart-verifier will run if a report is not provided. Prerequisites Before creating a pull request, ensure to have the following prerequisites: Fork the Red Hat's OpenShift Helm chart repository and clone it to your local system. Here, you can see a directory already created for your company under the partner's directory. Note The directory name is the same as the container registry namespace that you set while certifying your containers. Within your company's directory, there will be a subdirectory for each chart certification component you created in the step. To verify if this is set up correctly, review the OWNERS file. The OWNERS file is automatically created in your chart directory within your organization directory. It contains information about your component, including the GitHub users authorized to certify Helm charts on behalf of your company. You can locate the file at the location charts/partners/acme/awesome/OWNERS . If you want to edit the GitHub user details, navigate to the Settings page. For example, if your organization name is acme and the chart name is awesome . The content of the OWNERS file is as follows: The name of the chart that you are submitting must match the value in the OWNERS file. Before submitting the Helm chart source or the Helm chart verification report, create a directory with its version number. For example, if you are publishing the 0.1.0 version of the awesome chart, create a directory as follows: Note For charts that represent a product supported by Red Hat, submit the pull request to the main branch with the OWNERS file located at the charts, redhat directory available in your organization directory. For example, for a Red Hat chart named awesome, submit your pull request to the main branch located at charts/redhat/redhat/awesome/OWNERS . Note that for Red Hat supported components, your organization name is also redhat. Procedure You can submit your Helm chart for certification by using three methods: Submit a Helm chart without the chart verification report Submit a chart verification report without the Helm chart Submit a chart verification report along with the Helm chart 30.1. Submitting a Helm chart without the chart verification report You can submit your Helm chart for certification without the chart verification report in two different formats: 30.1.1. Chart as a tarball If you want to submit your Helm chart as a tarball, you can create a tarball of your Helm chart using the Helm package command and place it directly in the 0.1.0 directory. For example, if your Helm chart is awesome for an organization acme 30.1.2. Chart in a directory If you want to submit your Helm chart in a directory, place your Helm chart in a directory with the chart source. If you have signed the chart, place the providence file in the same directory. You can include a base64 encoded public key for the chart in the OWNERS file. 
When a base64 encoded public key is present, the key will be decoded and specified when the chart-verifier is used to create a report for the chart. If the public key does not match the chart, the verifier report will include a check failure, and the pull request will end with an error. If the public key matches with the chart and there are no other failures, a release will be created, which will include the tarball, the providence file, the public key file, and the generated report. For example, If the OWNERS file does not include the public key, the chart verifier check is skipped and will not affect the outcome of the pull request. Further, the public key file will not be included in the release. If the chart is a directory with the chart source, create a src directory to place the chart source. For example, A Path can be charts/partners/acme/awesome/0.1.0/src/ And the file structure can be 30.2. Submitting a chart verification report without the Helm chart Generate the report using the chart-verifier tool and save it with a file name report.yaml in the directory 0.1.0. You can submit two types of reports: 30.2.1. For submitting a signed report Before submitting your report for certification, you can add a PGP public key to the chart verification report. Adding a PGP public key is optional. When you add it to your report, you can find your public key in the OWNER S file under your chart directory within your organization directory. The PGP public key is available in the publicPgpKey attribute. The value of this attribute must follow ASCII armor format . When submitting a chart verification report without the chart, you can sign your report and save the signature in ASCII armor format . For example, Note You can see a warning message on the console if the signature verification fails. 30.2.2. For submitting a report for a signed chart For submitting the chart verification report for a signed chart, when you provide a PGP public key to the chart verifier tool while generating the report, it includes a digest of the key along with the report. Also, when you include a base64 encoded PGP public key to the OWNERS file, a check is made to confirm if the digest of the decoded key in the OWNERS file matches the key digest in the report. When they do not match, the pull request fails. But if the key digest matches with the report and there are no other errors when processing the pull request, a release is generated containing the public key and the report. For example, Note A release is not generated if you have enabled the provider control delivery. 30.3. Submitting a chart verification report along with the Helm chart You can also submit a chart along with the report. Follow Submitting a Chart without Chart Verification Report procedure and place the source or tarball in the version number directory. Similarly, follow the steps in Submitting a Chart Verification Report without the Chart and place the report.yaml file in the same version number directory. 30.3.1. For submitting a signed report You can sign the report and submit for verification. You can see a warning message on the console if the signature verification fails. For more information, see, 'For submitting a signed report' section of Submitting a Chart Verification Report without the Chart . 30.3.2. For submitting a signed Helm chart For a signed chart you must include a tarball and a providence file in addition to the report file. 
For more information, see, 'For submitting a report for a signed chart' section of Submitting a Chart Verification Report without the Chart . 30.4. Summary of certification submission options Follow the table that summarizes the scenarios for submitting your Helm charts for certification, depending on how you want to access your chart and also to check whether the chart tests have some dependencies on your local environment. Objective Include Helm chart Include chart verification report Red Hat certification outcome Methods to publish your certified Helm chart If you want to perform the following actions: Store your certified chart at charts.openshift.io . Take advantage of Red Hat CI for ongoing chart tests Yes No The chart-verifier tool is executed in the Red Hat CI environment to ensure compliance. Your customers can download the certified Helm charts from charts.openshift.io . If you want to perform the following actions: Store your certified chart at charts.openshift.io . Aim to test your chart in your own environment since it has some external dependencies. Yes Yes The Red Hat certification team reviews the results to ensure compliance. Your customers can download the certified Helm charts from charts.openshift.io . If you don't want to store your certified charts at charts.openshift.io . No Yes The Red Hat certification team reviews the results to ensure compliance. Your customers can download the certified Helm chart from your designated Helm chart repository. A corresponding entry is added to the index.yaml file at charts.openshift.io . 30.5. Verification Steps After submitting the pull request, it will take a few minutes to run all the checks and merge the pull request automatically. Perform the following steps after submitting your pull request: Check for any messages in the new pull request. If you see an error message, see Troubleshooting Pull Request Failures . Update the pull request accordingly with necessary changes to rectify the issue. If you see a success message, it indicates that the chart repository index is updated successfully. You can verify it by checking the latest commit in the gh-pages branch. The commit message is in this format: You can see your chart related changes in the index.yaml file. If you have submitted a chart source, a GitHub release with the chart and corresponding report is available on the GitHub releases page. The release tag is in this format: <partner-label>-<chart-name>-<version-number> (e.g., acme-psql-service-0.1.1) . You can find the certified Helm charts on the Red Hat's official Helm chart repository . Follow the instructions listed here to install the certified Helm chart on your OpenShift cluster. | [
"chart: name: awesome shortDescription: A Helm chart for Awesomeness publicPgpKey: null providerDelivery: False users: - githubUsername: <username-one> - githubUsername: <username-two> vendor: label: acme name: ACME Inc.",
"charts/partners/acme/awesome/0.1.0/",
"charts/partners/acme/awesome/0.1.0/awesome-0.1.0.tgz charts/partners/acme/awesome/0.1.0/awesome-0.1.0.tgz.prov",
"awesome-0.1.0.tgz awesome-0.1.0.tgz.prov awesome-0.1.0.tgz.key report.yaml",
". └── src ├── Chart.yaml ├── README.md ├── templates │ ├── deployment.yaml │ ├── _helpers.tpl │ ├── hpa.yaml │ ├── ingress.yaml │ ├── NOTES.txt │ ├── serviceaccount.yaml │ ├── service.yaml │ └── tests │ └── test-connection.yaml ├── values.schema.json └── values.yaml",
"gpg --sign --armor --detach-sign --output report.yaml.asc report.yaml",
"awesome-0.1.0.tgz.key report.yaml",
"<partner-label>-<chart-name>-<version-number> index.yaml (#<PR-number>) (e.g, acme-psql-service-0.1.1 index.yaml (#7))."
] | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/submitting-your-helm-chart-for-certification_openshift-sw-cert-workflow-validating-helm-charts-for-certification |
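As a sketch of one way to produce the files described above before opening the pull request: the chart-verifier container image location and the volume mount shown here are assumptions to adapt to your environment, and in a real run you normally also mount a kubeconfig so that the chart-testing checks can run against a cluster.

helm package src/                                                                      # produces awesome-0.1.0.tgz from the chart source
podman run --rm -v "$(pwd)":/charts:z quay.io/redhat-certification/chart-verifier verify /charts/awesome-0.1.0.tgz > report.yaml   # generate report.yaml
gpg --sign --armor --detach-sign --output report.yaml.asc report.yaml                  # optionally sign the report, as described in the chapter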
Chapter 6. System Configuration | Chapter 6. System Configuration 6.1. ACPI 6.1.1. CPU hotplug ACPI CPU hotplug is available as a Technology Preview in Red Hat Enterprise Linux 6.5. This is a platform-specific feature; as such, details of its use are outside the scope of this document. Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Enterprise Linux Subscription Level Agreements, may not be functionally complete, and are not intended for production use. During the development of a Technology Preview feature, additional components may become available to the public for testing. Because Technology Preview features are still under development, Red Hat cannot guarantee the stability of such features. As a result, if you are using Technology Preview features, you may not be able to seamlessly upgrade to subsequent releases of that feature. While Red Hat intends to fully support Technology Preview features in future releases, we may discover that a feature does not meet the standards for enterprise usability. If this happens, we cannot guarantee that Technology Preview features will be released in a supported manner. Some Technology Preview features may only be available for specific hardware architectures. The CONFIG_ACPI_HOTPLUG_CPU configuration option must be enabled in order to use this feature. Additionally, if the platform implements the optional ACPI _OST method, the following configuration options must also be enabled. There are no harmful effects associated with enabling these configuration options for all platforms. CONFIG_ACPI_HOTPLUG_CPU CONFIG_ACPI_HOTPLUG_MEMORY or CONFIG_ACPI_HOTPLUG_MEMORY_MODULE CONFIG_ACPI_CONTAINER or CONFIG_ACPI_CONTAINER_MODULE | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/chap-migration_guide-system_configuration
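To check whether these options are enabled for the kernel you are running, you can usually inspect the kernel configuration file shipped under /boot, for example:

grep -E 'CONFIG_ACPI_HOTPLUG_CPU|CONFIG_ACPI_HOTPLUG_MEMORY|CONFIG_ACPI_CONTAINER' /boot/config-$(uname -r)   # lists the relevant CONFIG_ options and whether they are =y, =m, or unset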
Chapter 26. Tips for undercloud and overcloud services | Chapter 26. Tips for undercloud and overcloud services This section provides advice on tuning and managing specific OpenStack services on the undercloud. 26.1. Review the database flush intervals Some services use a cron container to flush old content from the database. OpenStack Identity (keystone): Flush expired tokens. OpenStack Orchestration (heat): Flush expired deleted template data. OpenStack Compute (nova): Flush expired deleted instance data. The default flush periods for each service are listed in this table: Service Database content flushed Default flush period OpenStack Identity (keystone) Expired tokens Every hour OpenStack Orchestration (heat) Deleted template data that has expired and is older than 30 days Every day OpenStack Compute (nova) Archive deleted instance data Every day OpenStack Compute (nova) Flush archived data older than 14 days Every day The following tables outline the parameters that you can use to control these cron jobs. Table 26.1. OpenStack Identity (keystone) cron parameters Parameter Description KeystoneCronTokenFlushMinute Cron to purge expired tokens - Minute. The default value is: 1 KeystoneCronTokenFlushHour Cron to purge expired tokens - Hour. The default value is: * KeystoneCronTokenFlushMonthday Cron to purge expired tokens - Month Day. The default value is: * KeystoneCronTokenFlushMonth Cron to purge expired tokens - Month. The default value is: * KeystoneCronTokenFlushWeekday Cron to purge expired tokens - Week Day. The default value is: * Table 26.2. OpenStack Orchestration (heat) cron parameters Parameter Description HeatCronPurgeDeletedAge Cron to purge database entries marked as deleted and older than USDage - Age. The default value is: 30 HeatCronPurgeDeletedAgeType Cron to purge database entries marked as deleted and older than USDage - Age type. The default value is: days HeatCronPurgeDeletedMinute Cron to purge database entries marked as deleted and older than USDage - Minute. The default value is: 1 HeatCronPurgeDeletedHour Cron to purge database entries marked as deleted and older than USDage - Hour. The default value is: 0 HeatCronPurgeDeletedMonthday Cron to purge database entries marked as deleted and older than USDage - Month Day. The default value is: * HeatCronPurgeDeletedMonth Cron to purge database entries marked as deleted and older than USDage - Month. The default value is: * HeatCronPurgeDeletedWeekday Cron to purge database entries marked as deleted and older than USDage - Week Day. The default value is: * Table 26.3. OpenStack Compute (nova) cron parameters Parameter Description NovaCronArchiveDeleteRowsMaxRows Cron to move deleted instances to another table - Max Rows. The default value is: 100 NovaCronArchiveDeleteRowsPurge Purge shadow tables immediately after scheduled archiving. The default value is: False NovaCronArchiveDeleteRowsMinute Cron to move deleted instances to another table - Minute. The default value is: 1 NovaCronArchiveDeleteRowsHour Cron to move deleted instances to another table - Hour. The default value is: 0 NovaCronArchiveDeleteRowsMonthday Cron to move deleted instances to another table - Month Day. The default value is: * NovaCronArchiveDeleteRowsMonth Cron to move deleted instances to another table - Month. The default value is: * NovaCronArchiveDeleteRowsWeekday Cron to move deleted instances to another table - Week Day. 
The default value is: * NovaCronArchiveDeleteRowsUntilComplete Cron to move deleted instances to another table - Until complete. The default value is: True NovaCronPurgeShadowTablesAge Cron to purge shadow tables - Age This will define the retention policy when purging the shadow tables in days. 0 means, purge data older than today in shadow tables. The default value is: 14 NovaCronPurgeShadowTablesMinute Cron to purge shadow tables - Minute. The default value is: 0 NovaCronPurgeShadowTablesHour Cron to purge shadow tables - Hour. The default value is: 5 NovaCronPurgeShadowTablesMonthday Cron to purge shadow tables - Month Day. The default value is: * NovaCronPurgeShadowTablesMonth Cron to purge shadow tables - Month. The default value is: * NovaCronPurgeShadowTablesWeekday Cron to purge shadow tables - Week Day. The default value is: *` To adjust these intervals, create an environment file that contains your token flush interval for the respective services and add this file to the custom_env_files parameter in your undercloud.conf file. For example, to change the OpenStack Identity (keystone) token flush to 30 minutes, use the following snippets keystone-cron.yaml undercloud.yaml Then rerun the openstack undercloud install command. Note You can also use these parameters for your overcloud. For more information, see the Overcloud Parameters guide. 26.2. Tuning deployment performance OpenStack Platform director uses OpenStack Orchestration (heat) to conduct the main deployment and provisioning functions. Heat uses a series of workers to execute deployment tasks. To calculate the default number of workers, the director heat configuration halves the total CPU thread count of the undercloud. [2] . For example, if your undercloud has a CPU with 16 threads, heat spawns 8 workers by default. The director configuration also uses a minimum and maximum cap by default: Service Minimum Maximum OpenStack Orchestration (heat) 4 24 However, you can set the number of workers manually with the HeatWorkers parameter in an environment file: heat-workers.yaml undercloud.yaml 26.3. Running swift-ring-builder in a container To manage your Object Storage (swift) rings, use the swift-ring-builder commands inside the server containers: swift_object_server swift_container_server swift_account_server For example, to view information about your swift object rings, run the following command: You can run this command on both the undercloud and overcloud nodes. 26.4. Changing the SSL/TLS cipher rules for HAProxy If you enabled SSL/TLS in the undercloud (see Section 4.2, "Director configuration parameters" ), you might want to harden the SSL/TLS ciphers and rules that are used with the HAProxy configuration. This hardening helps to avoid SSL/TLS vulnerabilities, such as the POODLE vulnerability . Set the following hieradata using the hieradata_override undercloud configuration option: tripleo::haproxy::ssl_cipher_suite The cipher suite to use in HAProxy. tripleo::haproxy::ssl_options The SSL/TLS rules to use in HAProxy. 
For example, you might want to use the following cipher and rules: Cipher: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS Rules: no-sslv3 no-tls-tickets Create a hieradata override file ( haproxy-hiera-overrides.yaml ) with the following content: Note The cipher collection is one continuous line. Set the hieradata_override parameter in the undercloud.conf file to use the hieradata override file you created before you ran openstack undercloud install : [2] In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value | [
"parameter_defaults: KeystoneCronTokenFlushMinute: '0/30'",
"custom_env_files: keystone-cron.yaml",
"openstack undercloud install",
"parameter_defaults: HeatWorkers: 16",
"custom_env_files: heat-workers.yaml",
"sudo podman exec -ti -u swift swift_object_server swift-ring-builder /etc/swift/object.builder",
"tripleo::haproxy::ssl_cipher_suite: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS tripleo::haproxy::ssl_options: no-sslv3 no-tls-tickets",
"[DEFAULT] hieradata_override = haproxy-hiera-overrides.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/tips-for-undercloud-and-overcloud-services |
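As a sketch of how the snippets from this chapter can be combined in undercloud.conf before rerunning openstack undercloud install (the file names are the illustrative ones used above, and custom_env_files is assumed to accept a comma-separated list):

[DEFAULT]
custom_env_files = keystone-cron.yaml,heat-workers.yaml
hieradata_override = haproxy-hiera-overrides.yaml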
4.3.12. Renaming a Volume Group | 4.3.12. Renaming a Volume Group Use the vgrename command to rename an existing volume group. Either of the following commands renames the existing volume group vg02 to my_volume_group : | [
"vgrename /dev/vg02 /dev/my_volume_group",
"vgrename vg02 my_volume_group"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/VG_rename |
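For example, after renaming you can confirm the result with the standard vgs reporting command (not shown in the section above):

vgrename vg02 my_volume_group   # rename the volume group
vgs my_volume_group             # verify that the volume group is now listed under its new name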
3.6. Adding Journals to a GFS2 File System | 3.6. Adding Journals to a GFS2 File System The gfs2_jadd command is used to add journals to a GFS2 file system. You can add journals to a GFS2 file system dynamically at any point without expanding the underlying logical volume. The gfs2_jadd command must be run on a mounted file system, but it needs to be run on only one node in the cluster. All the other nodes sense that the expansion has occurred. Note If a GFS2 file system is full, the gfs2_jadd command will fail, even if the logical volume containing the file system has been extended and is larger than the file system. This is because in a GFS2 file system, journals are plain files rather than embedded metadata, so simply extending the underlying logical volume will not provide space for the journals. Before adding journals to a GFS2 file system, you can find out how many journals the GFS2 file system currently contains with the gfs2_edit -p jindex command, as in the following example: Usage Number Specifies the number of new journals to be added. MountPoint Specifies the directory where the GFS2 file system is mounted. Examples In this example, one journal is added to the file system on the /mygfs2 directory. In this example, two journals are added to the file system on the /mygfs2 directory. Complete Usage MountPoint Specifies the directory where the GFS2 file system is mounted. Device Specifies the device node of the file system. Table 3.4, "GFS2-specific Options Available When Adding Journals" describes the GFS2-specific options that can be used when adding journals to a GFS2 file system. Table 3.4. GFS2-specific Options Available When Adding Journals Flag Parameter Description -h Help. Displays short usage message. -J Megabytes Specifies the size of the new journals in megabytes. Default journal size is 128 megabytes. The minimum size is 32 megabytes. To add journals of different sizes to the file system, the gfs2_jadd command must be run for each size journal. The size specified is rounded down so that it is a multiple of the journal-segment size that was specified when the file system was created. -j Number Specifies the number of new journals to be added by the gfs2_jadd command. The default value is 1. -q Quiet. Turns down the verbosity level. -V Displays command version information. | [
"gfs2_edit -p jindex /dev/sasdrives/scratch|grep journal 3/3 [fc7745eb] 4/25 (0x4/0x19): File journal0 4/4 [8b70757d] 5/32859 (0x5/0x805b): File journal1 5/5 [127924c7] 6/65701 (0x6/0x100a5): File journal2",
"gfs2_jadd -j Number MountPoint",
"gfs2_jadd -j 1 /mygfs2",
"gfs2_jadd -j 2 /mygfs2",
"gfs2_jadd [ Options ] { MountPoint | Device } [ MountPoint | Device ]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-manage-addjournalfs |
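For example, to check the current number of journals and then add two 64-megabyte journals to the file system mounted on /mygfs2 (device name taken from the example above):

gfs2_edit -p jindex /dev/sasdrives/scratch | grep journal   # count the existing journals
gfs2_jadd -j 2 -J 64 /mygfs2                                # add two new 64 MB journals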
Release notes for Eclipse Temurin 17.0.12 | Release notes for Eclipse Temurin 17.0.12 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.12/index |
Appendix B. Health messages of a Ceph cluster | Appendix B. Health messages of a Ceph cluster There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Table B.1. Monitor Health Code Description DAEMON_OLD_VERSION Warn if old version of Ceph are running on any daemons. It will generate a health error if multiple versions are detected. MON_DOWN One or more Ceph Monitor daemons are currently down. MON_CLOCK_SKEW The clocks on the nodes running the ceph-mon daemons are not sufficiently well synchronized. Resolve it by synchronizing the clocks using ntpd or chrony . MON_MSGR2_NOT_ENABLED The ms_bind_msgr2 option is enabled but one or more Ceph Monitors is not configured to bind to a v2 port in the cluster's monmap. Resolve this by running ceph mon enable-msgr2 command. MON_DISK_LOW One or more Ceph Monitors are low on disk space. MON_DISK_CRIT One or more Ceph Monitors are critically low on disk space. MON_DISK_BIG The database size for one or more Ceph Monitors are very large. AUTH_INSECURE_GLOBAL_ID_RECLAIM One or more clients or daemons are connected to the storage cluster that are not securely reclaiming their global_id when reconnecting to a Ceph Monitor. AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED Ceph is currently configured to allow clients to reconnect to monitors using an insecure process to reclaim their global_id because the setting auth_allow_insecure_global_id_reclaim is set to true . Table B.2. Manager Health Code Description MGR_DOWN All Ceph Manager daemons are currently down. MGR_MODULE_DEPENDENCY An enabled Ceph Manager module is failing its dependency check. MGR_MODULE_ERROR A Ceph Manager module has experienced an unexpected error. Typically, this means an unhandled exception was raised from the module's serve function. Table B.3. OSDs Health Code Description OSD_DOWN One or more OSDs are marked down. OSD_CRUSH_TYPE_DOWN All the OSDs within a particular CRUSH subtree are marked down, for example all OSDs on a host. For example, OSD_HOST_DOWN and OSD_ROOT_DOWN OSD_ORPHAN An OSD is referenced in the CRUSH map hierarchy but does not exist. Remove the OSD by running ceph osd crush rm osd._OSD_ID command. OSD_OUT_OF_ORDER_FULL The utilization thresholds for nearfull , backfillfull , full , or, failsafefull are not ascending. Adjust the thresholds by running ceph osd set-nearfull-ratio RATIO , ceph osd set-backfillfull-ratio RATIO , and ceph osd set-full-ratio RATIO OSD_FULL One or more OSDs has exceeded the full threshold and is preventing the storage cluster from servicing writes. Restore write availability by raising the full threshold by a small margin ceph osd set-full-ratio RATIO . OSD_BACKFILLFULL One or more OSDs has exceeded the backfillfull threshold, which will prevent data from being allowed to rebalance to this device. OSD_NEARFULL One or more OSDs has exceeded the nearfull threshold. OSDMAP_FLAGS One or more storage cluster flags of interest has been set. These flags include full , pauserd , pausewr , noup , nodown , noin , noout , nobackfill , norecover , norebalance , noscrub , nodeep_scrub , and notieragent . Except for full , the flags can be cleared with ceph osd set FLAG and ceph osd unset FLAG commands. OSD_FLAGS One or more OSDs or CRUSH has a flag of interest set. 
These flags include noup , nodown , noin , and noout . OLD_CRUSH_TUNABLES The CRUSH map is using very old settings and should be updated. OLD_CRUSH_STRAW_CALC_VERSION The CRUSH map is using an older, non-optimal method for calculating intermediate weight values for straw buckets. CACHE_POOL_NO_HIT_SET One or more cache pools is not configured with a hit set to track utilization, which will prevent the tiering agent from identifying cold objects to flush and evict from the cache. Configure the hit sets on the cache pool with ceph osd pool set POOL_NAME hit_set_type TYPE , ceph osd pool set POOL_NAME hit_set_period PERIOD_IN_SECONDS , ceph osd pool set POOL_NAME hit_set_count NUMBER_OF_HIT_SETS , and ceph osd pool set POOL_NAME hit_set_fpp TARGET_FALSE_POSITIVE_RATE commands. OSD_NO_SORTBITWISE sortbitwise flag is not set. Set the flag with ceph osd set sortbitwise command. POOL_FULL One or more pools has reached its quota and is no longer allowing writes. Increase the pool quota with ceph osd pool set-quota POOL_NAME max_objects NUMBER_OF_OBJECTS and ceph osd pool set-quota POOL_NAME max_bytes BYTES or delete some existing data to reduce utilization. BLUEFS_SPILLOVER One or more OSDs that use the BlueStore backend is allocated db partitions but that space has filled, such that metadata has "spilled over" onto the normal slow device. Disable this with ceph config set osd bluestore_warn_on_bluefs_spillover false command. BLUEFS_AVAILABLE_SPACE This output gives three values which are BDEV_DB free , BDEV_SLOW free and available_from_bluestore . BLUEFS_LOW_SPACE If the BlueStore File System (BlueFS) is running low on available free space and there is little available_from_bluestore one can consider reducing BlueFS allocation unit size. BLUESTORE_FRAGMENTATION As BlueStore works, free space on underlying storage will get fragmented. This is normal and unavoidable but excessive fragmentation will cause slowdown. BLUESTORE_LEGACY_STATFS BlueStore tracks its internal usage statistics on a per-pool granular basis, and one or more OSDs have BlueStore volumes. Disable the warning with ceph config set global bluestore_warn_on_legacy_statfs false command. BLUESTORE_NO_PER_POOL_OMAP BlueStore tracks omap space utilization by pool. Disable the warning with ceph config set global bluestore_warn_on_no_per_pool_omap false command. BLUESTORE_NO_PER_PG_OMAP BlueStore tracks omap space utilization by PG. Disable the warning with ceph config set global bluestore_warn_on_no_per_pg_omap false command. BLUESTORE_DISK_SIZE_MISMATCH One or more OSDs using BlueStore has an internal inconsistency between the size of the physical device and the metadata tracking its size. BLUESTORE_NO_COMPRESSION One or more OSDs is unable to load a BlueStore compression plugin. This can be caused by a broken installation, in which the ceph-osd binary does not match the compression plugins, or a recent upgrade that did not include a restart of the ceph-osd daemon. BLUESTORE_SPURIOUS_READ_ERRORS One or more OSDs using BlueStore detects spurious read errors at main device. BlueStore has recovered from these errors by retrying disk reads. Table B.4. Device health Health Code Description DEVICE_HEALTH One or more devices is expected to fail soon, where the warning threshold is controlled by the mgr/devicehealth/warn_threshold config option. Mark the device out to migrate the data and replace the hardware.
DEVICE_HEALTH_IN_USE One or more devices is expected to fail soon and has been marked "out" of the storage cluster based on mgr/devicehealth/mark_out_threshold , but it is still participating in one more PGs. DEVICE_HEALTH_TOOMANY Too many devices are expected to fail soon and the mgr/devicehealth/self_heal behavior is enabled, such that marking out all of the ailing devices would exceed the clusters mon_osd_min_in_ratio ratio that prevents too many OSDs from being automatically marked out . Table B.5. Pools and placement groups Health Code Description PG_AVAILABILITY Data availability is reduced, meaning that the storage cluster is unable to service potential read or write requests for some data in the cluster. PG_DEGRADED Data redundancy is reduced for some data, meaning the storage cluster does not have the desired number of replicas for for replicated pools or erasure code fragments. PG_RECOVERY_FULL Data redundancy might be reduced or at risk for some data due to a lack of free space in the storage cluster, specifically, one or more PGs has the recovery_toofull flag set, which means that the cluster is unable to migrate or recover data because one or more OSDs is above the full threshold. PG_BACKFILL_FULL Data redundancy might be reduced or at risk for some data due to a lack of free space in the storage cluster, specifically, one or more PGs has the backfill_toofull flag set, which means that the cluster is unable to migrate or recover data because one or more OSDs is above the backfillfull threshold. PG_DAMAGED Data scrubbing has discovered some problems with data consistency in the storage cluster, specifically, one or more PGs has the inconsistent or snaptrim_error flag is set, indicating an earlier scrub operation found a problem, or that the repair flag is set, meaning a repair for such an inconsistency is currently in progress. OSD_SCRUB_ERRORS Recent OSD scrubs have uncovered inconsistencies. OSD_TOO_MANY_REPAIRS When a read error occurs and another replica is available it is used to repair the error immediately, so that the client can get the object data. LARGE_OMAP_OBJECTS One or more pools contain large omap objects as determined by osd_deep_scrub_large_omap_object_key_threshold or osd_deep_scrub_large_omap_object_value_sum_threshold or both. Adjust the thresholds with ceph config set osd osd_deep_scrub_large_omap_object_key_threshold KEYS and ceph config set osd osd_deep_scrub_large_omap_object_value_sum_threshold BYTES commands. CACHE_POOL_NEAR_FULL A cache tier pool is nearly full. Adjust the cache pool target size with ceph osd pool set CACHE_POOL_NAME target_max_bytes BYTES and ceph osd pool set CACHE_POOL_NAME target_max_bytes BYTES commands. TOO_FEW_PGS The number of PGs in use in the storage cluster is below the configurable threshold of mon_pg_warn_min_per_osd PGs per OSD. POOL_PG_NUM_NOT_POWER_OF_TWO One or more pools has a pg_num value that is not a power of two. Disable the warning with ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false command. POOL_TOO_FEW_PGS One or more pools should probably have more PGs, based on the amount of data that is currently stored in the pool. You can either disable auto-scaling of PGs with ceph osd pool set POOL_NAME pg_autoscale_mode off command, automatically adjust the number of PGs with ceph osd pool set POOL_NAME pg_autoscale_mode on command or manually set the number of PGs with ceph osd pool set POOL_NAME pg_num _NEW_PG_NUMBER command. 
TOO_MANY_PGS The number of PGs in use in the storage cluster is above the configurable threshold of mon_max_pg_per_osd PGs per OSD. Increase the number of OSDs in the cluster by adding more hardware. POOL_TOO_MANY_PGS One or more pools should probably have more PGs, based on the amount of data that is currently stored in the pool. You can either disable auto-scaling of PGs with ceph osd pool set POOL_NAME pg_autoscale_mode off command, automatically adjust the number of PGs with ceph osd pool set POOL_NAME pg_autoscale_mode on command or manually set the number of PGs with ceph osd pool set POOL_NAME pg_num _NEW_PG_NUMBER command. POOL_TARGET_SIZE_BYTES_OVERCOMMITTED One or more pools have a target_size_bytes property set to estimate the expected size of the pool, but the values exceed the total available storage. Set the value for the pool to zero with ceph osd pool set POOL_NAME target_size_bytes 0 command. POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO One or more pools have both target_size_bytes and target_size_ratio set to estimate the expected size of the pool. Set the value for the pool to zero with ceph osd pool set POOL_NAME target_size_bytes 0 command. TOO_FEW_OSDS The number of OSDs in the storage cluster is below the configurable threshold of o`sd_pool_default_size . SMALLER_PGP_NUM One or more pools has a pgp_num value less than pg_num . This is normally an indication that the PG count was increased without also increasing the placement behavior. Resolve this by setting pgp_num to match with pg_num with ceph osd pool set POOL_NAME pgp_num PG_NUM_VALUE command. MANY_OBJECTS_PER_PG One or more pools has an average number of objects per PG that is significantly higher than the overall storage cluster average. The specific threshold is controlled by the mon_pg_warn_max_object_skew configuration value. POOL_APP_NOT_ENABLED A pool exists that contains one or more objects but has not been tagged for use by a particular application. Resolve this warning by labeling the pool for use by an application with rbd pool init POOL_NAME command. POOL_FULL One or more pools has reached its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. POOL_NEAR_FULL One or more pools is approaching a configured fullness threshold. Adjust the pool quotas with ceph osd pool set-quota POOL_NAME max_objects NUMBER_OF_OBJECTS and ceph osd pool set-quota POOL_NAME max_bytes BYTES commands. OBJECT_MISPLACED One or more objects in the storage cluster is not stored on the node the storage cluster would like it to be stored on. This is an indication that data migration due to some recent storage cluster change has not yet completed. OBJECT_UNFOUND One or more objects in the storage cluster cannot be found, specifically, the OSDs know that a new or updated copy of an object should exist, but a copy of that version of the object has not been found on OSDs that are currently online. SLOW_OPS One or more OSD or monitor requests is taking a long time to process. This can be an indication of extreme load, a slow storage device, or a software bug. PG_NOT_SCRUBBED One or more PGs has not been scrubbed recently. PGs are normally scrubbed within every configured interval specified by osd_scrub_max_interval globally. Initiate the scrub with ceph pg scrub PG_ID command. PG_NOT_DEEP_SCRUBBED One or more PGs has not been deep scrubbed recently. Initiate the scrub with ceph pg deep-scrub PG_ID command. 
PGs are normally scrubbed every osd_deep_scrub_interval seconds, and this warning triggers when mon_warn_pg_not_deep_scrubbed_ratio percentage of interval has elapsed without a scrub since it was due. PG_SLOW_SNAP_TRIMMING The snapshot trim queue for one or more PGs has exceeded the configured warning threshold. This indicates that either an extremely large number of snapshots were recently deleted, or that the OSDs are unable to trim snapshots quickly enough to keep up with the rate of new snapshot deletions. Table B.6. Miscellaneous Health Code Description RECENT_CRASH One or more Ceph daemons has crashed recently, and the crash has not yet been acknowledged by the administrator. TELEMETRY_CHANGED Telemetry has been enabled, but the contents of the telemetry report have changed since that time, so telemetry reports will not be sent. AUTH_BAD_CAPS One or more auth users has capabilities that cannot be parsed by the monitor. Update the capabilities of the user with ceph auth ENTITY_NAME DAEMON_TYPE CAPS command. OSD_NO_DOWN_OUT_INTERVAL The mon_osd_down_out_interval option is set to zero, which means that the system will not automatically perform any repair or healing operations after an OSD fails. Silence the interval with ceph config global mon mon_warn_on_osd_down_out_interval_zero false command. DASHBOARD_DEBUG The Dashboard debug mode is enabled. This means, if there is an error while processing a REST API request, the HTTP error response contains a Python traceback. Disable the debug mode with ceph dashboard debug disable command. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/troubleshooting_guide/health-messages-of-a-ceph-cluster_diag |
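To see which of these health checks are currently raised on a cluster, and to temporarily silence one that is expected, you can use commands such as the following; the mute subcommand takes the health check identifier listed in the tables above and an optional time-to-live:

ceph health detail                 # list active health checks with their identifiers and details
ceph health mute OSD_NEARFULL 1h   # example: silence an expected warning for one hour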
Chapter 42. Managing Hosts in IdM CLI | Chapter 42. Managing Hosts in IdM CLI This chapter introduces hosts and host entries in Identity Management (IdM), and the following operations performed when managing hosts and host entries in IdM CLI: Host Enrollment Adding IdM host entries Deleting IdM host entries Re-enrolling hosts Renaming hosts Disabling hosts Re-enabling hosts Delegating access to hosts and services The chapter also contains an overview table of the prerequisites, the context, and the consequences of these operations. 42.1. Hosts in IdM Identity Management (IdM) manages these identities: Users Services Hosts A host represents a machine. As an IdM identity, a host has an entry in the IdM LDAP, that is the 389 Directory Server instance of the IdM server. The host entry in IdM LDAP is used to establish relationships between other hosts and even services within the domain. These relationships are part of delegating authorization and control to hosts within the domain. Any host can be used in host-based access control (HBAC) rules. IdM domain establishes a commonality between machines, with common identity information, common policies, and shared services. Any machine that belongs to a domain functions as a client of the domain, which means it uses the services that the domain provides. IdM domain provides three main services specifically for machines: DNS Kerberos Certificate management Hosts in IdM are closely connected with the services running on them: Service entries are associated with a host. A host stores both the host and the service Kerberos principals. 42.2. Host enrollment This section describes enrolling hosts as IdM clients and what happens during and after the enrollment. The section compares the enrollment of IdM hosts and IdM users. The section also outlines alternative types of authentication available to hosts. Enrolling a host consists of: Creating a host entry in IdM LDAP: possibly using the ipa host-add command in IdM CLI, or the equivalent IdM Web UI operation . Configuring IdM services on the host, for example the System Security Services Daemon (SSSD), Kerberos, and certmonger, and joining the host to the IdM domain. The two actions can be performed separately or together. If performed separately, they allow for dividing the two tasks between two users with different levels of privilege. This is useful for bulk deployments. The ipa-client-install command can perform the two actions together. The command creates a host entry in IdM LDAP if that entry does not exist yet, and configures both the Kerberos and SSSD services for the host. The command brings the host within the IdM domain and allows it to identify the IdM server it will connect to. If the host belongs to a DNS zone managed by IdM, ipa-client-install adds DNS records for the host too. The command must be run on the client. 42.3. User privileges required for host enrollment The host enrollment operation requires authentication to prevent an unprivileged user from adding unwanted machines to the IdM domain. The privileges required depend on several factors, for example: If a host entry is created separately from running ipa-client-install If a one-time password (OTP) is used for enrollment User privileges for optionally manually creating a host entry in IdM LDAP The user privilege required for creating a host entry in IdM LDAP using the ipa host-add CLI command or the IdM Web UI is Host Administrators . The Host Administrators privilege can be obtained through the IT Specialist role. 
User privileges for joining the client to the IdM domain Hosts are configured as IdM clients during the execution of the ipa-client-install command. The level of credentials required for executing the ipa-client-install command depends on which of the following enrolling scenarios you find yourself in: The host entry in IdM LDAP does not exist. For this scenario, you need a full administrator's credentials or the Host Administrators role. A full administrator is a member of the admins group. The Host Administrators role provides privileges to add hosts and enroll hosts. For details about this scenario, see Installing a client using user credentials: interactive installation . The host entry in IdM LDAP exists. For this scenario, you need a limited administrator's credentials to execute ipa-client-install successfully. The limited administrator in this case has the Enrollment Administrator role, which provides the Host Enrollment privilege. For details, see Installing a client using user credentials: interactive installation . The host entry in IdM LDAP exists, and an OTP has been generated for the host by a full or limited administrator. For this scenario, you can install an IdM client as an ordinary user if you run the ipa-client-install command with the --password option, supplying the correct OTP. For details, see Installing a client by using a one-time password: Interactive installation . After enrollment, IdM hosts authenticate every new session to be able to access IdM resources. Machine authentication is required for the IdM server to trust the machine and to accept IdM connections from the client software installed on that machine. After authenticating the client, the IdM server can respond to its requests. 42.4. Enrollment and authentication of IdM hosts and users: comparison There are many similarities between users and hosts in IdM, some of which can be observed during the enrollment stage as well as those that concern authentication during the deployment stage. The enrollment stage ( User and host enrollment ): An administrator can create an LDAP entry for both a user and a host before the user or host actually join IdM: for the stage user, the command is ipa stageuser-add ; for the host, the command is ipa host-add . A file containing a key table (abbreviated keytab), a symmetric key that to some extent resembles a user password, is created during the execution of the ipa-client-install command on the host, resulting in the host joining the IdM realm. Analogously, a user is asked to create a password when they activate their account, therefore joining the IdM realm. While the user password is the default authentication method for a user, the keytab is the default authentication method for a host. The keytab is stored in a file on the host. Table 42.1. User and host enrollment Action User Host Pre-enrollment $ ipa stageuser-add user_name [--password] $ ipa host-add host_name [--random] Activating the account $ ipa stageuser-activate user_name $ ipa-client-install [--password] (must be run on the host itself) The deployment stage ( User and host session authentication ): When a user starts a new session, the user authenticates using a password; similarly, every time it is switched on, the host authenticates by presenting its keytab file. The System Security Services Daemon (SSSD) manages this process in the background. If the authentication is successful, the user or host obtains a Kerberos ticket granting ticket (TGT).
The TGT is then used to obtain specific tickets for specific services. Table 42.2. User and host session authentication User Host Default means of authentication Password Keytabs Starting a session (ordinary user) $ kinit user_name [switch on the host] The result of successful authentication TGT to be used to obtain access to specific services TGT to be used to obtain access to specific services TGTs and other Kerberos tickets are generated as part of the Kerberos services and policies defined by the server. The initial granting of a Kerberos ticket, the renewing of the Kerberos credentials, and even the destroying of the Kerberos session are all handled automatically by the IdM services. Alternative authentication options for IdM hosts Apart from keytabs, IdM supports two other types of machine authentication: SSH keys. The SSH public key for the host is created and uploaded to the host entry. From there, the System Security Services Daemon (SSSD) uses IdM as an identity provider and can work in conjunction with OpenSSH and other services to reference the public keys located centrally in IdM. Machine certificates. In this case, the machine uses an SSL certificate that is issued by the IdM server's certificate authority and then stored in IdM's Directory Server. The certificate is then sent to the machine to present when it authenticates to the server. On the client, certificates are managed by a service called certmonger . 42.5. Host Operations The most common operations related to host enrollment and enablement, and the prerequisites, the context, and the consequences of performing those operations are outlined in the following sections. Table 42.3. Host operations part 1 Action What are the prerequisites of the action? When does it make sense to run the command? How is the action performed by a system administrator? What command(s) do they run? Enrolling a client see Preparing the system for Identity Management client installation in Installing Identity Management When you want the host to join the IdM realm. Enrolling machines as clients in the IdM domain is a two-part process. A host entry is created for the client (and stored in the 389 Directory Server instance) when the ipa host-add command is run, and then a keytab is created to provision the client. Both parts are performed automatically by the ipa-client-install command. It is also possible to perform those steps separately; this allows administrators to prepare machines and IdM in advance of actually configuring the clients. This allows more flexible setup scenarios, including bulk deployments. Disabling a client The host must have an entry in IdM. The host needs to have an active keytab. When you want to remove the host from the IdM realm temporarily, perhaps for maintenance purposes. ipa host-disable host_name Enabling a client The host must have an entry in IdM. When you want the temporarily disabled host to become active again. ipa-getkeytab Re-enrolling a client The host must have an entry in IdM. When the original host has been lost but you have installed a host with the same host name. ipa-client-install --keytab or ipa-client-install --force-join Un-enrolling a client The host must have an entry in IdM. When you want to remove the host from the IdM realm permanently. ipa-client-install --uninstall Table 42.4. Host operations part 2 Action On which machine can the administrator run the command(s)? What happens when the action is performed? What are the consequences for the host's functioning in IdM?
What limitations are introduced/removed? Enrolling a client In the case of a two-step enrollment: ipa host-add can be run on any IdM client; the second step of ipa-client-install must be run on the client itself By default, this configures SSSD to connect to an IdM server for authentication and authorization. Optionally, one can instead configure the Pluggable Authentication Module (PAM) and the Name Service Switch (NSS) to work with an IdM server over Kerberos and LDAP. Disabling a client Any machine in IdM, even the host itself The host's Kerberos key and SSL certificate are invalidated, and all services running on the host are disabled. Enabling a client Any machine in IdM. If run on the disabled host, LDAP credentials need to be supplied. The host's Kerberos key and the SSL certificate are made valid again, and all IdM services running on the host are re-enabled. Re-enrolling a client The host to be re-enrolled. LDAP credentials need to be supplied. A new Kerberos key is generated for the host, replacing the previous one. Un-enrolling a client The host to be un-enrolled. The command unconfigures IdM and attempts to return the machine to its previous state. Part of this process is to unenroll the host from the IdM server. Unenrollment consists of disabling the principal key on the IdM server. The machine principal in /etc/krb5.keytab ( host/<fqdn>@REALM ) is used to authenticate to the IdM server to unenroll itself. If this principal does not exist then unenrollment will fail and an administrator will need to disable the host principal ( ipa host-disable <fqdn> ). 42.6. Host entry in IdM LDAP This section describes the information that an Identity Management (IdM) host entry contains and the attributes that it can include. An LDAP host entry contains all relevant information about the client within IdM: Service entries associated with the host The host and service principal Access control rules Machine information, such as its physical location and operating system Note Note that the IdM Web UI Identity Hosts tab does not show all the information about a particular host stored in the IdM LDAP. Host entry configuration properties A host entry can contain information about the host that is outside its system configuration, such as its physical location, MAC address, keys, and certificates. This information can be set when the host entry is created if it is created manually. Alternatively, most of this information can be added to the host entry after the host is enrolled in the domain. Table 42.5. Host Configuration Properties UI Field Command-Line Option Description Description --desc = description A description of the host. Locality --locality = locality The geographic location of the host. Location --location = location The physical location of the host, such as its data center rack. Platform --platform = string The host hardware or architecture. Operating system --os = string The operating system and version for the host. MAC address --macaddress = address The MAC address for the host. This is a multi-valued attribute. The MAC address is used by the NIS plug-in to create a NIS ethers map for the host. SSH public keys --sshpubkey = string The full SSH public key for the host. This is a multi-valued attribute, so multiple keys can be set. Principal name (not editable) --principalname = principal The Kerberos principal name for the host. This defaults to the host name during the client installation, unless a different principal is explicitly set with the -p option.
This can be changed using the command-line tools, but cannot be changed in the UI. Set One-Time Password --password = string This option sets a password for the host which can be used in bulk enrollment. - --random This option generates a random password to be used in bulk enrollment. - --certificate = string A certificate blob for the host. - --updatedns This sets whether the host can dynamically update its DNS entries if its IP address changes. 42.7. Adding IdM host entries from IdM CLI Follow this procedure to add host entries in Identity Management (IdM) using the command line (CLI). Host entries are created using the host-add command. This command adds the host entry to the IdM Directory Server. Consult the ipa host man page by typing ipa help host in your CLI to get the full list of options available with host-add . There are a few different scenarios when adding a host to IdM: At its most basic, specify only the client host name to add the client to the Kerberos realm and to create an entry in the IdM LDAP server: If the IdM server is configured to manage DNS, add the host to the DNS resource records using the --ip-address option. Example 42.1. Creating Host Entries with Static IP Addresses If the host to be added does not have a static IP address or if the IP address is not known at the time the client is configured, use the --force option with the ipa host-add command. Example 42.2. Creating Host Entries with DHCP For example, laptops may be preconfigured as IdM clients, but they do not have IP addresses at the time they are configured. Using --force essentially creates a placeholder entry in the IdM DNS service. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. 42.8. Deleting host entries from IdM CLI Use the host-del command to delete host records. If your IdM domain has integrated DNS, use the --updatedns option to remove the associated records of any kind for the host from the DNS: 42.9. Re-enrolling an Identity Management client This section describes the different ways you can re-enroll an Identity Management client. 42.9.1. Client re-enrollment in IdM During the re-enrollment, the client generates a new Kerberos key and SSH keys, but the identity of the client in the LDAP database remains unchanged. After the re-enrollment, the host has its keys and other information in the same LDAP object with the same FQDN as previously, before the machine's loss of connection with the IdM servers. Important You can only re-enroll clients whose domain entry is still active. If you uninstalled a client (using ipa-client-install --uninstall ) or disabled its host entry (using ipa host-disable ), you cannot re-enroll it. You cannot re-enroll a client after you have renamed it. This is because in Identity Management, the key attribute of the client's entry in LDAP is the client's hostname, its FQDN . As opposed to re-enrolling a client, during which the client's LDAP object remains unchanged, the outcome of renaming a client is that the client has its keys and other information in a different LDAP object with a new FQDN . Therefore, the only way to rename a client is to uninstall the host from IdM, change the host's hostname, and install it as an IdM client with a new name. For details on how to rename a client, see Renaming Identity Management client systems .
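The sequence below is a minimal sketch of that rename flow; the host names are hypothetical, and the full procedure, including its prerequisites and cleanup steps, is described in the sections that follow.

# On the client: leave the IdM domain, then rename the machine
ipa-client-install --uninstall
hostnamectl set-hostname new-client-name.example.com
# On an IdM server: remove the old host entry (this also removes its services and revokes its certificates)
ipa host-del old-client-name.example.com
# On the client: enroll the renamed machine as a new IdM client
ipa-client-install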
What happens during client re-enrollment During re-enrollment, Identity Management: Revokes the original host certificate Creates new SSH keys Generates a new keytab 42.9.2. Re-enrolling a client by using user credentials: Interactive re-enrollment Follow this procedure to re-enroll an Identity Management client interactively by using the credentials of an authorized user. Re-create the client machine with the same host name. Run the ipa-client-install --force-join command on the client machine: The script prompts for a user whose identity will be used to re-enroll the client. This could be, for example, a hostadmin user with the Enrollment Administrator role: Additional resources See Installing a client by using user credentials: Interactive installation in Installing Identity Management . 42.9.3. Re-enrolling a client by using the client keytab: Non-interactive re-enrollment You can re-enroll an Identity Management (IdM) client non-interactively by using the krb5.keytab keytab file of the client system from the previous deployment. For example, re-enrollment using the client keytab is appropriate for an automated installation. Prerequisites You have backed up the keytab of the client from the previous deployment on another system. Procedure Re-create the client machine with the same host name. Copy the keytab file from the backup location to the re-created client machine, for example its /tmp/ directory. Important Do not put the keytab in the /etc/krb5.keytab file as old keys are removed from this location during the execution of the ipa-client-install installation script. Use the ipa-client-install utility to re-enroll the client. Specify the keytab location with the --keytab option: Note The keytab specified in the --keytab option is only used when authenticating to initiate the re-enrollment. During the re-enrollment, IdM generates a new keytab for the client. 42.9.4. Testing an Identity Management client after installation The command line informs you that the ipa-client-install was successful, but you can also do your own test. To test that the Identity Management client can obtain information about users defined on the server, check that you are able to resolve a user defined on the server. For example, to check the default admin user: To test that authentication works correctly, su - as another IdM user: 42.10. Renaming Identity Management client systems The following sections describe how to change the host name of an Identity Management client system. Warning Renaming a client is a manual procedure. Do not perform it unless changing the host name is absolutely required. Renaming an Identity Management client involves: Preparing the host. For details, see Preparing an IdM client for its renaming . Uninstalling the IdM client from the host. For details, see Uninstalling an Identity Management client . Renaming the host. For details, see Renaming the host system . Installing the IdM client on the host with the new name. For details, see Installing an Identity Management client in Installing Identity Management . Configuring the host after the IdM client installation. For details, see Re-adding services, re-generating certificates, and re-adding host groups . 42.10.1. Preparing an IdM client for its renaming Before uninstalling the current client, make note of certain settings for the client. You will apply this configuration after re-enrolling the machine with a new host name.
Identify which services are running on the machine: Use the ipa service-find command, and identify services with certificates in the output: In addition, each host has a default host service which does not appear in the ipa service-find output. The service principal for the host service, also called a host principal , is host/ old-client-name.example.com . For all service principals displayed by ipa service-find old-client-name.example.com , determine the location of the corresponding keytabs on the old-client-name.example.com system: Each service on the client system has a Kerberos principal in the form service_name/host_name@REALM , such as ldap/ [email protected] . Identify all host groups to which the machine belongs. 42.10.2. Uninstalling an Identity Management client Uninstalling a client removes the client from the Identity Management domain, along with all of the specific Identity Management configuration of system services, such as System Security Services Daemon (SSSD). This restores the configuration of the client system. Procedure Run the ipa-client-install --uninstall command: Remove the DNS entries for the client host manually from the server: For each identified keytab other than /etc/krb5.keytab , remove the old principals: On an IdM server, remove the host entry. This removes all services and revokes all certificates issued for that host: 42.10.3. Renaming the host system Rename the machine as required. For example: You can now re-install the Identity Management client to the Identity Management domain with the new host name. 42.10.4. Re-adding services, re-generating certificates, and re-adding host groups Procedure You can re-add services, re-generate certificates, and re-add host groups on your Identity Management (IdM) server. On the Identity Management server, add a new keytab for every service identified in the Preparing an IdM client for its renaming . Generate certificates for services that had a certificate assigned in the Preparing an IdM client for its renaming . You can do this: Using the IdM administration tools Using the certmonger utility Re-add the client to the host groups identified in the Preparing an IdM client for its renaming . 42.11. Disabling and Re-enabling Host Entries This section describes how to disable and re-enable hosts in Identity Management (IdM). 42.11.1. Disabling Hosts Complete this procedure to disable a host entry in IdM. Domain services, hosts, and users can access an active host. There can be situations when it is necessary to remove an active host temporarily, for maintenance reasons, for example. Deleting the host in such situations is not desired as it removes the host entry and all the associated configuration permanently. Instead, choose the option of disabling the host. Disabling a host prevents domain users from accessing it without permanently removing it from the domain. Procedure Disable a host using the host-disable command. Disabling a host kills the host's current, active keytabs. For example: As a result of disabling a host, the host becomes unavailable to all IdM users, hosts and services. Important Disabling a host entry not only disables that host. It disables every configured service on that host as well. 42.11.2. Re-enabling Hosts Follow this procedure to re-enable a disabled IdM host. Disabling a host killed its active keytabs, which removed the host from the IdM domain without otherwise touching its configuration entry. 
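As a quick illustration of the disable step (the host name is hypothetical), an administrator can disable a host and then inspect its entry before re-enabling it:

# Authenticate as an administrator and disable the host
kinit admin
ipa host-disable client.example.com
# Review the host entry; a disabled host no longer has a usable keytab or certificate
ipa host-show client.example.com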
Procedure To re-enable a host, use the ipa-getkeytab command, adding: the -s option to specify which IdM server to request the keytab from the -p option to specify the principal name the -k option to specify the file to which to save the keytab. For example, to request a new host keytab from server.example.com for client.example.com , and store the keytab in the /etc/krb5.keytab file: Note You can also use the administrator's credentials, specifying -D "uid=admin,cn=users,cn=accounts,dc=example,dc=com" . It is important that the credentials correspond to a user allowed to create the keytab for the host. If the ipa-getkeytab command is run on an active IdM client or server, then it can be run without any LDAP credentials ( -D and -w ) if the user has a TGT obtained using, for example, kinit admin . To run the command directly on the disabled host, supply LDAP credentials to authenticate to the IdM server. 42.12. Delegating access to hosts and services By delegating access to hosts and services within an IdM domain, you can retrieve keytabs and certificates for another host or service. Each host and service has a managedby entry that lists what hosts and services can manage it. By default, a host can manage itself and all of its services. You can configure a host to manage other hosts, or services on other hosts within the IdM domain. Note When you delegate authority of a host to another host through a managedby entry, it does not automatically grant management rights for all services on that host. You must perform each delegation independently. Host and service delegation 42.12.1. Delegating service management You can delegate permissions to a host to manage a service on another host within the domain. When you delegate permissions to a host to manage another host, it does not automatically include permissions to manage its services. You must delegate service management independently. Procedure Delegate the management of a service to a specific host by using the service-add-host command: You must specify the service principal using the principal argument and the hosts with control using the --hosts option. For example: Once the host is delegated authority, the host principal can be used to manage the service: To generate a certificate for the delegated service, create a certificate request on the host with the delegated authority: Use the cert-request utility to submit the certificate request and load the certification information: Additional resources Managing certificates for users, hosts, and services using the integrated IdM CA 42.12.2. Delegating host management You can delegate authority for a host to manage another host by using the host-add-managedby utility. This creates a managedby entry. After the managedby entry is created, the managing host can retrieve a keytab for the host it manages. Procedure Log in as the admin user: Add the managedby entry. For example, this delegates authority over client2 to client1 : Obtain a ticket as the host client1 : Retrieve a keytab for client2 : 42.12.3. Accessing delegated services When a client has delegated authority, it can obtain a keytab for the principal on the local machine for both services and hosts. With the kinit command, use the -k option to load a keytab and the -t option to specify the keytab. The principal format is <principal>/hostname@REALM . For a service, replace <principal> with the service name, for example HTTP. For a host, use host as the principal. Procedure To access a host: To access a service: | [
"ipa host-add client1.example.com",
"ipa host-add --ip-address=192.168.166.31 client1.example.com",
"ipa host-add --force client1.example.com",
"ipa host-del --updatedns client1.example.com",
"ipa-client-install --force-join",
"User authorized to enroll computers: hostadmin Password for hostadmin @ EXAMPLE.COM :",
"ipa-client-install --keytab /tmp/krb5.keytab",
"[user@client1 ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client1 ~]USD su - idm_user Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0 [idm_user@client1 ~]USD",
"ipa service-find old-client-name.example.com",
"find / -name \"*.keytab\"",
"ipa hostgroup-find old-client-name.example.com",
"ipa-client-install --uninstall",
"ipa dnsrecord-del Record name: old-client-client Zone name: idm.example.com No option to delete specific record provided. Delete all? Yes/No (default No): true ------------------------ Deleted record \"old-client-name\"",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa host-del client.example.com",
"hostnamectl set-hostname new-client-name.example.com",
"ipa service-add service_name/new-client-name",
"kinit admin ipa host-disable client.example.com",
"ipa-getkeytab -s server.example.com -p host/client.example.com -k /etc/krb5.keytab -D \"cn=directory manager\" -w password",
"ipa service-add-host principal --hosts=<hostname>",
"ipa service-add HTTP/web.example.com ipa service-add-host HTTP/web.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com ipa-getkeytab -s server.example.com -k /tmp/test.keytab -p HTTP/web.example.com Keytab successfully retrieved and stored in: /tmp/test.keytab",
"kinit -kt /etc/krb5.keytab host/client1.example.com openssl req -newkey rsa:2048 -subj '/CN=web.example.com/O=EXAMPLE.COM' -keyout /etc/pki/tls/web.key -out /tmp/web.csr -nodes Generating a 2048 bit RSA private key .............................................................+++ ............................................................................................+++ Writing new private key to '/etc/pki/tls/private/web.key'",
"ipa cert-request --principal=HTTP/web.example.com web.csr Certificate: MIICETCCAXqgA...[snip] Subject: CN=web.example.com,O=EXAMPLE.COM Issuer: CN=EXAMPLE.COM Certificate Authority Not Before: Tue Feb 08 18:51:51 2011 UTC Not After: Mon Feb 08 18:51:51 2016 UTC Serial number: 1005",
"kinit admin",
"ipa host-add-managedby client2.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com",
"ipa-getkeytab -s server.example.com -k /tmp/client2.keytab -p host/client2.example.com Keytab successfully retrieved and stored in: /tmp/client2.keytab",
"kinit -kt /etc/krb5.keytab host/[email protected]",
"kinit -kt /etc/httpd/conf/krb5.keytab HTTP/[email protected]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-hosts-cli_managing-users-groups-hosts |
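The host entry options described above can also support bulk provisioning. The sketch below, with hypothetical host names and a placeholder password, shows one common pattern: creating the host entry with a random one-time password on an IdM server and then enrolling the client with that password instead of an administrator's credentials.

# On an IdM server: create the host entry and generate a random one-time password
ipa host-add client1.example.com --random
# On the client: enroll using the generated one-time password
ipa-client-install --password <one-time-password> --unattended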
Chapter 5. Camel K quick start developer tutorials | Chapter 5. Camel K quick start developer tutorials Red Hat Integration - Camel K provides quick start developer tutorials based on integration use cases available from https://github.com/openshift-integration . This chapter provides details on how to set up and deploy the following tutorials: Section 5.1, "Deploying a basic Camel K Java integration" Section 5.2, "Deploying a Camel K Serverless integration with Knative" Section 5.3, "Deploying a Camel K transformations integration" Section 5.4, "Deploying a Camel K Serverless event streaming integration" Section 5.5, "Deploying a Camel K Serverless API-based integration" Section 5.6, "Deploying a Camel K SaaS integration" Section 5.7, "Deploying a Camel K JDBC integration" Section 5.8, "Deploying a Camel K JMS integration" Section 5.9, "Deploying a Camel K Kafka integration" 5.1. Deploying a basic Camel K Java integration This tutorial demonstrates how to run a simple Java integration in the cloud on OpenShift, apply configuration and routing to an integration, and run an integration as a Kubernetes CronJob. Prerequisites See the tutorial readme in GitHub . You must have installed the Camel K operator and the kamel CLI. See Installing Camel K . Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository. USD git clone [email protected]:openshift-integration/camel-k-example-basic.git In VS Code, select File Open Folder camel-k-example-basic . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying basic Camel K Java integration . Additional resources Developing Camel K integrations in Java 5.2. Deploying a Camel K Serverless integration with Knative This tutorial demonstrates how to deploy Camel K integrations with OpenShift Serverless in an event-driven architecture. This tutorial uses a Knative Eventing broker to communicate using an event publish-subscribe pattern in a Bitcoin trading demonstration. This tutorial also shows how to use Camel K integrations to connect to a Knative event mesh with multiple external systems. The Camel K integrations also use Knative Serving to automatically scale up and down to zero as needed. Prerequisites See the tutorial readme in GitHub . You must have cluster administrator access to an OpenShift cluster to install Camel K and OpenShift Serverless: Installing Camel K Installing OpenShift Serverless from the OperatorHub Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository: USD git clone [email protected]:openshift-integration/camel-k-example-knative.git In VS Code, select File Open Folder camel-k-example-knative . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. If you do not have VS Code installed, you can manually enter the commands from deploying Camel K Knative integration . Additional resources About Knative Eventing About Knative Serving 5.3. 
Deploying a Camel K transformations integration This tutorial demonstrates how to run a Camel K Java integration on OpenShift that transforms data such as XML to JSON, and stores it in a database such as PostgreSQL. The tutorial example uses a CSV file to query an XML API and uses the data collected to build a valid GeoJSON file, which is stored in a PostgreSQL database. Prerequisites See the tutorial readme in GitHub . You must have cluster administrator access to an OpenShift cluster to install Camel K. See Installing Camel K . You must follow the instructions in the tutorial readme to install Crunchy Postgres for Kubernetes, which is required on your OpenShift cluster. Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository: USD git clone [email protected]:openshift-integration/camel-k-example-transformations.git In VS Code, select File Open Folder camel-k-example-transformations . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. If you do not have VS Code installed, you can manually enter the commands from deploying Camel K transformations integration . Additional resources https://operatorhub.io/operator/postgresql https://geojson.org/ 5.4. Deploying a Camel K Serverless event streaming integration This tutorial demonstrates using Camel K and OpenShift Serverless with Knative Eventing for an event-driven architecture. The tutorial shows how to install Camel K and Serverless with Knative in an AMQ Streams cluster with an AMQ Broker cluster, and how to deploy an event streaming project to run a global hazard alert demonstration application. Prerequisites See the tutorial readme in GitHub . You must have cluster administrator access to an OpenShift cluster to install Camel K and OpenShift Serverless: Installing Camel K Installing OpenShift Serverless from the OperatorHub You must follow the instructions in the tutorial readme to install the additional required Operators on your OpenShift cluster: AMQ Streams Operator AMQ Broker Operator Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository. USD git clone [email protected]:openshift-integration/camel-k-example-event-streaming.git In VS Code, select File Open Folder camel-k-example-event-streaming . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying Camel K event stream integration . Additional resources Red Hat AMQ documentation OpenShift Serverless documentation 5.5. Deploying a Camel K Serverless API-based integration This tutorial demonstrates using Camel K and OpenShift Serverless with Knative Serving for an API-based integration, and managing an API with 3scale API Management on OpenShift. The tutorial shows how to configure Amazon S3-based storage, design an OpenAPI definition, and run an integration that calls the demonstration API endpoints. Prerequisites See the tutorial readme in GitHub . 
You must have cluster administrator access to an OpenShift cluster to install Camel K and OpenShift Serverless: Installing Camel K Installing OpenShift Serverless from the OperatorHub You can also install the optional Red Hat Integration - 3scale Operator on your OpenShift system to manage the API. See Deploying 3scale using the Operator . Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository. USD git clone [email protected]:openshift-integration/camel-k-example-api.git In VS Code, select File Open Folder camel-k-example-api . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying Camel K API integration . Additional resources Red Hat 3scale API Management documentation OpenShift Serverless documentation 5.6. Deploying a Camel K SaaS integration This tutorial demonstrates how to run a Camel K Java integration on OpenShift that connects two widely-used Software as a Service (SaaS) providers. The tutorial example shows how to integrate the Salesforce and ServiceNow SaaS providers using REST-based Camel components. In this simple example, each new Salesforce Case is copied to a corresponding ServiceNow Incident that includes the Salesforce Case Number. Prerequisites See the tutorial readme in GitHub . You must have cluster administrator access to an OpenShift cluster to install Camel K. See Installing Camel K . You must have Salesforce login credentials and ServiceNow login credentials. Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository: USD git clone [email protected]:openshift-integration/camel-k-example-saas.git In VS Code, select File Open Folder camel-k-example-saas . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions . If you do not have VS Code installed, you can manually enter the commands from deploying Camel K SaaS integration . Additional resources https://www.salesforce.com/ https://www.servicenow.com/ 5.7. Deploying a Camel K JDBC integration This tutorial demonstrates how to get started with Camel K and an SQL database via JDBC drivers. This tutorial shows how to set up an integration producing data into a Postgres database (you can use any relational database of your choice) and also how to read data from the same database. Prerequisites See the tutorial readme in GitHub . You must have cluster administrator access to an OpenShift cluster to install Camel K. Installing Camel K You must follow the instructions in the tutorial readme to install Crunchy Postgres for Kubernetes, which is required on your OpenShift cluster. Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository. USD git clone [email protected]:openshift-integration/camel-k-example-jdbc.git In VS Code, select File Open Folder camel-k-example-jdbc . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. 
Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying Camel K JDBC integration . 5.8. Deploying a Camel K JMS integration This tutorial demonstrates how to use JMS to connect to a message broker in order to consume and produce messages. There are two examples: JMS Sink: this tutorial demonstrates how to produce a message to a JMS broker. JMS Source: this tutorial demonstrates how to consume a message from a JMS broker. Prerequisites See the tutorial readme in GitHub . You must have cluster administrator access to an OpenShift cluster to install Camel K. Installing Camel K Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository: USD git clone [email protected]:openshift-integration/camel-k-example-jms.git In VS Code, select File Open Folder camel-k-example-jms . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. If you do not have VS Code installed, you can manually enter the commands from deploying Camel K JMS integration . Additional resources JMS Sink JMS Source 5.9. Deploying a Camel K Kafka integration This tutorial demonstrates how to use Camel K with Apache Kafka. This tutorial demonstrates how to set up a Kafka Topic via Red Hat OpenShift Streams for Apache Kafka and to use it in conjunction with Camel K. Prerequisites See the tutorial readme in GitHub . You must have cluster administrator access to an OpenShift cluster to install Camel K. Installing Camel K Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment . Procedure Clone the tutorial Git repository: USD git clone [email protected]:openshift-integration/camel-k-example-kafka.git In VS Code, select File Open Folder camel-k-example-kafka . In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions. Follow the tutorial instructions. If you do not have VS Code installed, you can manually enter the commands from deploying Camel K Kafka integration . | [
"git clone [email protected]:openshift-integration/camel-k-example-basic.git",
"git clone [email protected]:openshift-integration/camel-k-example-knative.git",
"git clone [email protected]:openshift-integration/camel-k-example-transformations.git",
"git clone [email protected]:openshift-integration/camel-k-example-event-streaming.git",
"git clone [email protected]:openshift-integration/camel-k-example-api.git",
"git clone [email protected]:openshift-integration/camel-k-example-saas.git",
"git clone [email protected]:openshift-integration/camel-k-example-jdbc.git",
"git clone [email protected]:openshift-integration/camel-k-example-jms.git",
"git clone [email protected]:openshift-integration/camel-k-example-kafka.git"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/getting_started_with_camel_k/deploying-camel-k-tutorials |
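If you prefer to work outside of VS Code, the tutorials can also be run directly with the kamel CLI after cloning a repository. A minimal sketch, using the basic tutorial and a hypothetical integration file name:

# Clone the tutorial and run the integration in development mode in the current OpenShift project
git clone [email protected]:openshift-integration/camel-k-example-basic.git
cd camel-k-example-basic
kamel run Basic.java --dev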
5.2. Creating a Cluster | 5.2. Creating a Cluster Create a cluster with the gluster service enabled. Figure 5.2. New Cluster Window Select the Compatibility Version from the drop-down menu. Click OK . Important While creating a cluster, use iptables for the option Firewall Type . | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/creating_a_cluster |
function::proc_mem_txt | function::proc_mem_txt Name function::proc_mem_txt - Program text (code) size in pages Synopsis Arguments None Description Returns the current process text (code) size in pages, or zero when there is no current process or the number of pages couldn't be retrieved. | [
"proc_mem_txt:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-txt |
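As a usage sketch (not part of the tapset reference itself), the function can be called from any probe that runs in process context, for example to print the text size of each program that calls exec:

# Report the caller's text (code) size whenever execve is invoked
stap -e 'probe syscall.execve { printf("%s txt=%d pages\n", execname(), proc_mem_txt()) }'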
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the certificate authority (CA) to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide . Important In order to perform stop and start node operations, or to create or add a machine pool, it is necessary to apply proper labeling. For example: Replace <cluster-name> with the cluster name and <machinepool-name> with the machine pool name. | [
"rosa edit machinepool --cluster <cluster-name> --labels cluster.ocs.openshift.io/openshift-storage=\"\" <machinepool-name>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/preparing_to_deploy_openshift_data_foundation |
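For example, with a hypothetical cluster name and machine pool name, the labeling step and a quick verification might look like this:

# Label the machine pool that will host OpenShift Data Foundation
rosa edit machinepool --cluster my-cluster --labels cluster.ocs.openshift.io/openshift-storage="" odf-pool
# Confirm that the label has been applied
rosa list machinepools --cluster my-cluster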
Configure | Configure builds for Red Hat OpenShift 1.1 Configuring Builds Red Hat OpenShift Documentation Team | [
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build annotations: build.shipwright.io/verify.repository: \"true\" spec: source: git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-build spec: source: git: url: https://github.com/sclorg/nodejs-ex cloneSecret: source-repository-credentials",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-custom-context-dockerfile spec: source: git: url: https://github.com/userjohn/npm-simple contextDir: docker-build",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: https://github.com/shipwright-io/sample-go revision: v0.1.0",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build env: - name: <example_var_1> value: \"<example_value_1>\" - name: <example_var_2> value: \"<example_value_2>\"",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-build spec: strategy: name: buildah kind: ClusterBuildStrategy",
"apiVersion: shipwright.io/v1beta1 kind: ClusterBuildStrategy metadata: name: buildah spec: parameters: - name: build-args description: \"The values for the args in the Dockerfile. Values must be in the format KEY=VALUE.\" type: array defaults: [] # - name: storage-driver description: \"The storage driver to use, such as 'overlay' or 'vfs'.\" type: string default: \"vfs\" steps:",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: storage-driver value: \"overlay\" strategy: name: buildah kind: ClusterBuildStrategy output: #",
"apiVersion: v1 kind: ConfigMap metadata: name: buildah-configuration namespace: <your_namespace> data: storage-driver: overlay",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: storage-driver configMapValue: name: buildah-configuration key: storage-driver strategy: name: buildah kind: ClusterBuildStrategy output: #",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: storage-driver configMapValue: name: buildah-configuration key: storage-driver - name: registries-search values: - value: registry.redhat.io strategy: name: buildah kind: ClusterBuildStrategy output: #",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: storage-driver configMapValue: name: buildah-configuration key: storage-driver - name: registries-block values: - secretValue: 1 name: registry-configuration key: reg-blocked strategy: name: buildah kind: ClusterBuildStrategy output: #",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build strategy: name: buildah kind: ClusterBuildStrategy paramValues: - name: dockerfile value: Dockerfile",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: git: url: https://github.com/shipwright-io/sample-nodejs contextDir: source-build/ strategy: name: source-to-image kind: ClusterBuildStrategy paramValues: - name: builder-image value: docker.io/centos/nodejs-10-centos7",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: git: url: https://github.com/shipwright-io/sample-nodejs contextDir: source-build/ strategy: name: source-to-image kind: ClusterBuildStrategy paramValues: - name: builder-image value: docker.io/centos/nodejs-10-centos7 output: image: image-registry.openshift-image-registry.svc:5000/build-examples/nodejs-ex",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: git: url: https://github.com/shipwright-io/sample-nodejs contextDir: source-build/ strategy: name: source-to-image kind: ClusterBuildStrategy paramValues: - name: builder-image value: docker.io/centos/nodejs-10-centos7 output: image: us.icr.io/source-to-image-build/nodejs-ex pushSecret: icr-knbuild",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: s2i-nodejs-build spec: source: git: url: https://github.com/shipwright-io/sample-nodejs contextDir: source-build/ strategy: name: source-to-image kind: ClusterBuildStrategy paramValues: - name: builder-image value: docker.io/centos/nodejs-10-centos7 output: image: us.icr.io/source-to-image-build/nodejs-ex pushSecret: icr-knbuild annotations: \"org.opencontainers.image.source\": \"https://github.com/org/repo\" \"org.opencontainers.image.url\": \"https://my-company.com/images\" labels: \"maintainer\": \"[email protected]\" \"description\": \"This is my cool image\"",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: build-retention-ttl spec: source: git: url: \"https://github.com/shipwright-io/sample-go\" contextDir: docker-build strategy: kind: ClusterBuildStrategy name: buildah output: # retention: ttlAfterFailed: 30m ttlAfterSucceeded: 1h failedLimit: 10 succeededLimit: 20 #",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <build_name> spec: source: git: url: https://github.com/example/url strategy: name: buildah kind: ClusterBuildStrategy paramValues: - name: dockerfile value: Dockerfile output: image: registry/namespace/image:latest volumes: - name: <your_volume_name> configMap: name: <your_configmap_name>",
"apiVersion: shipwright.io/v1beta1 kind: ClusterBuildStrategy metadata: name: <cluster_build_strategy_name> # spec: parameters: - name: tool-args description: Parameters for the tool type: array steps: - name: a-step command: - some-tool args: - --tool-args - USD(params.tool-args[*])",
"apiVersion: shipwright.io/v1beta1 kind: ClusterBuildStrategy metadata: name: buildah-small spec: steps: - name: build-and-push image: quay.io/containers/buildah:v1.31.0 workingDir: USD(params.shp-source-root) securityContext: capabilities: add: - \"SETFCAP\" command: - /bin/bash args: - -c - | set -euo pipefail # Parse parameters # # That's the separator between the shell script and its args - -- - --context - USD(params.shp-source-context) - --dockerfile - USD(build.dockerfile) - --image - USD(params.shp-output-image) - --build-args - USD(params.build-args[*]) - --registries-block - USD(params.registries-block[*]) - --registries-insecure - USD(params.registries-insecure[*]) - --registries-search - USD(params.registries-search[*]) resources: limits: cpu: 250m memory: 65Mi requests: cpu: 250m memory: 65Mi parameters: - name: build-args description: \"The values for the args in the Dockerfile. Values must be in the format KEY=VALUE.\" type: array defaults: [] #",
"apiVersion: shipwright.io/v1beta1 kind: ClusterBuildStrategy metadata: name: buildah-medium spec: steps: - name: build-and-push image: quay.io/containers/buildah:v1.31.0 workingDir: USD(params.shp-source-root) securityContext: capabilities: add: - \"SETFCAP\" command: - /bin/bash args: - -c - | set -euo pipefail # Parse parameters # # That's the separator between the shell script and its args - -- - --context - USD(params.shp-source-context) - --dockerfile - USD(build.dockerfile) - --image - USD(params.shp-output-image) - --build-args - USD(params.build-args[*]) - --registries-block - USD(params.registries-block[*]) - --registries-insecure - USD(params.registries-insecure[*]) - --registries-search - USD(params.registries-search[*]) resources: limits: cpu: 500m memory: 1Gi requests: cpu: 500m memory: 1Gi parameters: - name: build-args description: \"The values for the args in the Dockerfile. Values must be in the format KEY=VALUE.\" type: array defaults: [] #",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-medium spec: source: git: url: https://github.com/shipwright-io/sample-go contextDir: docker-build strategy: name: buildah-medium kind: ClusterBuildStrategy #",
"apiVersion: shipwright.io/v1beta1 kind: ClusterBuildStrategy metadata: name: <cluster_build_strategy_name> annotations: container.apparmor.security.beta.kubernetes.io/step-build-and-push: unconfined container.seccomp.security.alpha.kubernetes.io/step-build-and-push: unconfined spec: #",
"apiVersion: shipwright.io/v1beta1 kind: BuildStrategy metadata: name: sample-strategy spec: parameters: - name: sample-parameter description: A sample parameter type: string steps: - name: sample-step env: - name: PARAM_SAMPLE_PARAMETER value: USD(params.sample-parameter) command: - /bin/bash args: - -c - | set -euo pipefail some-tool --sample-argument \"USD{PARAM_SAMPLE_PARAMETER}\"",
"apiVersion: shipwright.io/v1beta1 kind: BuildStrategy metadata: name: sample-strategy spec: parameters: - name: sample-parameter description: A sample parameter type: string steps: - name: sample-step command: - /bin/bash args: - -c - | set -euo pipefail SAMPLE_PARAMETER=\"USD1\" some-tool --sample-argument \"USD{SAMPLE_PARAMETER}\" - -- - USD(params.sample-parameter)",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun status: # output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 #",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun status: # failureDetails: location: container: step-source-default pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr message: The source repository does not exist, or you have insufficient permission to access it. reason: GitRemotePrivate",
"apiVersion: shipwright.io/v1beta1 kind: BuildStrategy metadata: name: buildah spec: steps: - name: build image: quay.io/containers/buildah:v1.23.3 # volumeMounts: - name: varlibcontainers mountPath: /var/lib/containers volumes: - name: varlibcontainers overridable: true emptyDir: {}",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: standalone-buildrun spec: build: spec: source: git: url: https://github.com/shipwright-io/sample-go.git contextDir: source-build strategy: kind: ClusterBuildStrategy name: buildah output: image: <path_to_image>",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: cache value: disabled strategy: name: <your_strategy> kind: ClusterBuildStrategy source: # output: #",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <your_buildrun> namespace: <your_namespace> spec: build: name: <your_build> paramValues: - name: cache value: registry",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build serviceAccount: pipeline 1",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buidrun-retention-ttl spec: build: name: build-retention-ttl retention: ttlAfterFailed: 10m ttlAfterSucceeded: 10m",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <buildrun_name> spec: build: name: <build_name> volumes: - name: <volume_name> configMap: name: <configmap_name>",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <example_var_1> value: \"<example_value_1>\" - name: <example_var_2> value: \"<example_value_2>\"",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <pod_name> valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: MEMORY_LIMIT valueFrom: resourceFieldRef: containerName: <my_container> resource: limits.memory",
"oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r Unknown Unknown 1s",
"oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r True Succeeded 29m 20m",
"status: # failureDetails: location: container: step-source-default pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr message: The source repository does not exist, or you have insufficient permission to access it. reason: GitRemotePrivate",
"status: buildSpec: # output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default git: commitAuthor: xxx xxxxxx commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde branchName: main",
"status: buildSpec: # output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default bundle: digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: # [...] state: \"BuildRunCanceled\""
] | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html-single/configure/index |
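The Build and BuildRun manifests shown above are ordinary Kubernetes resources, so they can be managed with oc. A minimal sketch, assuming the YAML has been saved to local files with hypothetical names:

# Create the Build, then trigger it with a BuildRun and watch its progress
oc apply -f build.yaml
oc create -f buildrun.yaml
oc get buildruns -w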
4.10. Configuring Automated Unlocking of Encrypted Volumes using Policy-Based Decryption | 4.10. Configuring Automated Unlocking of Encrypted Volumes using Policy-Based Decryption The Policy-Based Decryption (PBD) is a collection of technologies that enable unlocking encrypted root and secondary volumes of hard drives on physical and virtual machines using different methods like a user password, a Trusted Platform Module (TPM) device, a PKCS#11 device connected to a system, for example, a smart card, or with the help of a special network server. The PBD as a technology allows combining different unlocking methods into a policy creating an ability to unlock the same volume in different ways. The current implementation of the PBD in Red Hat Enterprise Linux consists of the Clevis framework and plugins called pins. Each pin provides a separate unlocking capability. For now, the only two pins available are the ones that allow volumes to be unlocked with TPM or with a network server. The Network Bound Disc Encryption (NBDE) is a subcategory of the PBD technologies that allows binding the encrypted volumes to a special network server. The current implementation of the NBDE includes Clevis pin for Tang server and the Tang server itself. 4.10.1. Network-Bound Disk Encryption The Network-Bound Disk Encryption (NBDE) allows the user to encrypt root volumes of hard drives on physical and virtual machines without requiring to manually enter a password when systems are restarted. In Red Hat Enterprise Linux 7, NBDE is implemented through the following components and technologies: Figure 4.2. The Network-Bound Disk Encryption using Clevis and Tang Tang is a server for binding data to network presence. It makes a system containing your data available when the system is bound to a certain secure network. Tang is stateless and does not require TLS or authentication. Unlike escrow-based solutions, where the server stores all encryption keys and has knowledge of every key ever used, Tang never interacts with any client keys, so it never gains any identifying information from the client. Clevis is a pluggable framework for automated decryption. In NBDE, Clevis provides automated unlocking of LUKS volumes. The clevis package provides the client side of the feature. A Clevis pin is a plug-in into the Clevis framework. One of such pins is a plug-in that implements interactions with the NBDE server - Tang. Clevis and Tang are generic client and server components that provide network-bound encryption. In Red Hat Enterprise Linux 7, they are used in conjunction with LUKS to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption. Both client- and server-side components use the Jose library to perform encryption and decryption operations. When you begin provisioning NBDE, the Clevis pin for Tang server gets a list of the Tang server's advertised asymmetric keys. Alternatively, since the keys are asymmetric, a list of Tang's public keys can be distributed out of band so that clients can operate without access to the Tang server. This mode is called offline provisioning . The Clevis pin for Tang uses one of the public keys to generate a unique, cryptographically-strong encryption key. Once the data is encrypted using this key, the key is discarded. The Clevis client should store the state produced by this provisioning operation in a convenient location. This process of encrypting data is the provisioning step . 
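As an illustration of the offline provisioning mode mentioned above, the Tang advertisement can be downloaded once and passed to the Clevis pin so that data can be encrypted without contacting the server; the server URL follows the http://tang.srv placeholder used elsewhere in this section:

# Download the advertisement while the Tang server is reachable
curl -sf http://tang.srv/adv -o adv.jws
# Encrypt against the stored advertisement; decryption later requires the Tang server
clevis encrypt tang '{"url":"http://tang.srv","adv":"adv.jws"}' < PLAINTEXT > PLAINTEXT.jwe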
The provisioning state for NBDE is stored in the LUKS header leveraging the luksmeta package. When the client is ready to access its data, it loads the metadata produced in the provisioning step and it responds to recover the encryption key. This process is the recovery step . In NBDE, Clevis binds a LUKS volume using a pin so that it can be automatically unlocked. After successful completion of the binding process, the disk can be unlocked using the provided Dracut unlocker. All LUKS-encrypted devices, such as those with the /tmp , /var , and /usr/local/ directories, that contain a file system requiring to start before the network connection is established are considered to be root volumes . Additionally, all mount points that are used by services run before the network is up, such as /var/log/ , var/log/audit/ , or /opt , also require to be mounted early after switching to a root device. You can also identify a root volume by not having the _netdev option in the /etc/fstab file. 4.10.2. Installing an Encryption Client - Clevis To install the Clevis pluggable framework and its pins on a machine with an encrypted volume (client), enter the following command as root : To decrypt data, use the clevis decrypt command and provide the cipher text (JWE): For more information, see the built-in CLI help: 4.10.3. Deploying a Tang Server with SELinux in Enforcing Mode Red Hat Enterprise Linux 7.7 and newer provides the tangd_port_t SELinux type, and a Tang server can be deployed as a confined service in SELinux enforcing mode. Prerequisites The policycoreutils-python-utils package and its dependencies are installed. Procedure To install the tang package and its dependencies, enter the following command as root : Pick an unoccupied port, for example, 7500/tcp , and allow the tangd service to bind to that port: Note that a port can be used only by one service at a time, and thus an attempt to use an already occupied port implies the ValueError: Port already defined error message. Open the port in the firewall: Enable the tangd service using systemd: Create an override file: In the following editor screen, which opens an empty override.conf file located in the /etc/systemd/system/tangd.socket.d/ directory, change the default port for the Tang server from 80 to the previously picked number by adding the following lines: Save the file and exit the editor. Reload the changed configuration and start the tangd service: Check that your configuration is working: Start the tangd service: Because tangd uses the systemd socket activation mechanism, the server starts as soon as the first connection comes in. A new set of cryptographic keys is automatically generated at the first start. To perform cryptographic operations such as manual key generation, use the jose utility. Enter the jose -h command or see the jose(1) man pages for more information. Example 4.4. Rotating Tang Keys It is important to periodically rotate your keys. The precise interval at which you should rotate them depends upon your application, key sizes, and institutional policy. For some common recommendations, see the Cryptographic Key Length Recommendation page. To rotate keys, start with the generation of new keys in the key database directory, typically /var/db/tang . For example, you can create new signature and exchange keys with the following commands: Rename the old keys to have a leading . to hide them from advertisement. 
Note that the file names in the following example differs from real and unique file names in the key database directory. Tang immediately picks up all changes. No restart is required. At this point, new client bindings pick up the new keys and old clients can continue to utilize the old keys. When you are sure that all old clients use the new keys, you can remove the old keys. Warning Be aware that removing the old keys while clients are still using them can result in data loss. 4.10.3.1. Deploying High-Availability Systems Tang provides two methods for building a high-availability deployment: Client Redundancy (Recommended) Clients should be configured with the ability to bind to multiple Tang servers. In this setup, each Tang server has its own keys and clients are able to decrypt by contacting a subset of these servers. Clevis already supports this workflow through its sss plug-in. For more information about this setup, see the following man pages: tang(8) , section High Availability clevis(1) , section Shamir's Secret Sharing clevis-encrypt-sss(1) Red Hat recommends this method for a high-availability deployment. Key Sharing For redundancy purposes, more than one instance of Tang can be deployed. To set up a second or any subsequent instance, install the tang packages and copy the key directory to the new host using rsync over SSH. Note that Red Hat does not recommend this method because sharing keys increases the risk of key compromise and requires additional automation infrastructure. 4.10.4. Deploying an Encryption Client for an NBDE system with Tang Prerequisites The Clevis framework is installed. See Section 4.10.2, "Installing an Encryption Client - Clevis" A Tang server or its downloaded advertisement is available. See Section 4.10.3, "Deploying a Tang Server with SELinux in Enforcing Mode" Procedure To bind a Clevis encryption client to a Tang server, use the clevis encrypt tang sub-command: Change the http://tang.srv URL in the example to match the URL of the server where tang is installed. The JWE output file contains your encrypted cipher text. This cipher text is read from the PLAINTEXT input file. To decrypt data, use the clevis decrypt command and provide the cipher text (JWE): For more information, see the clevis-encrypt-tang(1) man page or use the built-in CLI help: 4.10.5. Deploying an Encryption Client with a TPM 2.0 Policy On systems with the 64-bit Intel or 64-bit AMD architecture, to deploy a client that encrypts using a Trusted Platform Module 2.0 (TPM 2.0) chip, use the clevis encrypt tpm2 sub-command with the only argument in form of the JSON configuration object: To choose a different hierarchy, hash, and key algorithms, specify configuration properties, for example: To decrypt the data, provide the ciphertext (JWE): The pin also supports sealing data to a Platform Configuration Registers (PCR) state. That way the data can only be unsealed if the PCRs hashes values match the policy used when sealing. For example, to seal the data to the PCR with index 0 and 1 for the SHA1 bank: For more information and the list of possible configuration properties, see the clevis-encrypt-tpm2(1) man page. 4.10.6. Configuring Manual Enrollment of Root Volumes To automatically unlock an existing LUKS-encrypted root volume, install the clevis-luks subpackage and bind the volume to a Tang server using the clevis luks bind command: This command performs four steps: Creates a new key with the same entropy as the LUKS master key. Encrypts the new key with Clevis. 
Stores the Clevis JWE object in the LUKS header with LUKSMeta. Enables the new key for use with LUKS. This disk can now be unlocked with your existing password as well as with the Clevis policy. For more information, see the clevis-luks-bind(1) man page. Note The binding procedure assumes that there is at least one free LUKS password slot. The clevis luks bind command takes one of the slots. To verify that the Clevis JWE object is successfully placed in a LUKS header, use the luksmeta show command: To enable the early boot system to process the disk binding, enter the following commands on an already installed system: Important To use NBDE for clients with static IP configuration (without DHCP), pass your network configuration to the dracut tool manually, for example: Alternatively, create a .conf file in the /etc/dracut.conf.d/ directory with the static network information. For example: Regenerate the initial RAM disk image: See the dracut.cmdline(7) man page for more information. 4.10.7. Configuring Automated Enrollment Using Kickstart Clevis can integrate with Kickstart to provide a fully automated enrollment process. Instruct Kickstart to partition the disk such that LUKS encryption is enabled for all mount points, other than /boot , with a temporary password. The password is temporary for this step of the enrollment process. Note that OSPP-compliant systems require a more complex configuration, for example: Install the related Clevis packages by listing them in the %packages section: Call clevis luks bind to perform binding in the %post section. Afterward, remove the temporary password: In the above example, note that we specify the thumbprint that we trust on the Tang server as part of our binding configuration, enabling binding to be completely non-interactive. You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server. For more information on Kickstart installations, see the Red Hat Enterprise Linux 7 Installation Guide . For information on Linux Unified Key Setup-on-disk-format (LUKS), see Section 4.9.1, "Using LUKS Disk Encryption" . 4.10.8. Configuring Automated Unlocking of Removable Storage Devices To automatically unlock a LUKS-encrypted removable storage device, such as a USB drive, install the clevis-udisks2 package: Reboot the system, and then perform the binding step using the clevis luks bind command as described in Section 4.10.6, "Configuring Manual Enrollment of Root Volumes" , for example: The LUKS-encrypted removable device can now be unlocked automatically in your GNOME desktop session. The device bound to a Clevis policy can also be unlocked by the clevis luks unlock command: You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server. 4.10.9. Configuring Automated Unlocking of Non-root Volumes at Boot Time To use NBDE to also unlock LUKS-encrypted non-root volumes, perform the following steps: Install the clevis-systemd package: Enable the Clevis unlocker service: Perform the binding step using the clevis luks bind command as described in Section 4.10.6, "Configuring Manual Enrollment of Root Volumes" . To set up the encrypted block device during system boot, add the corresponding line with the _netdev option to the /etc/crypttab configuration file. See the crypttab(5) man page for more information. Add the volume to the list of accessible filesystems in the /etc/fstab file. Use the _netdev option in this configuration file, too. See the fstab(5) man page for more information. 4.10.10.
Deploying Virtual Machines in an NBDE Network The clevis luks bind command does not change the LUKS master key. This implies that if you create a LUKS-encrypted image for use in a virtual machine or cloud environment, all the instances that run this image will share a master key. This is extremely insecure and should be avoided at all times. This is not a limitation of Clevis but a design principle of LUKS. If you wish to have encrypted root volumes in a cloud, you need to make sure that you perform the installation process (usually using Kickstart) for each instance of Red Hat Enterprise Linux in a cloud as well. The images cannot be shared without also sharing a LUKS master key. If you intend to deploy automated unlocking in a virtualized environment, Red Hat strongly recommends that you use systems such as lorax or virt-install together with a Kickstart file (see Section 4.10.7, "Configuring Automated Enrollment Using Kickstart" ) or another automated provisioning tool to ensure that each encrypted VM has a unique master key. 4.10.11. Building Automatically-enrollable VM Images for Cloud Environments using NBDE Deploying automatically-enrollable encrypted images in a cloud environment can provide a unique set of challenges. Like other virtualization environments, it is recommended to reduce the number of instances started from a single image to avoid sharing the LUKS master key. Therefore, the best practice is to create customized images that are not shared in any public repository and that provide a base for the deployment of a limited number of instances. The exact number of instances to create should be defined by the deployment's security policies and based on the risk tolerance associated with the LUKS master key attack vector. To build LUKS-enabled automated deployments, systems such as Lorax or virt-install together with a Kickstart file should be used to ensure master key uniqueness during the image building process. Cloud environments enable two Tang server deployment options which we consider here. First, the Tang server can be deployed within the cloud environment itself. Second, the Tang server can be deployed outside of the cloud on independent infrastructure with a VPN link between the two infrastructures. Deploying Tang natively in the cloud does allow for easy deployment. However, because it shares infrastructure with the data persistence layer that holds the ciphertext of other systems, it may be possible for both the Tang server's private key and the Clevis metadata to be stored on the same physical disk. Access to this physical disk permits a full compromise of the ciphertext data. Important For this reason, Red Hat strongly recommends maintaining a physical separation between the location where the data is stored and the system where Tang is running. This separation between the cloud and the Tang server ensures that the Tang server's private key cannot be accidentally combined with the Clevis metadata. It also provides local control of the Tang server if the cloud infrastructure is at risk. 4.10.12. Additional Resources The How to set up Network Bound Disk Encryption with multiple LUKS devices (Clevis+Tang unlocking) Knowledgebase article. For more information, see the following man pages: tang(8) clevis(1) jose(1) clevis-luks-unlockers(1) tang-nagios(1) | [
"~]# yum install clevis",
"~]USD clevis decrypt < JWE > PLAINTEXT",
"~]USD clevis Usage: clevis COMMAND [OPTIONS] clevis decrypt Decrypts using the policy defined at encryption time clevis encrypt http Encrypts using a REST HTTP escrow server policy clevis encrypt sss Encrypts using a Shamir's Secret Sharing policy clevis encrypt tang Encrypts using a Tang binding server policy clevis encrypt tpm2 Encrypts using a TPM2.0 chip binding policy ~]USD clevis decrypt Usage: clevis decrypt < JWE > PLAINTEXT Decrypts using the policy defined at encryption time ~]USD clevis encrypt tang Usage: clevis encrypt tang CONFIG < PLAINTEXT > JWE Encrypts using a Tang binding server policy This command uses the following configuration properties: url: <string> The base URL of the Tang server (REQUIRED) thp: <string> The thumbprint of a trusted signing key adv: <string> A filename containing a trusted advertisement adv: <object> A trusted advertisement (raw JSON) Obtaining the thumbprint of a trusted signing key is easy. If you have access to the Tang server's database directory, simply do: USD jose jwk thp -i USDDBDIR/USDSIG.jwk Alternatively, if you have certainty that your network connection is not compromised (not likely), you can download the advertisement yourself using: USD curl -f USDURL/adv > adv.jws",
"~]# yum install tang",
"~]# semanage port -a -t tangd_port_t -p tcp 7500",
"~]# firewall-cmd --add-port= 7500/tcp ~]# firewall-cmd --runtime-to-permanent",
"~]# systemctl enable tangd.socket Created symlink from /etc/systemd/system/multi-user.target.wants/tangd.socket to /usr/lib/systemd/system/tangd.socket.",
"~]# systemctl edit tangd.socket",
"[Socket] ListenStream= ListenStream= 7500",
"~]# systemctl daemon-reload",
"~]# systemctl show tangd.socket -p Listen Listen=[::]:7500 (Stream)",
"~]# systemctl start tangd.socket",
"~]# DB=/var/db/tang ~]# jose jwk gen -i '{\"alg\":\"ES512\"}' -o USDDB/new_sig.jwk ~]# jose jwk gen -i '{\"alg\":\"ECMR\"}' -o USDDB/new_exc.jwk",
"~]# mv USDDB/old_sig.jwk USDDB/.old_sig.jwk ~]# mv USDDB/old_exc.jwk USDDB/.old_exc.jwk",
"~]USD clevis encrypt tang '{\"url\":\" http://tang.srv \"}' < PLAINTEXT > JWE The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y",
"~]USD clevis decrypt < JWE > PLAINTEXT",
"~]USD clevis Usage: clevis COMMAND [OPTIONS] clevis decrypt Decrypts using the policy defined at encryption time clevis encrypt http Encrypts using a REST HTTP escrow server policy clevis encrypt sss Encrypts using a Shamir's Secret Sharing policy clevis encrypt tang Encrypts using a Tang binding server policy clevis luks bind Binds a LUKSv1 device using the specified policy clevis luks unlock Unlocks a LUKSv1 volume ~]USD clevis decrypt Usage: clevis decrypt < JWE > PLAINTEXT Decrypts using the policy defined at encryption time ~]USD clevis encrypt tang Usage: clevis encrypt tang CONFIG < PLAINTEXT > JWE Encrypts using a Tang binding server policy This command uses the following configuration properties: url: <string> The base URL of the Tang server (REQUIRED) thp: <string> The thumbprint of a trusted signing key adv: <string> A filename containing a trusted advertisement adv: <object> A trusted advertisement (raw JSON) Obtaining the thumbprint of a trusted signing key is easy. If you have access to the Tang server's database directory, simply do: USD jose jwk thp -i USDDBDIR/USDSIG.jwk Alternatively, if you have certainty that your network connection is not compromised (not likely), you can download the advertisement yourself using: USD curl -f USDURL/adv > adv.jws",
"~]USD clevis encrypt tpm2 '{}' < PLAINTEXT > JWE",
"~]USD clevis encrypt tpm2 '{\"hash\":\"sha1\",\"key\":\"rsa\"}' < PLAINTEXT > JWE",
"~]USD clevis decrypt < JWE > PLAINTEXT",
"~]USD clevis encrypt tpm2 '{\"pcr_bank\":\"sha1\",\"pcr_ids\":\"0,1\"}' < PLAINTEXT > JWE",
"~]# yum install clevis-luks",
"~]# clevis luks bind -d /dev/sda tang '{\"url\":\" http://tang.srv \"}' The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y You are about to initialize a LUKS device for metadata storage. Attempting to initialize it may result in data loss if data was already written into the LUKS header gap in a different format. A backup is advised before initialization is performed. Do you wish to initialize /dev/sda? [yn] y Enter existing LUKS password:",
"~]# luksmeta show -d /dev/sda 0 active empty 1 active cb6e8904-81ff-40da-a84a-07ab9ab5715e 2 inactive empty 3 inactive empty 4 inactive empty 5 inactive empty 6 inactive empty 7 inactive empty",
"~]# yum install clevis-dracut ~]# dracut -f --regenerate-all",
"~]# dracut -f --regenerate-all --kernel-cmdline \"ip= 192.0.2.10 netmask= 255.255.255.0 gateway= 192.0.2.1 nameserver= 192.0.2.45 \"",
"~]# cat /etc/dracut.conf.d/static_ip.conf kernel_cmdline=\"ip=10.0.0.103 netmask=255.255.252.0 gateway=10.0.0.1 nameserver=10.0.0.1\"",
"~]# dracut -f --regenerate-all",
"part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --grow --encrypted --passphrase=temppass",
"part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --size=2048 --encrypted --passphrase=temppass part /var --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /tmp --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /home --fstype=\"xfs\" --ondisk=vda --size=2048 --grow --encrypted --passphrase=temppass part /var/log --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /var/log/audit --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass",
"%packages clevis-dracut %end",
"%post clevis luks bind -f -k- -d /dev/vda2 tang '{\"url\":\"http://tang.srv\",\"thp\":\"_OsIk0T-E2l6qjfdDiwVmidoZjA\"}' \\ <<< \"temppass\" cryptsetup luksRemoveKey /dev/vda2 <<< \"temppass\" %end",
"~]# yum install clevis-udisks2",
"~]# clevis luks bind -d /dev/sdb1 tang '{\"url\":\" http://tang.srv \"}'",
"~]# clevis luks unlock -d /dev/sdb1",
"~]# yum install clevis-systemd",
"~]# systemctl enable clevis-luks-askpass.path Created symlink from /etc/systemd/system/remote-fs.target.wants/clevis-luks-askpass.path to /usr/lib/systemd/system/clevis-luks-askpass.path."
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-policy-based_decryption |
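Before rebooting an enrolled client, it can be useful to confirm that the Tang server is reachable and that the binding is in place. A minimal check based on the commands above; the tang.srv host name and the /dev/sda device are placeholders from the examples and must be replaced with your own values:

# confirm that the Tang server advertises its signing keys
curl -sf http://tang.srv/adv > /dev/null && echo "Tang advertisement OK"
# confirm that the Clevis JWE object occupies a LUKSMeta slot
luksmeta show -d /dev/sda
# optionally, test unlocking the bound device without entering a password
clevis luks unlock -d /dev/sda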
Deploying OpenShift Data Foundation using Amazon Web Services | Deploying OpenShift Data Foundation using Amazon Web Services Red Hat OpenShift Data Foundation 4.14 Instructions for deploying OpenShift Data Foundation using Amazon Web Services for cloud storage Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Amazon Web Services. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_amazon_web_services/index |
B.2. Log in to JBoss Dashboard Builder | B.2. Log in to JBoss Dashboard Builder Prerequisites Red Hat JBoss Data Virtualization must be installed and running. You must have a JBoss Dashboard Builder user account. Procedure B.1. Log in to the JBoss Dashboard Builder Navigate to JBoss Dashboard Builder Navigate to JBoss Dashboard Builder in your web browser. The default location is http://localhost:8080/dashboard . Log in to JBoss Dashboard Builder Enter the Username and Password of a valid JBoss Dashboard Builder user. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/log_in_to_jboss_dashboard_builder |
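If the login page does not load, a quick way to check that the Dashboard Builder web application is deployed and responding is to request the default location from the command line; adjust the host and port if your installation does not use the defaults:

curl -I http://localhost:8080/dashboard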
Chapter 1. Red Hat Software Collections 3.4 | Chapter 1. Red Hat Software Collections 3.4 This chapter serves as an overview of the Red Hat Software Collections 3.4 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.4 is available for Red Hat Enterprise Linux 7; selected previously released components also for Red Hat Enterprise Linux 6. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. Note In Red Hat Enterprise Linux 8, similar components are provided as Application Streams . All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections 3.4 Components" lists components that are supported at the time of the Red Hat Software Collections 3.4 release. Table 1.1. Red Hat Software Collections 3.4 Components Component Software Collection Description Red Hat Developer Toolset 9.0 devtoolset-9 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.26.3 [a] rh-perl526 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. 
The rh-perl526 Software Collection provides additional utilities, scripts, and database connectors for MySQL and PostgreSQL . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules. The rh-perl526 packaging is aligned with upstream; the perl526-perl package installs also core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.2.24 [a] rh-php72 A release of PHP 7.2 with PEAR 1.10.5, APCu 5.1.12, and enhanced language features. PHP 7.3.11 [a] rh-php73 A release of PHP 7.3 with PEAR 1.10.9, APCu 5.1.17, and the Xdebug extension. Python 2.7.16 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collections contains the Python 2.7.13 interpreter , a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy . Python 3.6.9 rh-python36 The rh-python36 Software Collection contains Python 3.6.9, which introduces a number of new features, such as f-strings, syntax for variable annotations, and asynchronous generators and comprehensions . In addition, a set of extension libraries useful for programming web applications is included, with mod_wsgi (supported only together with the httpd24 Software Collection), PostgreSQL database connector, and numpy and scipy . Ruby 2.4.6 rh-ruby24 A release of Ruby 2.4. This version provides multiple performance improvements and enhancements, for example improved hash table, new debugging features, support for Unicode case mappings, and support for OpenSSL 1.1.0 . Ruby 2.4.0 maintains source-level backward compatibility with Ruby 2.3, Ruby 2.2, Ruby 2.0.0, and Ruby 1.9.3. Ruby 2.5.5 [a] rh-ruby25 A release of Ruby 2.5. This version provides multiple performance improvements and new features, for example, simplified usage of blocks with the rescue , else , and ensure keywords, a new yield_self method, support for branch coverage and method coverage measurement, new Hash#slice and Hash#transform_keys methods . Ruby 2.5.0 maintains source-level backward compatibility with Ruby 2.4. Ruby 2.6.2 [a] rh-ruby26 A release of Ruby 2.6. This version provides multiple performance improvements and new features, such as endless ranges, the Binding#source_location method, and the USDSAFE process global state . Ruby 2.6.0 maintains source-level backward compatibility with Ruby 2.5. Ruby on Rails 5.0.1 rh-ror50 A release of Ruby on Rails 5.0, the latest version of the web application framework written in the Ruby language. Notable new features include Action Cable, API mode, exclusive use of rails CLI over Rake, and ActionRecord attributes. This Software Collection is supported together with the rh-ruby24 Collection. Scala 2.10.6 [a] rh-scala210 A release of Scala, a general purpose programming language for the Java platform, which integrates features of object-oriented and functional languages. MariaDB 10.2.22 rh-mariadb102 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. 
This version adds MariaDB Backup, Flashback, support for Recursive Common Table Expressions, window functions, and JSON functions . MariaDB 10.3.13 [a] rh-mariadb103 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version introduces system-versioned tables, invisible columns, a new instant ADD COLUMN operation for InnoDB , and a JDBC connector for MariaDB and MySQL . MongoDB 3.4.9 rh-mongodb34 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces support for new architectures, adds message compression and support for the decimal128 type, enhances collation features and more. MongoDB 3.6.3 [a] rh-mongodb36 A release of MongoDB, a cross-platform document-oriented database system classified as a NoSQL database. This release introduces change streams, retryable writes, and JSON Schema , as well as other features. MySQL 8.0.17 [a] rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements. PostgreSQL 9.6.10 rh-postgresql96 A release of PostgreSQL, which introduces parallel execution of sequential scans, joins, and aggregates, and provides enhancements to synchronous replication, full-text search, deration driver, postgres_fdw, as well as performance improvements. PostgreSQL 10.6 [a] rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . PostgreSQL 12.1 [a] rh-postgresql12 A release of PostgreSQL, which provides the pgaudit extension, various enhancements to partitioning and parallelism, support for the SQL/JSON path language, and performance improvements. Node.js 10.16.3 [a] rh-nodejs10 A release of Node.js, which provides multiple API enhancements and new features, including V8 engine version 6.6, full N-API support , and stability improvements. Node.js 12.10.0 [a] rh-nodejs12 A release of Node.js, with V8 engine version 7.6, support for ES6 modules, and improved support for native modules. nginx 1.10.2 rh-nginx110 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces a number of new features, including dynamic module support, HTTP/2 support, Perl integration, and numerous performance improvements . nginx 1.14.1 [a] rh-nginx114 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version provides a number of features, such as mirror module, HTTP/2 server push, gRPC proxy module, and numerous performance improvements . nginx 1.16.1 [a] rh-nginx116 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces numerous updates related to SSL, several new directives and parameters, and various enhancements. Apache httpd 2.4.34 httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb , mod_auth_mellon , and ModSecurity modules are also included. Varnish Cache 5.2.1 [a] rh-varnish5 A release of Varnish Cache, a high-performance HTTP reverse proxy. 
This version includes the shard director, experimental HTTP/2 support, and improvements to Varnish configuration through separate VCL files and VCL labels. Varnish Cache 6.0.2 [a] rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.5.0 [a] rh-maven35 A release of Maven, a software project management and comprehension tool. This release introduces support for new architectures and a number of new features, including colorized logging . Maven 3.6.1 [a] rh-maven36 A release of Maven, a software project management and comprehension tool. This release provides various enhancements and bug fixes. Git 2.18.1 [a] rh-git218 A release of Git, a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version includes the Large File Storage (LFS) extension . Redis 5.0.5 [a] rh-redis5 A release of Redis 5.0, a persistent key-value database . Redis now provides redis-trib , a cluster management tool . HAProxy 1.8.17 [a] rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. Common Java Packages rh-java-common This Software Collection provides common Java libraries and tools used by other collections. The rh-java-common Software Collection is required by the rh-maven35 and rh-scala210 components and it is not supposed to be installed directly by users. JDK Mission Control [a] rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 11 so that JMC can access JDK Flight Recorder features. The rh-jmc Software Collection requires the rh-maven35 Software Collection. [a] This Software Collection is available only for Red Hat Enterprise Linux 7 Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.4 Red Hat Developer Toolset 9.0 devtoolset-9 RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Node.js 12.10.0 rh-nodejs12 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.3.11 rh-php73 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.16.1 rh-nginx116 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 12.1 rh-postgresql12 RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.6.1 rh-maven36 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. 
All Available Software Collections Components Updated in Red Hat Software Collections 3.4 Apache httpd 2.4.34 httpd24 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.3 Red Hat Developer Toolset 8.1 devtoolset-8 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le MariaDB 10.3.13 rh-mariadb103 RHEL7 x86_64, s390x, aarch64, ppc64le Redis 5.0.5 rh-redis5 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.6.2 rh-ruby26 RHEL7 x86_64, s390x, aarch64, ppc64le HAProxy 1.8.17 rh-haproxy18 RHEL7 x86_64 Varnish Cache 6.0.2 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.2 PHP 7.2.24 rh-php72 RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.17 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.16.3 rh-nodejs10 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.1 rh-nginx114 RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.1 rh-git218 RHEL7 x86_64, s390x, aarch64, ppc64le JDK Mission Control rh-jmc RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.3 rh-perl526 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.5 rh-ruby25 RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.6 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.0.27 rh-php70 * RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 * RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.9 rh-python36 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.22 rh-mariadb102 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.10 rh-postgresql96 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.11.4 rh-nodejs8 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 RHEL7 x86_64 nginx 1.10.2 rh-nginx110 RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.6 rh-ruby24 RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.16 python27 RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 * RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 * RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 * RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 * RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 * RHEL6, RHEL7 x86_64 Ruby 2.3.8 rh-ruby23 * RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 * RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 * RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD64 and Intel 64 architectures s390x - IBM Z aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.4 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on AMD64 and Intel 64 architectures; certain earlier Software Collections are available also for Red Hat Enterprise Linux 6. 
In addition, Red Hat Software Collections 3.4 supports the following architectures on Red Hat Enterprise Linux 7: The 64-bit ARM architecture IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.4 adds the following new Software Collections: devtoolset-9 - see Section 1.3.2, "Changes in Red Hat Developer Toolset" rh-nodejs12 - see Section 1.3.3, "Changes in Node.js" rh-php73 - see Section 1.3.4, "Changes in PHP" rh-nginx116 - see Section 1.3.5, "Changes in nginx" rh-postgresql12 - see Section 1.3.6, "Changes in PostgreSQL" rh-maven36 - see Section 1.3.7, "Changes in Maven" All new Software Collections are available only for Red Hat Enterprise Linux 7. Updated Software Collections The following component has been updated in Red Hat Software Collections 3.4: httpd24 - see Section 1.3.8, "Changes in Apache httpd" Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.4: rhscl/devtoolset-9-toolchain-rhel7 rhscl/devtoolset-9-perftools-rhel7 rhscl/nodejs-12-rhel7 rhscl/php-73-rhel7 rhscl/nginx-116-rhel7 rhscl/postgresql-12-rhel7 The following container image has been updated in Red Hat Software Collections 3.4: rhscl/httpd-24-rhel7 For detailed information regarding Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 9.0 compared to the release of Red Hat Developer Toolset: GCC to version 9.1.1 binutils to version 2.32 GDB to version 8.3 strace to version 5.1 SystemTap to version 4.1 Valgrind to version 3.15.0 Dyninst to version 10.1.0 For detailed information on changes in 9.0, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in Node.js The new rh-nodejs12 Software Collection provides Node.js 12.10.0 . Notable enhancements in this release include: The V8 engine upgraded to version 7.6 A new default HTTP parser, llhttp (no longer experimental) Integrated capability of heap dump generation Support for ECMAScript 2015 (ES6) modules Improved support for native modules Worker threads no longer require a flag A new experimental diagnostic report feature Improved performance For detailed changes in Node.js 12.10.0, see the upstream release notes and upstream documentation . 1.3.4. Changes in PHP The new rh-php73 Software Collection with PHP 7.3.11 introduces the following notable enhancements over rh-php72 : The Xdebug extension included for development Enhanced and more flexible heredoc and nowdoc syntaxes The PCRE extension upgraded to PCRE2 Improved multibyte string handling Support for LDAP controls Improved FastCGI Process Manager (FPM) logging The ability to add a trailing comma in function calls Improved performance Several deprecations and backward incompatible changes For more information, see Migrating from PHP 7.2.x to PHP 7.3.x . Note that the following behavior is different from upstream: The rh-php73 Software Collection does not support the Argon2 password hashing algorithm. The x (PCRE_EXTENDED) pattern modifier is always enabled in the rh-php73 Software Collection. As a result, invalid escape sequences are not interpreted as literals. 1.3.5. Changes in nginx The new rh-nginx116 Software Collection provides nginx 1.16.1 , which introduces a number of new features and enhancements. 
For example: Numerous updates related to SSL (loading of SSL certificates and secret keys from variables, variable support in the ssl_certificate and ssl_certificate_key directives, a new ssl_early_data directive) New keepalive -related directives A new random directive for distributed load balancing New parameters and improvements to existing directives (port ranges for the listen directive, a new delay parameter for the limit_req directive, which enables two-stage rate limiting) A new USDupstream_bytes_sent variable Improvements to User Datagram Protocol (UDP) proxying Other notable changes include: The ssl directive has been deprecated; use the ssl parameter for the listen directive instead. nginx now detects missing SSL certificates during configuration testing. When using a host name in the listen directive, nginx now creates listening sockets for all addresses that the host name resolves to. For more information regarding changes in nginx , refer to the upstream release notes . For migration instructions, see Section 5.8, "Migrating to nginx 1.16" . 1.3.6. Changes in PostgreSQL The new rh-postgresql12 Software Collection includes PostgreSQL 12.1 . This release introduces various enhancements over version 10, distributed in an earlier Software Collection, such as: The PostgreSQL Audit Extension, pgaudit , which provides detailed session and object audit logging through the standard PostgreSQL logging facility. Improvements to the partitioning functionality, for example, support for hash partitioning Enhancements to query parallelism Stored SQL procedures enabling transaction management Various performance improvements Enhancements to the administrative functionality Support for the SQL/JSON path language Stored generated columns Nondeterministic collations New authentication features, including encryption of TCP/IP connections when using GSSAPI authentication or multi-factor authentication. For detailed changes, see the upstream release notes for PostgreSQL 11 and PostgreSQL 12 . Note that support for Just-In-Time (JIT) compilation, available in upstream since PostgreSQL 11 , is not provided by the rh-postgresql12 Software Collection. The rh-postgresql12 Software Collection includes the rh-postgresql12-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and others. After installing the rh-postgreqsl12*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl12* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths, see the Red Hat Software Collections Packaging Guide . For information on migration, see Section 5.6, "Migrating to PostgreSQL 12" . 1.3.7. Changes in Maven The new rh-maven36 Software Collection with Maven 3.6.1 includes numerous bug fixes and various enhancements. For detailed changes, see the upstream release notes . 1.3.8. Changes in Apache httpd The httpd24 Software Collection has been updated to provide several security and bug fixes. 1.4. Compatibility Information Red Hat Software Collections 3.4 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. Certain components are available also for all supported releases of Red Hat Enterprise Linux 6 on AMD64 and Intel 64 architectures. 
For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues multiple components, BZ# 1716378 Certain files provided by the Software Collections debuginfo packages might conflict with the corresponding debuginfo package files from the base Red Hat Enterprise Linux system or from other versions of Red Hat Software Collections components. For example, the python27-python-debuginfo package files might conflict with the corresponding files from the python-debuginfo package installed on the core system. Similarly, files from the httpd24-mod_auth_mellon-debuginfo package might conflict with similar files provided by the base system mod_auth_mellon-debuginfo package. To work around this problem, uninstall the base system debuginfo package prior to installing the Software Collection debuginfo package. rh-mysql80 component, BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. To work around this problem, use the mariadb-java-client database connector from the rh-mariadb103 Software Collection. rh-mysql80 component, BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1327548 The mod_ssl module does not support the ALPN protocol on Red Hat Enterprise Linux 6, or on Red Hat Enterprise Linux 7.3 and earlier. Consequently, clients that support upgrading TLS connections to HTTP/2 only using ALPN are limited to HTTP/1.1 support. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. rh-python35 , rh-python36 components, BZ# 1499990 The pytz module, which is used by Babel for time zone support, is not included in the rh-python35 , and rh-python36 Software Collections. 
Consequently, when the user tries to import the dates module from Babel , a traceback is returned. To work around this problem, install pytz through the pip package manager from the pypi public repository by using the pip install pytz command. rh-python36 component Certain complex trigonometric functions provided by numpy might return incorrect values on the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. The AMD64 and Intel 64 architectures are not affected by this problem. python27 component, BZ# 1330489 The python27-python-pymongo package has been updated to version 3.2.1. Note that this version is not fully compatible with the previously shipped version 2.5.2. scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. ruby component Determination of RubyGem installation paths is dependent on the order in which multiple Software Collections are enabled. The required order has been changed since Ruby 2.3.1 shipped in Red Hat Software Collections 2.3 to support dependent Collections. As a consequence, RubyGem paths, which are used for gem installation during an RPM build, are invalid when the Software Collections are supplied in an incorrect order. For example, the build fails if the RPM spec file contains scl enable rh-ror50 rh-nodejs6 . To work around this problem, enable the rh-ror50 Software Collection last, for example, scl enable rh-nodejs6 rh-ror50 . maven component When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven*-maven-local package, XMvn , a tool used for building Java RPM packages, run from the Maven Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. postgresql component The rh-postgresql9* packages for Red Hat Enterprise Linux 6 do not provide the sepgsql module as this feature requires installation of libselinux version 2.0.99, which is not available in Red Hat Enterprise Linux 6. httpd , mariadb , mongodb , mysql , nodejs , perl , php , python , ruby , and ror components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mongodb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , rh-ruby* , or rh-ror* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. 
As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exist. mariadb , mysql , postgresql , mongodb components Red Hat Software Collections contains the MySQL 5.7 , MySQL 8.0 , MariaDB 10.2 , MariaDB 10.3 , PostgreSQL 9.6 , PostgreSQL 10 , PostgreSQL 12 , MongoDB 3.4 , and MongoDB 3.6 databases. The core Red Hat Enterprise Linux 6 provides earlier versions of the MySQL and PostgreSQL databases (client library and daemon). The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 9.2 client library with the PostgreSQL 9.4 or 9.5 daemon works as expected. The core Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 do not include the client library for MongoDB . In order to use this client library for your application, you should use the client library from Red Hat Software Collections and always use the scl enable ... call every time you run an application linked against this MongoDB client library. mariadb , mysql , mongodb components MariaDB, MySQL, and MongoDB do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. 
all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in a reverse order; for example: instead of When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_ prefix _python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_ prefix _php ). ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_ prefix _ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_ prefix _perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_ prefix _nginx ). 1.6. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl . | [
"[mysqld] character-set-server=utf8",
"ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems",
"Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'",
"Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user",
"su -l postgres -c \"scl enable rh-postgresql94 psql\"",
"scl enable rh-postgresql94 bash su -l postgres -c psql"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.4_release_notes/chap-rhscl |
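As a practical illustration of the per-application enablement model described above, the following sketch shows two common ways to run a tool from a Software Collection; rh-python36 is used only as an example collection name and any installed collection can be substituted:

# run a single command with the collection enabled
scl enable rh-python36 'python3 --version'
# or enable the collection for the current shell session
source /opt/rh/rh-python36/enable
python3 --version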
14.2. Types | 14.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define an SELinux domain for processes and an SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with OpenShift. Different types allow you to configure flexible access: Process types openshift_t The OpenShift process is associated with the openshift_t SELinux type. Types on executables openshift_cgroup_read_exec_t SELinux allows files with this type to transition an executable to the openshift_cgroup_read_t domain. openshift_cron_exec_t SELinux allows files with this type to transition an executable to the openshift_cron_t domain. openshift_initrc_exec_t SELinux allows files with this type to transition an executable to the openshift_initrc_t domain. Writable types openshift_cgroup_read_tmp_t This type allows OpenShift control groups (cgroup) to read and access temporary files in the /tmp/ directory. openshift_cron_tmp_t This type allows storing temporary files of the OpenShift cron jobs in /tmp/ . openshift_initrc_tmp_t This type allows storing the OpenShift initrc temporary files in /tmp/ . openshift_log_t Files with this type are treated as OpenShift log data, usually stored under the /var/log/ directory. openshift_rw_file_t OpenShift has permission to read and to write to files labeled with this type. openshift_tmp_t This type is used for storing the OpenShift temporary files in /tmp/ . openshift_tmpfs_t This type allows storing the OpenShift data on a tmpfs file system. openshift_var_lib_t This type allows storing the OpenShift files in the /var/lib/ directory. openshift_var_run_t This type allows storing the OpenShift files in the /run/ or /var/run/ directory. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-openshift-types
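To see how these types appear on a running system, the standard SELinux inspection tools can be used. The sketch below assumes that the setools-console and policycoreutils-python packages are installed, which may not be the case by default:

# list OpenShift-related types known to the loaded policy
seinfo -t | grep openshift
# show running processes confined by the openshift_t domain
ps -eZ | grep openshift_t
# show the file-context rules that assign these types to paths
semanage fcontext -l | grep openshift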
2.6. Configuring Services on the Real Servers | 2.6. Configuring Services on the Real Servers If the real servers are Red Hat Enterprise Linux systems, set the appropriate server daemons to activate at boot time. These daemons can include httpd for Web services or xinetd for FTP or Telnet services. It may also be useful to access the real servers remotely, so the sshd daemon should also be installed and running. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-server-daemons-vsa |
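As a rough sketch of that advice on a Red Hat Enterprise Linux 6 real server, the following enables the daemons named above at boot and starts them immediately; adjust the list to the services you actually balance.

chkconfig httpd on && service httpd start     # web service
chkconfig xinetd on && service xinetd start   # FTP or Telnet handled through xinetd
chkconfig sshd on && service sshd start       # remote administration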
Chapter 9. Operator SDK | Chapter 9. Operator SDK 9.1. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.18 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework) . Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. See Developing Operators for full documentation on the Operator SDK. Note OpenShift Container Platform 4.18 supports Operator SDK 1.38.0. 9.1.1. Installing the Operator SDK CLI on Linux You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.18 directory, download the latest version of the tarball for Linux. Unpack the archive: USD tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz Make the file executable: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 9.1.2. Installing the Operator SDK CLI on macOS You can install the Operator SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and the OpenShift mirror site for the arm64 architecture, respectively. 
From the latest 4.18 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for the amd64 architecture by running the following command: USD tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz Unpack the Operator SDK archive for the arm64 architecture by running the following command: USD tar xvf operator-sdk-v1.38.0-ocp-darwin-aarch64.tar.gz Make the file executable by running the following command: USD chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: USD echo USDPATH USD sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command: USD operator-sdk version Example output operator-sdk version: "v1.38.0-ocp", ... 9.2. Operator SDK CLI reference The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier. Important The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of OpenShift Container Platform. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future OpenShift Container Platform releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with OpenShift Container Platform 4.18 to maintain their projects and create Operator releases targeting newer versions of OpenShift Container Platform. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs. The base image for Ansible-based Operator projects The base image for Helm-based Operator projects For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework) . Operator SDK CLI syntax USD operator-sdk <command> [<subcommand>] [<argument>] [<flags>] See Developing Operators for full documentation on the Operator SDK. 9.2.1. bundle The operator-sdk bundle command manages Operator bundle metadata. 9.2.1.1. validate The bundle validate subcommand validates an Operator bundle. Table 9.1. bundle validate flags Flag Description -h , --help Help output for the bundle validate subcommand. --index-builder (string) Tool to pull and unpack bundle images. Only used when validating a bundle image. Available options are docker , which is the default, podman , or none . --list-optional List all optional validators available. When set, no validators are run. --select-optional (string) Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. 9.2.2. cleanup The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command. Table 9.2. 
cleanup flags Flag Description -h , --help Help output for the cleanup command. --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --timeout <duration> Time to wait for the command to complete before failing. The default value is 2m0s . 9.2.3. completion The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier. Table 9.3. completion subcommands Subcommand Description bash Generate bash completions. zsh Generate zsh completions. Table 9.4. completion flags Flag Description -h, --help Usage help output. For example: USD operator-sdk completion bash Example output # bash completion for operator-sdk -*- shell-script -*- ... # ex: ts=4 sw=4 et filetype=sh 9.2.4. create The operator-sdk create command is used to create, or scaffold , a Kubernetes API. 9.2.4.1. api The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command. Table 9.5. create api flags Flag Description -h , --help Help output for the create api subcommand. 9.2.5. generate The operator-sdk generate command invokes a specific generator to generate code or manifests. 9.2.5.1. bundle The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project. Note Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence. Table 9.6. generate bundle flags Flag Description --channels (string) Comma-separated list of channels to which the bundle belongs. The default value is alpha . --crds-dir (string) Root directory for CustomResourceDefinition manifests. --default-channel (string) The default channel for the bundle. --deploy-dir (string) Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. -h , --help Help for generate bundle --input-dir (string) Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. --kustomize-dir (string) Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests . --manifests Generate bundle manifests. --metadata Generate bundle metadata and Dockerfile. --output-dir (string) Directory to write the bundle to. --overwrite Overwrite the bundle metadata and Dockerfile if they exist. The default value is true . --package (string) Package name for the bundle. -q , --quiet Run in quiet mode. --stdout Write bundle manifest to standard out. --version (string) Semantic version of the Operator in the generated bundle. Set only when creating a new bundle or upgrading the Operator. Additional resources See Bundling an Operator and deploying with Operator Lifecycle Manager for a full procedure that includes using the make bundle command to call the generate bundle subcommand. 9.2.5.2. kustomize The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator. 9.2.5.2.1. 
manifests The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag. Table 9.7. generate kustomize manifests flags Flag Description --apis-dir (string) Root directory for API type definitions. -h , --help Help for generate kustomize manifests . --input-dir (string) Directory containing existing Kustomize files. --interactive When set to false , if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. --output-dir (string) Directory where to write Kustomize files. --package (string) Package name. -q , --quiet Run in quiet mode. 9.2.6. init The operator-sdk init command initializes an Operator project and generates, or scaffolds , a default project directory layout for the given plugin. This command writes the following files: Boilerplate license file PROJECT file with the domain and repository Makefile to build the project go.mod file with project dependencies kustomization.yaml file for customizing manifests Patch file for customizing images for manager manifests Patch file for enabling Prometheus metrics main.go file to run Table 9.8. init flags Flag Description --help, -h Help output for the init command. --plugins (string) Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1 , go.kubebuilder.io/v2 , go.kubebuilder.io/v3 , and helm.sdk.operatorframework.io/v1 . --project-version Project version. Available values are 2 and 3-alpha , which is the default. 9.2.7. run The operator-sdk run command provides options that can launch the Operator in various environments. 9.2.7.1. bundle The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM). Table 9.9. run bundle flags Flag Description --index-image (string) Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest . --install-mode <install_mode_value> Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace . --timeout <duration> Install timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". Additional resources See Operator group membership for details on possible install modes. 9.2.7.2. bundle-upgrade The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM). Table 9.10. run bundle-upgrade flags Flag Description --timeout <duration> Upgrade timeout. 
The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". 9.2.8. scorecard The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely. Table 9.11. scorecard flags Flag Description -c , --config (string) Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml . -h , --help Help output for the scorecard command. --kubeconfig (string) Path to kubeconfig file. -L , --list List which tests are available to run. -n , --namespace (string) Namespace in which to run the test images. -o , --output (string) Output format for results. Available values are text , which is the default, and json . --pod-security <security_context> Option to run scorecard with the specified security context. Allowed values include restricted and legacy . The default value is legacy . [1] -l , --selector (string) Label selector to determine which tests are run. -s , --service-account (string) Service account to use for tests. The default value is default . -x , --skip-cleanup Disable resource cleanup after tests are run. -w , --wait-time <duration> Seconds to wait for tests to complete, for example 35s . The default value is 30s . The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". Additional resources See Validating Operators using the scorecard tool for details about running the scorecard tool. | [
"tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.38.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.38.0-ocp\",",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cli_tools/operator-sdk |
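Putting the subcommands from this chapter together, a typical end-to-end flow looks roughly like the following sketch. The project name, group/version/kind, and image references are placeholders rather than values from the documentation, and the make bundle target comes from the scaffolded Makefile.

operator-sdk init --plugins go.kubebuilder.io/v3 --domain example.com --repo github.com/example/memcached-operator
operator-sdk create api --group cache --version v1alpha1 --kind Memcached --resource --controller
make bundle IMG=quay.io/example/memcached-operator:v0.0.1    # wraps 'generate kustomize manifests' and 'generate bundle'
operator-sdk bundle validate ./bundle
operator-sdk run bundle quay.io/example/memcached-operator-bundle:v0.0.1
operator-sdk cleanup memcached-operator                      # tear the test deployment back down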
Appendix E. Performance Recommendations | Appendix E. Performance Recommendations E.1. Concurrent Startup for Large Clusters When a large number of instances, each managing a large number of caches, are started in parallel, startup can take a while because rebalancing attempts to distribute the data evenly as each node joins the cluster. To limit the number of rebalancing attempts made during the initial startup of the cluster, temporarily disable rebalancing by following these steps: Start the first node in the cluster. Set the JMX attribute jboss.infinispan/CacheManager/"clustered"/LocalTopologyManager/rebalancingEnabled to false , as seen in Section C.13, "LocalTopologyManager" . Start the remaining nodes in the cluster. Re-enable the JMX attribute jboss.infinispan/CacheManager/"clustered"/LocalTopologyManager/rebalancingEnabled by setting this value back to true , as seen in Section C.13, "LocalTopologyManager" . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/appe-performance_recommendations |
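One way to flip that attribute from a shell is with a generic JMX client such as jmxterm. Everything below is an assumption made for illustration: the jar name, the JMX port (9999), and the exact ObjectName spelling are not taken from the appendix above, so verify them against your own server configuration before use.

java -jar jmxterm-uber.jar -l localhost:9999 <<'EOF'
bean jboss.infinispan:type=CacheManager,name="clustered",component=LocalTopologyManager
set rebalancingEnabled false
EOF
# ...start the remaining nodes, then repeat the session with: set rebalancingEnabled true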
15.4. JSON Representation of a Virtual Machine | 15.4. JSON Representation of a Virtual Machine Example 15.3. A JSON representation of a virtual machine | [
"{ \"type\" : \"server\", \"status\" : { \"state\" : \"down\" }, \"stop_reason\" : \"\", \"memory\" : 1073741824, \"cpu\" : { \"topology\" : { \"sockets\" : \"1\", \"cores\" : \"1\" }, \"architecture\" : \"X86_64\" }, \"cpu_shares\" : \"0\", \"bios\" : { \"boot_menu\" : { \"enabled\" : \"false\" } }, \"os\" : { \"boot\" : [ { \"dev\" : \"hd\" } ], \"type\" : \"other\" }, \"high_availability\" : { \"enabled\" : \"false\", \"priority\" : \"1\" }, \"display\" : { \"type\" : \"spice\", \"monitors\" : \"1\", \"single_qxl_pci\" : \"false\", \"allow_override\" : \"false\", \"smartcard_enabled\" : \"false\", \"file_transfer_enabled\" : \"true\", \"copy_paste_enabled\" : \"true\" }, \"cluster\" : { \"href\" : \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-0000000002fb\", \"id\" : \"00000001-0001-0001-0001-0000000002fb\" }, \"template\" : { \"href\" : \"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000\", \"id\" : \"00000000-0000-0000-0000-000000000000\" }, \"stop_time\" : 1423550982110, \"creation_time\" : 1423490033647, \"origin\" : \"ovirt\", \"stateless\" : \"false\", \"delete_protected\" : \"false\", \"sso\" : { \"methods\" : { \"method\" : [ { \"id\" : \"GUEST_AGENT\" } ] } }, \"timezone\" : \"Etc/GMT\", \"initialization\" : { \"regenerate_ssh_keys\" : \"false\", \"nic_configurations\" : { } }, \"placement_policy\" : { \"affinity\" : \"migratable\" }, \"memory_policy\" : { \"guaranteed\" : 1073741824, \"ballooning\" : \"true\" }, \"usb\" : { \"enabled\" : \"false\" }, \"migration_downtime\" : \"-1\", \"cpu_profile\" : { \"href\" : \"/ovirt-engine/api/cpuprofiles/0000001a-001a-001a-001a-0000000002e3\", \"id\" : \"0000001a-001a-001a-001a-0000000002e3\" }, \"next_run_configuration_exists\" : \"false\", \"numa_tune_mode\" : \"interleave\", \"actions\" : { \"link\" : [ { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/ticket\", \"rel\" : \"ticket\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/move\", \"rel\" : \"move\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/clone\", \"rel\" : \"clone\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/commit_snapshot\", \"rel\" : \"commit_snapshot\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/preview_snapshot\", \"rel\" : \"preview_snapshot\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/logon\", \"rel\" : \"logon\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/cancelmigration\", \"rel\" : \"cancelmigration\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/maintenance\", \"rel\" : \"maintenance\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/reboot\", \"rel\" : \"reboot\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/undo_snapshot\", \"rel\" : \"undo_snapshot\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/migrate\", \"rel\" : \"migrate\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/detach\", \"rel\" : \"detach\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/export\", \"rel\" : \"export\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/shutdown\", \"rel\" : \"shutdown\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/start\", \"rel\" : \"start\" }, { \"href\" : 
\"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/stop\", \"rel\" : \"stop\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/suspend\", \"rel\" : \"suspend\" } ] }, \"name\" : \"VM_01\", \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e\", \"id\" : \"42ec2621-7ad6-4ca2-bd68-973a44b2562e\", \"link\" : [ { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/applications\", \"rel\" : \"applications\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/disks\", \"rel\" : \"disks\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/nics\", \"rel\" : \"nics\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/numanodes\", \"rel\" : \"numanodes\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/cdroms\", \"rel\" : \"cdroms\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/snapshots\", \"rel\" : \"snapshots\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/tags\", \"rel\" : \"tags\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/permissions\", \"rel\" : \"permissions\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/statistics\", \"rel\" : \"statistics\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/reporteddevices\", \"rel\" : \"reporteddevices\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/watchdogs\", \"rel\" : \"watchdogs\" }, { \"href\" : \"/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e/sessions\", \"rel\" : \"sessions\" } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/json_representation_of_a_virtual_machine |
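The JSON above can be fetched straight from the REST API by asking for a JSON representation. This is a minimal sketch: the engine host name, the credentials, and the CA file path are placeholders you replace with your own values, while the VM URL comes from the href field in the example.

curl \
  --cacert /etc/pki/ovirt-engine/ca.pem \
  --user 'admin@internal:password' \
  --header 'Accept: application/json' \
  'https://engine.example.com/ovirt-engine/api/vms/42ec2621-7ad6-4ca2-bd68-973a44b2562e'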
18.12.10.4. ARP/RARP | 18.12.10.4. ARP/RARP Protocol ID: arp or rarp Rules of this type should go into either the root or the arp/rarp chain. Table 18.6. ARP and RARP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination hwtype UINT16 Hardware type protocoltype UINT16 Protocol type opcode UINT16, STRING Opcode; valid strings are: Request, Reply, Request_Reverse, Reply_Reverse, DRARP_Request, DRARP_Reply, DRARP_Error, InARP_Request, ARP_NAK arpsrcmacaddr MAC_ADDR Source MAC address in ARP/RARP packet arpdstmacaddr MAC_ADDR Destination MAC address in ARP/RARP packet arpsrcipaddr IP_ADDR Source IP address in ARP/RARP packet arpdstipaddr IP_ADDR Destination IP address in ARP/RARP packet gratuitous BOOLEAN Boolean indicating whether to check for a gratuitous ARP packet comment STRING Text string of up to 256 characters | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-prot-arp-rarp-explained |
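For context, these attributes are used on <arp> elements inside a libvirt network filter rule. The sketch below is illustrative only: the filter name, IP address, and priorities are invented, so model production filters on the ones shipped with libvirt (for example, no-arp-ip-spoofing).

cat > /tmp/arp-example.xml <<'EOF'
<filter name='arp-example' chain='arp' priority='-500'>
  <!-- drop outgoing ARP packets whose advertised source IP is not the expected one -->
  <rule action='drop' direction='out' priority='400'>
    <arp match='no' arpsrcipaddr='192.0.2.10'/>
  </rule>
  <!-- accept everything else on the ARP chain -->
  <rule action='accept' direction='inout' priority='500'>
    <arp/>
  </rule>
</filter>
EOF
virsh nwfilter-define /tmp/arp-example.xml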
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_3scale_api_management_with_the_amq_streams_kafka_bridge/making-open-source-more-inclusive |
Chapter 9. Providing public access to an instance | Chapter 9. Providing public access to an instance New instances automatically receive a port with a fixed IP address on the network that the instance is assigned to. This IP address is private and is permanently associated with the instance until the instance is deleted. The fixed IP address is used for communication between instances. You can connect a public instance directly to a shared external network where a public IP address is directly assigned to the instance. This is useful if you are working in a private cloud. You can also provide public access to an instance through a project network that has a routed connection to an external provider network. This is the preferred method if you are working in a public cloud, or when public IP addresses are limited. To provide public access through the project network, the project network must be connected to a router with the gateway set to the external network. For external traffic to reach the instance, the cloud user must associate a floating IP address with the instance. To provide access to and from an instance, whether it is connected to a shared external network or a routed provider network, you must configure security group rules for the required protocols, such as SSH, ICMP, or HTTP. You must also pass a key pair to the instance during creation, so that you can access the instance remotely. 9.1. Prerequisites The external network must have a subnet to provide the floating IP addresses. The project network must be connected to a router that has the external network configured as the gateway. 9.2. Securing instance access with security groups and key pairs Security groups are sets of IP filter rules that control network and protocol access to and from instances, such as ICMP to allow you to ping an instance, and SSH to allow you to connect to an instance. The security group rules are applied to all instances within a project. All projects have a default security group called default , which is used when you do not specify a security group for your instances. By default, the default security group allows all outgoing traffic and denies all incoming traffic from any source other than instances in the same security group. You can either add rules to the default security group or create a new security group for your project. You can apply one or more security groups to an instance during instance creation. To apply a security group to a running instance, apply the security group to a port attached to the instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Key pairs are SSH or x509 credentials that are injected into an instance when it is launched to enable remote access to the instance. You can create new key pairs in RHOSP, or import existing key pairs. Each user should have at least one key pair. The key pair can be used for multiple instances. Note You cannot share key pairs between users in a project because each key pair belongs to the individual user that created or imported the key pair, rather than to the project. 9.2.1. Creating a security group You can create a new security group to apply to instances and ports within a project. 
Procedure Optional: To ensure the security group you need does not already exist, review the available security groups and their rules: Replace <sec_group> with the name or ID of the security group that you retrieved from the list of available security groups. Create your security group: Add rules to your security group: Replace <protocol> with the name of the protocol you want to allow to communicate with your instances. Optional: Replace <port-range> with the destination port or port range to open for the protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the specified protocol. Optional: You can allow access only from specified IP addresses by using --remote-ip to specify the remote IP address block, or --remote-group to specify that the rule only applies to packets from interfaces that are a member of the remote group. If using --remote-ip , replace <ip-address> with the remote IP address block. You can use CIDR notation. If using --remote-group , replace <group> with the name or ID of the existing security group. If neither option is specified, then access is allowed to all addresses, as the remote IP access range defaults (IPv4 default: 0.0.0.0/0 ; IPv6 default: ::/0 ). Specify the direction of network traffic the protocol rule applies to, either incoming ( ingress ) or outgoing ( egress ). If not specified, defaults to ingress . Repeat step 3 until you have created rules for all the protocols that you want to allow to access your instances. The following example creates a rule to allow SSH connections to instances in the security group mySecGroup : 9.2.2. Updating security group rules You can update the rules of any security group that you have access to. Procedure Retrieve the name or ID of the security group that you want to update the rules for: Determine the rules that you need to apply to the security group. Add rules to your security group: Replace <protocol> with the name of the protocol you want to allow to communicate with your instances. Optional: Replace <port-range> with the destination port or port range to open for the protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the specified protocol. Optional: You can allow access only from specified IP addresses by using --remote-ip to specify the remote IP address block, or --remote-group to specify that the rule only applies to packets from interfaces that are a member of the remote group. If using --remote-ip , replace <ip-address> with the remote IP address block. You can use CIDR notation. If using --remote-group , replace <group> with the name or ID of the existing security group. If neither option is specified, then access is allowed to all addresses, as the remote IP access range defaults (IPv4 default: 0.0.0.0/0 ; IPv6 default: ::/0 ). Specify the direction of network traffic the protocol rule applies to, either incoming ( ingress ) or outgoing ( egress ). If not specified, defaults to ingress . Replace <group_name> with the name or ID of the security group that you want to apply the rule to. Repeat step 3 until you have created rules for all the protocols that you want to allow to access your instances. The following example creates a rule to allow SSH connections to instances in the security group mySecGroup : 9.2.3. Deleting security group rules You can delete rules from a security group. 
Procedure Identify the security group that the rules are applied to: Retrieve IDs of the rules associated with the security group: Delete the rule or rules: Replace <rule> with the ID of the rule to delete. You can delete more than one rule at a time by specifying a space-delimited list of the IDs of the rules to delete. 9.2.4. Adding a security group to a port The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group to a port on a running instance. Procedure Determine the port on the instance that you want to apply the security group to: Apply the security group to the port: Replace <sec_group> with the name or ID of the security group you want to apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 9.2.5. Removing a security group from a port To remove a security group from a port you need to first remove all the security groups, then re-add the security groups that you want to remain assigned to the port. Procedure List all the security groups associated with the port and record the IDs of the security groups that you want to remain associated with the port: Remove all the security groups associated with the port: Re-apply the security groups to the port: Replace <sec_group> with the ID of the security group that you want to re-apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 9.2.6. Deleting a security group You can delete security groups that are not associated with any ports. Procedure Retrieve the name or ID of the security group that you want to delete: Retrieve a list of the available ports: Check each port for an associated security group: If the security group you want to delete is associated with any of the ports, then you must first remove the security group from the port. For more information, see Removing a security group from a port . Delete the security group: Replace <group> with the ID of the group that you want to delete. You can delete more than one group at a time by specifying a space-delimited list of the IDs of the groups to delete. 9.2.7. Generating a new SSH key pair You can create a new SSH key pair for use within your project. Note Use a x509 certificate to create a key pair for a Windows instance. Procedure Create the key pair and save the private key in your local .ssh directory: Replace <keypair> with the name of your new key pair. Protect the private key: 9.2.8. Importing an existing SSH key pair You can import an SSH key to your project that you created outside of the Red Hat OpenStack Platform (RHOSP) by providing the public key file when you create a new key pair. Procedure Create the key pair from the existing key file and save the private key in your local .ssh directory: To import the key pair from an existing public key file, enter the following command: Replace <public_key> with the name of the public key file that you want to use to create the key pair. Replace <keypair> with the name of your new key pair. To import the key pair from an existing private key file, enter the following command: Replace <private_key> with the name of the public key file that you want to use to create the key pair. Replace <keypair> with the name of your new key pair. Protect the private key: 9.2.9. Additional resources Security groups in the Networking Guide . 
Project security management in the Users and Identity Management Guide . 9.3. Assigning a floating IP address to an instance You can assign a public floating IP address to an instance to enable communication with networks outside the cloud, including the Internet. The cloud administrator configures the available pool of floating IP addresses for an external network. You can allocate a floating IP address from this pool to your project, then associate the floating IP address with your instance. Projects have a limited quota of floating IP addresses that can be used by instances in the project, 50 by default. Therefore, release IP addresses for reuse when you no longer need them. Prerequisites The instance must be on an external network, or on a project network that is connected to a router that has the external network configured as the gateway. The external network that the instance will connect to must have a subnet to provide the floating IP addresses. Procedure Check the floating IP addresses that are allocated to the current project: If there are no floating IP addresses available that you want to use, allocate a floating IP address to the current project from the external network allocation pool: Replace <provider-network> with the name or ID of the external network that you want to use to provide external access. Tip By default, a floating IP address is randomly allocated from the pool of the external network. A cloud administrator can use the --floating-ip-address option to allocate a specific floating IP address from an external network. Assign the floating IP address to an instance: Replace <instance> with the name or ID of the instance that you want to provide public access to. Replace <floating_ip> with the floating IP address that you want to assign to the instance. Optional: Replace <ip_address> with the IP address of the interface that you want to attach the floating IP to. By default, this attaches the floating IP address to the first port. Verify that the floating IP address has been assigned to the instance: Additional resources Creating floating IP pools in the Networking Guide . 9.4. Disassociating a floating IP address from an instance When the instance no longer needs public access, disassociate it from the instance and return it to the allocation pool. Procedure Disassociate the floating IP address from the instance: Replace <instance> with the name or ID of the instance that you want to remove public access from. Replace <floating_ip> with the floating IP address that is assigned to the instance. Release the floating IP address back into the allocation pool: Confirm the floating IP address is deleted and is no longer available for assignment: 9.5. Creating an instance with SSH access You can provide SSH access to an instance by specifying a key pair when you create the instance. Key pairs are SSH or x509 credentials that are injected into an instance when it is launched. Each project should have at least one key pair. A key pair belongs to an individual user, not to a project. Note You cannot associate a key pair with an instance after the instance has been created. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. 
To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Prerequisites A key pair is available that you can use to SSH into your instances. For more information, see Generating a new SSH key pair . The network that you plan to create your instance on must be an external network, or a project network connected to a router that has the external network configured as the gateway. For more information, see Adding a router in the Networking Guide . The external network that the instance connects to must have a subnet to provide the floating IP addresses. The security group allows SSH access to instances. For more information, see Securing instance access with security groups and key pairs . The image that the instance is based on contains the cloud-init package to inject the SSH public key into the instance. A floating IP address is available to assign to your instance. For more information, see Assigning a floating IP address to an instance . Procedure Retrieve the name or ID of the flavor that has the hardware profile that your instance requires: Note Choose a flavor with sufficient size for the image to successfully boot, otherwise the instance will fail to launch. Retrieve the name or ID of the image that has the software profile that your instance requires: If the image you require is not available, you can download or create a new image. For information about creating or downloading cloud images, see Creating images . Retrieve the name or ID of the network that you want to connect your instance to: Retrieve the name of the key pair that you want to use to access your instance remotely: Create your instance with SSH access: Replace <flavor> with the name or ID of the flavor that you retrieved in step 1. Replace <image> with the name or ID of the image that you retrieved in step 2. Replace <network> with the name or ID of the network that you retrieved in step 3. You can use the --network option more than once to connect your instance to several networks, as required. Optional: The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group directly to the instance during instance creation, or to a port on the running instance. Use the --security-group option to specify an alternative security group when creating the instance. For information on adding a security group to a port on a running instance, see Adding a security group to a port . Replace <keypair> with the name or ID of the key pair that you retrieved in step 4. Assign a floating IP address to the instance: Replace <floating_ip> with the floating IP address that you want to assign to the instance. Use the automatically created cloud-user account to verify that you can log in to your instance by using SSH: 9.6. Additional resources Creating a network in the Networking Guide . Adding a router in the Networking Guide . | [
"openstack security group list openstack security group rule list <sec_group>",
"openstack security group create mySecGroup",
"openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] mySecGroup",
"openstack security group rule create --protocol tcp --dst-port 22 mySecGroup",
"openstack security group list",
"openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] <group_name>",
"openstack security group rule create --protocol tcp --dst-port 22 mySecGroup",
"openstack security group list",
"openstack security group show <sec-group>",
"openstack security group rule delete <rule> [<rule> ...]",
"openstack port list --server myInstancewithSSH",
"openstack port set --security-group <sec_group> <port>",
"openstack port show <port>",
"openstack port set --no-security-group <port>",
"openstack port set --security-group <sec_group> <port>",
"openstack security group list",
"openstack port list",
"openstack port show <port-uuid> -c security_group_ids",
"openstack security group delete <group> [<group> ...]",
"openstack keypair create <keypair> > ~/.ssh/<keypair>.pem",
"chmod 600 ~/.ssh/<keypair>.pem",
"openstack keypair create --public-key ~/.ssh/<public_key>.pub <keypair> > ~/.ssh/<keypair>.pem",
"openstack keypair create --private-key ~/.ssh/<private_key> <keypair> > ~/.ssh/<keypair>.pem",
"chmod 600 ~/.ssh/<keypair>.pem",
"openstack floating ip list",
"openstack floating ip create <provider-network>",
"openstack server add floating ip [--fixed-ip-address <ip_address>] <instance> <floating_ip>",
"openstack server show <instance>",
"openstack server remove floating ip <instance> <ip_address>",
"openstack floating ip delete <ip_address>",
"openstack floating ip list",
"openstack flavor list",
"openstack image list",
"openstack network list",
"openstack keypair list",
"openstack server create --flavor <flavor> --image <image> --network <network> [--security-group <secgroup>] --key-name <keypair> --wait myInstancewithSSH",
"openstack server add floating ip myInstancewithSSH <floating_ip>",
"ssh -i ~/.ssh/<keypair>.pem cloud-user@<floatingIP> [cloud-user@demo-server1 ~]USD"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/assembly_providing-public-access-to-an-instance_instances |
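Condensing the procedures in this chapter, an end-to-end pass looks roughly like the following. The flavor, image, network, key pair, external network name, and the 203.0.113.23 address (which you would take from the output of the floating ip create step) are all placeholders.

openstack security group create mySecGroup
openstack security group rule create --protocol tcp --dst-port 22 mySecGroup
openstack server create --flavor m1.small --image rhel-9 --network mynet --security-group mySecGroup --key-name mykey --wait myInstancewithSSH
openstack floating ip create external          # note the floating_ip_address in the output
openstack server add floating ip myInstancewithSSH 203.0.113.23
ssh -i ~/.ssh/mykey.pem cloud-user@203.0.113.23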
5.100. gtk2 | 5.100. gtk2 5.100.1. RHBA-2012:0809 - gtk2 bug fix and enhancement update Updated gtk2 packages that fix three bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. GTK+ is a multi-platform toolkit for creating graphical user interfaces. Bug Fixes BZ# 697437 Previously, the "Open Files" dialog box failed to show the "Size" column if it was previously used in "Search" mode. This update fixes the bug by ensuring that the "Size" column is always displayed according to the "Show Size Column" context menu option. BZ# 750756 Previously, copying text from selectable labels, such as those displayed in message dialog boxes, using the Ctrl+Insert key combination did not work. This update adds the Ctrl+Insert key combination that copies selected text to the clipboard when activated. BZ# 801620 Previously, certain GTK applications, such as virt-viewer, failed to properly initialize key bindings associated with menu items. This was due to a bug in the way properties associated with the menu items were parsed by the library. This update fixes the bug, rendering the menu items accessible again by key bindings for applications that use this feature. Enhancement BZ# 689188 Previously, the "Open Files" dialog box could appear with an abnormal width when the "file type" filter contained a very long string (as observed with certain image hosting websites), making the dialog unusable. With this update, the dialog box splits the filter string into multiple lines of text, so that the dialog keeps a reasonable width. All users of gtk2 are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gtk2 |
Installing on OpenStack | Installing on OpenStack OpenShift Container Platform 4.14 Installing OpenShift Container Platform on OpenStack Red Hat OpenShift Documentation Team | [
"#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack catalog list",
"host=<host_name>",
"port=<port_number>",
"openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName",
"X509v3 Subject Alternative Name: DNS:your.host.example.net",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test featureSet: TechPreviewNoUpgrade 1 compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 2 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 3 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 4 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 5 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 6 controlPlanePort: 7 fixedIPs: 8 - subnet: 9 name: subnet-v4 id: subnet-v4-id - subnet: 10 name: subnet-v6 id: subnet-v6-id network: 11 name: dualstack id: network-id",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"openshift-install --log-level debug wait-for install-complete",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"openshift-install --log-level debug wait-for install-complete",
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"openstack port show <cluster_name>-<cluster_ID>-ingress-port",
"openstack floating ip set --port <ingress_port_ID> <apps_FIP>",
"*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>",
"<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload9\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload9\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload10\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload10\"",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload9\",\"type\":\"host-device\",\"device\":\"ens6\" }'",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload10\",\"type\":\"host-device\",\"device\":\"ens5\" }'",
"apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest",
"spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"hwoffload1\", \"type\": \"host-device\",\"pciBusId\": \"0000:00:05.0\", \"ipam\": {}}' 1 type: Raw",
"oc describe SriovNetworkNodeState -n openshift-sriov-network-operator",
"oc apply -f network.yaml",
"openstack port set --no-security-group --disable-port-security <compute_ipv6_port> 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift namespace: ipv6 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - hello-openshift replicas: 2 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift annotations: k8s.v1.cni.cncf.io/networks: ipv6 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: hello-openshift securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8080",
"oc create -f <ipv6_enabled_resource> 1",
"oc edit networks.operator.openshift.io cluster",
"spec: additionalNetworks: - name: ipv6 namespace: ipv6 1 rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipv6\", \"type\": \"macvlan\", \"master\": \"ens4\"}' 2 type: Raw",
"oc get network-attachment-definitions -A",
"NAMESPACE NAME AGE ipv6 ipv6 21h",
"[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] enabled = true 1",
"apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: preserveBootstrapIgnition:",
"compute: platform: openstack: rootVolume: size:",
"compute: platform: openstack: rootVolume: types:",
"compute: platform: openstack: rootVolume: type:",
"compute: platform: openstack: rootVolume: zones:",
"controlPlane: platform: openstack: rootVolume: size:",
"controlPlane: platform: openstack: rootVolume: types:",
"controlPlane: platform: openstack: rootVolume: type:",
"controlPlane: platform: openstack: rootVolume: zones:",
"platform: openstack: cloud:",
"platform: openstack: externalNetwork:",
"platform: openstack: computeFlavor:",
"compute: platform: openstack: additionalNetworkIDs:",
"compute: platform: openstack: additionalSecurityGroupIDs:",
"compute: platform: openstack: zones:",
"compute: platform: openstack: serverGroupPolicy:",
"controlPlane: platform: openstack: additionalNetworkIDs:",
"controlPlane: platform: openstack: additionalSecurityGroupIDs:",
"controlPlane: platform: openstack: zones:",
"controlPlane: platform: openstack: serverGroupPolicy:",
"platform: openstack: clusterOSImage:",
"platform: openstack: clusterOSImageProperties:",
"platform: openstack: defaultMachinePlatform:",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"platform: openstack: ingressFloatingIP:",
"platform: openstack: apiFloatingIP:",
"platform: openstack: externalDNS:",
"platform: openstack: loadbalancer:",
"platform: openstack: machinesSubnet:",
"controlPlane: platform: gcp: osImage: project:",
"controlPlane: platform: gcp: osImage: name:",
"compute: platform: gcp: osImage: project:",
"compute: platform: gcp: osImage: name:",
"platform: gcp: network:",
"platform: gcp: networkProjectID:",
"platform: gcp: projectID:",
"platform: gcp: region:",
"platform: gcp: controlPlaneSubnet:",
"platform: gcp: computeSubnet:",
"platform: gcp: defaultMachinePlatform: zones:",
"platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: gcp: defaultMachinePlatform: osDisk: diskType:",
"platform: gcp: defaultMachinePlatform: osImage: project:",
"platform: gcp: defaultMachinePlatform: osImage: name:",
"platform: gcp: defaultMachinePlatform: tags:",
"platform: gcp: defaultMachinePlatform: type:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:",
"platform: gcp: defaultMachinePlatform: secureBoot:",
"platform: gcp: defaultMachinePlatform: confidentialCompute:",
"platform: gcp: defaultMachinePlatform: onHostMaintenance:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"controlPlane: platform: gcp: osDisk: diskSizeGB:",
"controlPlane: platform: gcp: osDisk: diskType:",
"controlPlane: platform: gcp: tags:",
"controlPlane: platform: gcp: type:",
"controlPlane: platform: gcp: zones:",
"controlPlane: platform: gcp: secureBoot:",
"controlPlane: platform: gcp: confidentialCompute:",
"controlPlane: platform: gcp: onHostMaintenance:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"compute: platform: gcp: osDisk: diskSizeGB:",
"compute: platform: gcp: osDisk: diskType:",
"compute: platform: gcp: tags:",
"compute: platform: gcp: type:",
"compute: platform: gcp: zones:",
"compute: platform: gcp: secureBoot:",
"compute: platform: gcp: confidentialCompute:",
"compute: platform: gcp: onHostMaintenance:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_openstack/index |
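The CSR approval one-liner listed among the commands above (piping oc get csr into oc adm certificate approve) only acts on the requests that are pending at the moment it runs. As a rough sketch only — this loop is not part of the documented installation flow, and the 30-second interval and manual stop condition are assumptions — the same command can be repeated while compute nodes are joining the cluster:

# Hypothetical helper, assuming KUBECONFIG already points at the new cluster.
# Approve any pending node CSRs every 30 seconds; stop with Ctrl+C once all
# nodes report Ready in `oc get nodes`.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done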
8.246. tsclient | 8.246. tsclient 8.246.1. RHBA-2014:0524 - tsclient bug fix update Updated tsclient packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The Terminal Server Client (tsclient) is a GTK2 front end that makes it easy to use the Remote Desktop Protocol client (rdesktop) and vncviewer utilities. Bug Fixes BZ# 798631 Previously, the tsclient user interface did not offer an option to set the 32-bit color depth in the "Advanced Options" menu for the "Windows Terminal Service" connection type. Consequently, the 32-bit color depth could not be selected even though it was supported on the system, and connection to Windows systems could not be established with more than 16-bit color depth. With this update, the error in the "Advanced Options" menu has been fixed, and the 32-bit color depth option now functions as intended. BZ# 848526 Prior to this update, tsclient was not fully compatible with the Remote Desktop Protocol (RDP). Consequently, tsclient could, under certain circumstances, terminate unexpectedly when the user was connected to a remote system over RDP. This update addresses the problems with RDP compatibility, and tsclient no longer crashes when using RDP for remote connection. Users of tsclient are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/tsclient |
Chapter 2. Load balancing considerations | Chapter 2. Load balancing considerations Distributing load between several Capsule Servers prevents any one Capsule from becoming a single point of failure. Configuring Capsules to use a load balancer can provide resilience against planned and unplanned outages. This improves availability and responsiveness. Consider the following guidelines when configuring load balancing: If you use Puppet, Puppet certificate signing is assigned to the first Capsule that you configure. If the first Capsule is down, clients cannot obtain Puppet content. This solution does not use Pacemaker or other similar HA tools to maintain one state across all Capsules. To troubleshoot issues, reproduce the issue on each Capsule, bypassing the load balancer. Additional maintenance required for load balancing Configuring Capsules to use a load balancer results in a more complex environment and requires additional maintenance. The following additional steps are required for load balancing: You must ensure that all Capsules have the same content views and synchronize all Capsules to the same content view versions You must upgrade each Capsule in sequence You must backup each Capsule that you configure regularly Upgrading Capsule Servers in a load balancing configuration To upgrade Capsule Servers from 6.14 to 6.15, complete the Upgrading Capsule Servers procedure in Upgrading connected Red Hat Satellite to 6.15 . There are no additional steps required for Capsule Servers in a load balancing configuration. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_capsules_with_a_load_balancer/load_balancing_considerations_load-balancing |
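Because every Capsule behind the load balancer must serve identical content, the synchronization requirement above lends itself to a small script. The following is a minimal sketch, not a documented procedure: it assumes the hammer CLI is already configured on the Satellite Server, and the Capsule IDs 2 and 3 are placeholders for the load-balanced Capsules reported by hammer capsule list.

# Minimal sketch: push the same content to every load-balanced Capsule.
# The IDs below are placeholders; look them up with `hammer capsule list`.
for capsule_id in 2 3; do
  hammer capsule content synchronize --id "${capsule_id}"
done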
Network APIs | Network APIs OpenShift Container Platform 4.16 Reference guide for network APIs Red Hat OpenShift Documentation Team | [
"Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]",
"type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`",
"// other fields }",
"type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`",
"// other fields }",
"type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`",
"// other fields }",
"Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]",
"{ Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }",
"a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], b: [ 10.10.1.1:309, 10.10.2.2:309 ]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/network_apis/index |
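The mysvc example in the code listings above is easier to read when rendered as a manifest. The following YAML is an illustrative sketch of the same Endpoints object, mirroring the addresses and ports shown in the listing; the name and IP addresses are the example values, not real services:

apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc
subsets:
- addresses:
  - ip: 10.10.1.1
  - ip: 10.10.2.2
  ports:
  - name: a
    port: 8675
  - name: b
    port: 309
- addresses:
  - ip: 10.10.3.3
  ports:
  - name: a
    port: 93
  - name: b
    port: 76

Each subset expands to the address and port combinations shown in the final listing, for example a: [ 10.10.1.1:8675, 10.10.2.2:8675 ].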
Chapter 16. InsightsOperator [operator.openshift.io/v1] | Chapter 16. InsightsOperator [operator.openshift.io/v1] Description InsightsOperator holds cluster-wide information about the Insights Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 16.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Insights. status object status is the most recently observed status of the Insights operator. 16.1.1. .spec Description spec is the specification of the desired behavior of the Insights. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 16.1.2. .status Description status is the most recently observed status of the Insights operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. gatherStatus object gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. insightsReport object insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 16.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 16.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 16.1.5. .status.gatherStatus Description gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. Type object Property Type Description gatherers array gatherers is a list of active gatherers (and their statuses) in the last gathering. gatherers[] object gathererStatus represents information about a particular data gatherer. lastGatherDuration string lastGatherDuration is the total time taken to process all gatherers during the last gather event. lastGatherTime string lastGatherTime is the last time when Insights data gathering finished. An empty value means that no data has been gathered yet. 16.1.6. .status.gatherStatus.gatherers Description gatherers is a list of active gatherers (and their statuses) in the last gathering. Type array 16.1.7. .status.gatherStatus.gatherers[] Description gathererStatus represents information about a particular data gatherer. Type object Required conditions lastGatherDuration name Property Type Description conditions array conditions provide details on the status of each gatherer. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastGatherDuration string lastGatherDuration represents the time spent gathering. name string name is the name of the gatherer. 16.1.8. .status.gatherStatus.gatherers[].conditions Description conditions provide details on the status of each gatherer. Type array 16.1.9. .status.gatherStatus.gatherers[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. 
If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 16.1.10. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 16.1.11. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 16.1.12. .status.insightsReport Description insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. Type object Property Type Description downloadedAt string downloadedAt is the time when the last Insights report was downloaded. An empty value means that there has not been any Insights report downloaded yet and it usually appears in disconnected clusters (or clusters when the Insights data gathering is disabled). healthChecks array healthChecks provides basic information about active Insights health checks in a cluster. healthChecks[] object healthCheck represents an Insights health check attributes. 16.1.13. .status.insightsReport.healthChecks Description healthChecks provides basic information about active Insights health checks in a cluster. Type array 16.1.14. .status.insightsReport.healthChecks[] Description healthCheck represents an Insights health check attributes. Type object Required advisorURI description state totalRisk Property Type Description advisorURI string advisorURI provides the URL link to the Insights Advisor. description string description provides basic description of the healtcheck. state string state determines what the current state of the health check is. Health check is enabled by default and can be disabled by the user in the Insights advisor user interface. totalRisk integer totalRisk of the healthcheck. 
Indicator of the total risk posed by the detected issue; combination of impact and likelihood. The values can be from 1 to 4, and the higher the number, the more important the issue. 16.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/insightsoperators DELETE : delete collection of InsightsOperator GET : list objects of kind InsightsOperator POST : create an InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name} DELETE : delete an InsightsOperator GET : read the specified InsightsOperator PATCH : partially update the specified InsightsOperator PUT : replace the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/scale GET : read scale of the specified InsightsOperator PATCH : partially update scale of the specified InsightsOperator PUT : replace scale of the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/status GET : read status of the specified InsightsOperator PATCH : partially update status of the specified InsightsOperator PUT : replace status of the specified InsightsOperator 16.2.1. /apis/operator.openshift.io/v1/insightsoperators HTTP method DELETE Description delete collection of InsightsOperator Table 16.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind InsightsOperator Table 16.2. HTTP responses HTTP code Reponse body 200 - OK InsightsOperatorList schema 401 - Unauthorized Empty HTTP method POST Description create an InsightsOperator Table 16.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.4. Body parameters Parameter Type Description body InsightsOperator schema Table 16.5. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 202 - Accepted InsightsOperator schema 401 - Unauthorized Empty 16.2.2. /apis/operator.openshift.io/v1/insightsoperators/{name} Table 16.6. Global path parameters Parameter Type Description name string name of the InsightsOperator HTTP method DELETE Description delete an InsightsOperator Table 16.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 16.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified InsightsOperator Table 16.9. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified InsightsOperator Table 16.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.11. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified InsightsOperator Table 16.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.13. Body parameters Parameter Type Description body InsightsOperator schema Table 16.14. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty 16.2.3. 
/apis/operator.openshift.io/v1/insightsoperators/{name}/scale Table 16.15. Global path parameters Parameter Type Description name string name of the InsightsOperator HTTP method GET Description read scale of the specified InsightsOperator Table 16.16. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified InsightsOperator Table 16.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.18. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified InsightsOperator Table 16.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.20. Body parameters Parameter Type Description body Scale schema Table 16.21. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 16.2.4. /apis/operator.openshift.io/v1/insightsoperators/{name}/status Table 16.22. 
Global path parameters Parameter Type Description name string name of the InsightsOperator HTTP method GET Description read status of the specified InsightsOperator Table 16.23. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified InsightsOperator Table 16.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.25. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified InsightsOperator Table 16.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.27. Body parameters Parameter Type Description body InsightsOperator schema Table 16.28. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/insightsoperator-operator-openshift-io-v1 |
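The API endpoints listed above correspond to ordinary oc operations on the cluster-scoped InsightsOperator resource. A brief sketch, assuming the default instance name cluster and a logged-in user with sufficient permissions; the jsonpath expression and the Debug log level are illustrative choices, not recommendations from this document:

$ oc get insightsoperator cluster -o yaml
$ oc get insightsoperator cluster -o jsonpath='{.status.gatherStatus.lastGatherTime}'
$ oc patch insightsoperator cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'

The first two commands exercise the GET paths for the resource and its status, and the patch corresponds to the PATCH method on /apis/operator.openshift.io/v1/insightsoperators/{name}.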
Chapter 38. Clustering | Chapter 38. Clustering pcs now supports managing multi-site clusters that use Booth and ticket constraints As a Technology Preview starting with Red Hat Enterprise Linux 7.3, the pcs tool enables you to manage multi-site clusters that use the Booth cluster ticket manager by using the pcs booth command. You can also set ticket constraints by using the pcs constraint ticket command to manage resources in multi-site clusters. It is also possible to manage ticket constraints in the web UI. (BZ# 1305049 , BZ#1308514) Support for quorum devices in a Pacemaker cluster Starting with Red Hat Enterprise Linux 7.3, you can configure a separate quorum device (QDevice) which acts as a third-party arbitration device for the cluster. This functionality is provided as a Technology Preview, and its primary use is to allow a cluster to sustain more node failures than standard quorum rules allow. A quorum device is recommended for clusters with an even number of nodes and highly recommended for two-node clusters. For information on configuring a quorum device, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/ . (BZ#1158805) Support for clufter , a tool for transforming and analyzing cluster configuration formats The clufter package, available as a Technology Preview in Red Hat Enterprise Linux 7, provides a tool for transforming and analyzing cluster configuration formats. It can be used to assist with migration from an older stack configuration to a newer configuration that leverages Pacemaker. For information on the capabilities of clufter , see the clufter(1) man page or the output of the clufter -h command. (BZ#1212909) clufter rebased to version 0.59.5 The clufter packages, available as a Technology Preview, have been upgraded to upstream version 0.59.5, which provides a number of bug fixes, new features, and user experience enhancements over the version. Among the notable updates are the following: When converting the old cluster stack configuration into files for a Pacemaker stack or into the respective sequence of pcs commands with the ccs2pcs and ccs2pcscmd families of clufter commands, monitor action is properly propagated or added. When converting configuration files for the Pacemaker stack using the corosync.conf file, either as a byproduct of converting CMAN-based configuration or with first-class input such as the *2pcscmd{,-needle} families of commands, the cluster name is propagated correctly. Previously, the cluster name was mistakenly dropped, resulting in a command that confused the name of the first cluster node for the name of the cluster as in, for example, pcs cluster setup --start --name node1 node2 node3 . When converting CMAN-based configuration into the parallel configuration for a Pacemaker stack with the ccs2pcs family of commands, accidentally broken values of attributes marked as having an ID type in the schema no longer occur. When converting either CMAN or Pacemaker stack specific configuration into the respective sequence of pcs commands with the *2pcscmd families of commands, the clufter tool no longer suggests pcs cluster cib file --config , which does not currently work for subsequent local-modification pcs commands. Instead it suggests pcs cluster cib file . The clufter tool outputs now may vary significantly depending on the specified distribution target since the tool now aligns the output with what the respective environment, such as the pcs version, can support. 
Because of this, your distribution or setup may not be supported, and you should not expect that one sequence of pcs commands that the clufter tool produces is portable to a completely different environment. The clufter tool now supports several new features of the pcs tool, including quorum devices. Additionally, the clufter tool supports older features recently added to the pcs tool, including ticket constraints, and resource sets for colocation and order constraints. (BZ# 1343661 , BZ# 1270740 , BZ# 1272570 , BZ# 1272592 , BZ# 1300014 , BZ# 1300050 , BZ# 1328078 ) Support for Booth cluster ticket manager Red Hat Enterprise Linux 7.3 provides support for a Booth cluster ticket manager as a technology preview. This allows you to configure multiple high availability clusters in separate sites that communicate through a distributed service to coordinate management of resources. The Booth ticket manager facilitates a consensus-based decision process for individual tickets that ensure that specified resources are run at only one site at a time, for which a ticket has been granted. For information on configuring multi-site clusters with the Booth ticket manager, see the https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/ (BZ#1302087) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/technology_previews_clustering |
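The Booth and quorum device Technology Previews described above are both driven through the pcs interface. A minimal sketch follows, assuming an existing two-node Pacemaker cluster with the quorum device packages installed; the arbitrator host qnetd-host.example.com, the ticket name apacheticket, and the resource name apachegroup are placeholders, and a complete Booth deployment additionally requires configuring the sites and arbitrators:

$ pcs quorum device add model net host=qnetd-host.example.com algorithm=ffsplit
$ pcs booth ticket add apacheticket
$ pcs constraint ticket add apacheticket apachegroup loss-policy=fence

The ticket constraint ensures that apachegroup runs only at the site that currently holds apacheticket, with the affected nodes fenced if the ticket is lost.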
Chapter 4. Configuration options for JDK Flight Recorder | Chapter 4. Configuration options for JDK Flight Recorder You can configure JDK Flight Recorder (JFR) to capture various sets of events using the command line or diagnostic commands. 4.1. Configure JDK Flight Recorder using the command line You can configure JDK Flight Recorder (JFR) from the command line using the following options: 4.1.1. Start JFR Use -XX:StartFlightRecording option to start a JFR recording for the Java application. For example: You can set the following parameter=value entries when starting a JFR recording: delay=time Use this parameter to specify the delay between the Java application launch time and the start of the recording. Append s to specify the time in seconds, m for minutes, h for hours, or d for days. For example, specifying 10m means 10 minutes. By default, there is no delay, and this parameter is set to 0. disk={true|false} Use this parameter to specify whether to write data to disk while recording. By default, this parameter is true . dumponexit={true|false} Use this parameter to specify if the running recording is dumped when the JVM shuts down. If the parameter is enabled and a file name is not set, the recording is written to a file in the directory where the recording progress has started. The file name is a system-generated name that contains the process ID, recording ID, and current timestamp. For example, hotspot-pid-47496-id-1-2018_01_25_19_10_41.jfr. By default, this parameter is false . duration=time Use this parameter to specify the duration of the recording. Append s to specify the time in seconds, m for minutes, h for hours, or d for days. For example, if you specify duration as 5h, it indicates 5 hours. By default, this parameter is set to 0, which means there is no limit set on the recording duration. filename=path Use this parameter to specify the path and name of the recording file. The recording is written to this file when stopped. For example: · recording.jfr · /home/user/recordings/recording.jfr name=identifier Use this parameter to specify both the name and the identifier of a recording. maxage=time Use this parameter to specify the maximum number of days the recording should be available on the disk. This parameter is valid only when the disk parameter is set to true. Append s to specify the time in seconds, m for minutes, h for hours, or d for days. For example, when you specify 30s, it indicates 30 seconds. By default, this parameter is set to 0, which means there is no limit set. maxsize=size Use this parameter to specify the maximum size of disk data to keep for the recording. This parameter is valid only when the disk parameter is set to true. The value must not be less than the value for the maxchunksize parameter set with -XX:FlightRecorderOptions . Append m or M to specify the size in megabytes, or g or G to specify the size in gigabytes. By default, the maximum size of disk data isn't limited, and this parameter is set to 0. path-to-gc-roots={true|false} Use this parameter to specify whether to collect the path to garbage collection (GC) roots at the end of a recording. By default, this parameter is set to false. The path to GC roots is useful for finding memory leaks. For Red Hat build of OpenJDK 11, you can enable the OldObjectSample event which is a more efficient alternative than using heap dumps. You can also use the OldObjectSample event in production. Collecting memory leak information is time-consuming and incurs extra overhead. 
You should enable this parameter only when you start recording an application that you suspect has memory leaks. If the JFR profile parameter is set to profile, you can trace the stack from where the object is leaking. It is included in the information collected. settings=path Use this parameter to specify the path and name of the event settings file (of type JFC). By default, the default.jfc file is used, which is located in JAVA_HOME/lib/jfr. This default settings file collects a predefined set of information with low overhead, so it has minimal impact on performance and can be used with recordings that run continuously. The second settings file is also provided, profile.jfc, which provides more data than the default configuration, but can have more overhead and impact performance. Use this configuration for short periods of time when more information is needed. Note You can specify values for multiple parameters by separating them with a comma. For example, -XX:StartFlightRecording=disk=false , name=example-recording . 4.1.2. Control behavior of JFR Use -XX:FlightRecorderOptions option to sets the parameters that control the behavior of JFR. For example: You can set the following parameter=value entries to control the behavior of JFR: globalbuffersize=size Use this parameter to specify the total amount of primary memory used for data retention. The default value is based on the value specified for memorysize . You can change the memorysize parameter to alter the size of global buffers. maxchunksize=size Use this parameter to specify the maximum size of the data chunks in a recording. Append m or M to specify the size in megabytes (MB), or g or G to specify the size in gigabytes (GB). By default, the maximum size of data chunks is set to 12 MB. The minimum size allowed is 1 MB. memorysize=size Use this parameter to determine how much buffer memory should be used. The parameter sets the globalbuffersize and numglobalbuffers parameters based on the size specified. Append m or M to specify the size in megabytes (MB), or g or G to specify the size in gigabytes (GB). By default, the memory size is set to 10 MB. numglobalbuffers=number Use this parameter to specify the number of global buffers used. The default value is based on the size specified in the memorysize parameter. You can change the memorysize parameter to alter the number of global buffers. old-object-queue-size=number-of-objects Use this parameter to track the maximum number of old objects. By default, the number of objects is set to 256. repository=path Use this parameter to specify the repository for temporary disk storage. By default, it uses system temporary directory. retransform={true|false} Use this parameter to specify if event classes should be retransformed using JVMTI. If set to false , instrumentation is added to loaded event classes. By default, this parameter is set to true for enabling class retransformation. samplethreads={true|false} Use this parameter to specify whether thread sampling is enabled. Thread sampling only occurs when the sampling event is enabled and this parameter is set to true . By default, this parameter is set to true . stackdepth=depth Use this parameter to set the stack depth for stack traces. By default, the stack depth is set to 64 method calls. You can set the maximum stack depth to 2048. Values greater than 64 could create significant overhead and reduce performance. threadbuffersize=size Use this parameter to specify the local buffer size for a thread. 
By default, the local buffer size is set to 8 kilobytes, with a minimum value of 4 kilobytes. Overriding this parameter could reduce performance and is not recommended. Note You can specify values for multiple parameters by separating them with a comma. 4.2. Configuring JDK Flight Recorder using diagnostic command (JCMD) You can configure JDK Flight Recorder (JFR) using Java diagnostic command. The simplest way to execute a diagnostic command is to use the jcmd tool which is located in the Java installation directory. To use a command, you have to pass the process identifier of the JVM or the name of the main class, and the actual command as arguments to jcmd . You can retrieve the JVM or the name of the main class by running jcmd without arguments or by using jps . The jps (Java Process Status) tool lists JVMs on a target system to which it has access permissions. To see a list of all running Java processes, use the jcmd command without any arguments. To see a complete list of commands available for a running Java application, specify help as the diagnostic command after the process identifier or the name of the main class. Use the following diagnostic commands for JFR: 4.2.1. Start JFR Use JFR.start diagnostic command to start a flight recording. For example: Table 4.1. The following table lists the parameters you can use with this command: Parameter Description Data type Default value name Name of the recording String - settings Server-side template String - duration Duration of recording Time 0s filename Resulting recording file name String - maxage Maximum age of buffer data Time 0s maxsize Maximum size of buffers in bytes Long 0 dumponexit Dump running recording when JVM shuts down Boolean - path-to-gc-roots Collect path to garbage collector roots Boolean False 4.2.2. Stop JFR Use JFR.stop diagnostic command to stop running flight recordings. For example: Table 4.2. The following table lists the parameters you can use with this command. Parameter Description Data type Default value name Name of the recording String - filename Copy recording data to the file String - 4.2.3. Check JFR Use JFR.check command to show information about the recordings which are in progress. For example: Table 4.3. The following table lists the parameters you can use with this command. Parameter Description Data type Default value name Name of the recording String - filename Copy recording data to the file String - maxage Maximum duration to dump file Time 0s maxsize Maximum amount of bytes to dump Long 0 begin Starting time to dump data String - end Ending time to dump data String - path-to-gc-roots Collect path to garbage collector roots Boolean false 4.2.4. Dump JFR Use JFR.dump diagnostic command to copy the content of a flight recording to a file. For example: Table 4.4. The following table lists the parameters you can use with this command. Parameter Description Data type Default value name Name of the recording String - filename Copy recording data to the file String - maxage Maximum duration to dump file Time 0s maxsize Maximum amount of bytes to dump Long 0 begin Starting time to dump data String - end Ending time to dump data String - path-to-gc-roots Collect path to garbage collector roots Boolean false 4.2.5. Configure JFR Use JFR.configure diagnostic command to configure the flight recordings. For example: Table 4.5. The following table lists the parameters you can use with this command. 
Parameter Description Data type Default value repositorypath Path to repository String - dumppath Path to dump String - stackdepth Stack depth Jlong 64 globalbuffercount Number of global buffers Jlong 32 globalbuffersize Size of a global buffer Jlong 524288 thread_buffer_size Size of a thread buffer Jlong 8192 memorysize Overall memory size Jlong 16777216 maxchunksize Size of an individual disk chunk Jlong 12582912 Samplethreads Activate thread sampling Boolean true Revised on 2024-05-09 17:12:45 UTC | [
"java -XX:StartFlightRecording=delay=5s,disk=false,dumponexit=true,duration=60s,filename=myrecording.jfr <<YOUR_JAVA_APPLICATION>>",
"java -XX:FlightRecorderOptions=duration=60s,filename=myrecording.jfr -XX:FlightRecorderOptions=stackdepth=128,maxchunksize=2M <<YOUR_JAVA_APPLICATION>>",
"jcmd <PID> JFR.start delay=10s duration=10m filename=recording.jfr",
"jcmd <PID> JFR.stop name=output_file",
"jcmd <PID> JFR.check",
"jcmd <PID> JFR.dump name=output_file filename=output.jfr",
"jcmd <PID> JFR.configure repositorypath=/home/jfr/recordings"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/configure-jfr-options |
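The diagnostic commands above can be combined into a short workflow. The following sketch assumes a running application whose main class is MyApplication (a placeholder) and uses only parameters from the tables above: the bundled profile settings template, a two-minute duration, and file paths under /tmp that are example values:

$ jcmd MyApplication JFR.start name=profiled settings=profile duration=2m filename=/tmp/profiled.jfr
$ jcmd MyApplication JFR.check
$ jcmd MyApplication JFR.dump name=profiled filename=/tmp/partial.jfr

JFR.check confirms that the recording named profiled is running, and JFR.dump writes the data collected so far without stopping the recording.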
Chapter 21. Managing self-service rules using the IdM Web UI | Chapter 21. Managing self-service rules using the IdM Web UI Learn about self-service rules in Identity Management (IdM) and how to create and edit self-service access rules in the web interface (IdM Web UI). 21.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 21.2. Creating self-service rules using the IdM Web UI Follow this procedure to create self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Click Add at the upper right of the list of self-service access rules. The Add Self Service Permission window opens. Enter the name of the new self-service rule in the Self-service name field. Spaces are allowed. Select the check boxes next to the attributes you want users to be able to edit. Optional: If an attribute you want to provide access to is not listed, you can add a listing for it: Click the Add button. Enter the attribute name in the Attribute text field of the following Add Custom Attribute window. Click the OK button to add the attribute. Verify that the new attribute is selected. Click the Add button at the bottom of the form to save the new self-service rule. Alternatively, you can save and continue editing the self-service rule by clicking the Add and Edit button, or save and add further rules by clicking the Add and Add another button. 21.3. Editing self-service rules using the IdM Web UI Follow this procedure to edit self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Click the name of the self-service rule you want to modify. The edit page only allows you to edit the list of attributes you want to add to or remove from the self-service rule. Select or deselect the appropriate check boxes. Click the Save button to save your changes to the self-service rule. 21.4. Deleting self-service rules using the IdM Web UI Follow this procedure to delete self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Select the check box next to the rule you want to delete, then click the Delete button on the right of the list. A dialog opens; click Delete to confirm. 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-self-service-rules-in-idm-using-the-idm-web-ui_managing-users-groups-hosts |
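Each of the Web UI procedures above has a command-line counterpart in the ipa utility, which can be easier to script. A hedged sketch, where the rule name and the attribute list are examples only and a valid Kerberos ticket for an administrative user is assumed:

$ ipa selfservice-add "Users can manage their own name details" --permissions=write --attrs=givenname --attrs=sn
$ ipa selfservice-mod "Users can manage their own name details" --attrs=givenname --attrs=sn --attrs=mobile
$ ipa selfservice-del "Users can manage their own name details"

Note that selfservice-mod replaces the rule's attribute list, so include every attribute the rule should keep, not only the new one.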
Chapter 14. Open Container Initiative support | Chapter 14. Open Container Initiative support Container registries were originally designed to support container images in the Docker image format. To promote the use of additional runtimes apart from Docker, the Open Container Initiative (OCI) was created to provide a standard for container runtimes and image formats. Most container registries support the OCI standard because it is based on the Docker image manifest V2, Schema 2 format. In addition to container images, a variety of artifacts have emerged that support not just individual applications, but also the Kubernetes platform as a whole. These range from Open Policy Agent (OPA) policies for security and governance to Helm charts and Operators that aid in application deployment. Quay.io is a private container registry that not only stores container images, but also supports an entire ecosystem of tooling to aid in the management of containers. Quay.io strives to be as compatible as possible with the OCI 1.1 Image and Distribution specifications, and supports common media types like Helm charts (as long as they are pushed with a version of Helm that supports OCI) and a variety of arbitrary media types within the manifest or layer components of container images. Support for OCI media types differs from earlier iterations of Quay.io, when the registry was stricter about accepted media types. Because Quay.io now works with a wider array of media types, including those that were previously outside the scope of its support, it is now more versatile, accommodating not only standard container image formats but also emerging or unconventional types. In addition to its expanded support for novel media types, Quay.io ensures compatibility with Docker images, including V2_2 and V2_1 formats. This compatibility with Docker V2_2 and V2_1 images demonstrates Quay.io's commitment to providing a seamless experience for Docker users. Moreover, Quay.io continues to extend its support for Docker V1 pulls, catering to users who might still rely on this earlier version of Docker images. Support for OCI artifacts is enabled by default. The following examples show how to use some common media types; the same patterns apply to other OCI media types. 14.1. Helm and OCI prerequisites Helm simplifies how applications are packaged and deployed. Helm uses a packaging format called charts, which contain the Kubernetes resources representing an application. Quay.io supports Helm charts as long as they are pushed with a version of Helm that supports OCI. Use the following procedures to pre-configure your system to use Helm and other OCI media types. The most recent version of Helm can be downloaded from the Helm releases page. 14.2. Using Helm charts Use the following example to download and push an etherpad chart from the Red Hat Community of Practice (CoP) repository. Prerequisites You have logged into Quay.io. 
Procedure Add a chart repository by entering the following command: USD helm repo add redhat-cop https://redhat-cop.github.io/helm-charts Enter the following command to update the information of available charts locally from the chart repository: USD helm repo update Enter the following command to pull a chart from a repository: USD helm pull redhat-cop/etherpad --version=0.0.4 --untar Enter the following command to package the chart into a chart archive: USD helm package ./etherpad Example output Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz Log in to Quay.io using helm registry login : USD helm registry login quay.io Push the chart to your repository using the helm push command: helm push etherpad-0.0.4.tgz oci://quay.io/<organization_name>/helm Example output: Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b Ensure that the push worked by deleting the local copy, and then pulling the chart from the repository: USD rm -rf etherpad-0.0.4.tgz USD helm pull oci://quay.io/<organization_name>/helm/etherpad --version 0.0.4 Example output: Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902 14.3. Cosign OCI support Cosign is a tool that can be used to sign and verify container images. It uses the ECDSA-P256 signature algorithm and Red Hat's Simple Signing payload format to create public keys that are stored in PKIX files. Private keys are stored as encrypted PEM files. Cosign currently supports the following: Hardware and KMS Signing Bring-your-own PKI OIDC PKI Built-in binary transparency and timestamping service Use the following procedure to directly install Cosign. Prerequisites You have installed Go version 1.16 or later. Procedure Enter the following go command to directly install Cosign: USD go install github.com/sigstore/cosign/cmd/[email protected] Example output go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0 Generate a key-value pair for Cosign by entering the following command: USD cosign generate-key-pair Example output Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub Sign the key-value pair by entering the following command: USD cosign sign -key cosign.key quay.io/user1/busybox:test Example output Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig If you experience the error: signing quay-server.example.com/user1/busybox:test: getting remote image: GET https://quay-server.example.com/v2/user1/busybox/manifests/test : UNAUTHORIZED: access to the requested resource is not authorized; map[] error, which occurs because Cosign relies on ~./docker/config.json for authorization, you might need to execute the following command: USD podman login --authfile ~/.docker/config.json quay.io Example output Username: Password: Login Succeeded! Enter the following command to see the updated authorization configuration: USD cat ~/.docker/config.json { "auths": { "quay-server.example.com": { "auth": "cXVheWFkbWluOnBhc3N3b3Jk" } } 14.4. Installing and using Cosign Use the following procedure to directly install Cosign. Prerequisites You have installed Go version 1.16 or later. 
You have set FEATURE_GENERAL_OCI_SUPPORT to true in your config.yaml file. Procedure Enter the following go command to directly install Cosign: USD go install github.com/sigstore/cosign/cmd/[email protected] Example output go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0 Generate a key-value pair for Cosign by entering the following command: USD cosign generate-key-pair Example output Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub Sign the key-value pair by entering the following command: USD cosign sign -key cosign.key quay.io/user1/busybox:test Example output Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig If you experience the error: signing quay-server.example.com/user1/busybox:test: getting remote image: GET https://quay-server.example.com/v2/user1/busybox/manifests/test : UNAUTHORIZED: access to the requested resource is not authorized; map[] error, which occurs because Cosign relies on ~./docker/config.json for authorization, you might need to execute the following command: USD podman login --authfile ~/.docker/config.json quay.io Example output Username: Password: Login Succeeded! Enter the following command to see the updated authorization configuration: USD cat ~/.docker/config.json { "auths": { "quay-server.example.com": { "auth": "cXVheWFkbWluOnBhc3N3b3Jk" } } | [
"helm repo add redhat-cop https://redhat-cop.github.io/helm-charts",
"helm repo update",
"helm pull redhat-cop/etherpad --version=0.0.4 --untar",
"helm package ./etherpad",
"Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz",
"helm registry login quay.io",
"helm push etherpad-0.0.4.tgz oci://quay.io/<organization_name>/helm",
"Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b",
"rm -rf etherpad-0.0.4.tgz",
"helm pull oci://quay.io/<organization_name>/helm/etherpad --version 0.0.4",
"Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902",
"go install github.com/sigstore/cosign/cmd/[email protected]",
"go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0",
"cosign generate-key-pair",
"Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub",
"cosign sign -key cosign.key quay.io/user1/busybox:test",
"Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig",
"podman login --authfile ~/.docker/config.json quay.io",
"Username: Password: Login Succeeded!",
"cat ~/.docker/config.json { \"auths\": { \"quay-server.example.com\": { \"auth\": \"cXVheWFkbWluOnBhc3N3b3Jk\" } }",
"go install github.com/sigstore/cosign/cmd/[email protected]",
"go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0",
"cosign generate-key-pair",
"Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub",
"cosign sign -key cosign.key quay.io/user1/busybox:test",
"Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig",
"podman login --authfile ~/.docker/config.json quay.io",
"Username: Password: Login Succeeded!",
"cat ~/.docker/config.json { \"auths\": { \"quay-server.example.com\": { \"auth\": \"cXVheWFkbWluOnBhc3N3b3Jk\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/about_quay_io/oci-intro |
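Signing is only useful if the signature can be checked later. The verification step below is a brief sketch that reuses the cosign.pub key and the image reference from the signing example above, and it assumes the Cosign 1.0 flag syntax shown earlier in this chapter:

$ cosign verify -key cosign.pub quay.io/user1/busybox:test

If the signature stored alongside the image matches the public key, Cosign prints the verified payload; otherwise the command fails.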
Chapter 8. Installation configuration parameters for the Agent-based Installer | Chapter 8. Installation configuration parameters for the Agent-based Installer Before you deploy an OpenShift Container Platform cluster using the Agent-based Installer, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml and agent-config.yaml files, you must provide values for the required parameters, and you can use the optional parameters to customize your cluster further. 8.1. Available installation configuration parameters The following tables specify the required and optional installation configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the install-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: baremetal , external , none , or vsphere . Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Consider the following information before you configure network parameters for your cluster: If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you deployed nodes in an OpenShift Container Platform cluster with a network that supports both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. This ensures that in a multiple network interface controller (NIC) environment, a cluster can detect what NIC to use based on the available network interface. 
For more information, see "OVN-Kubernetes IPv6 and dual-stack limitations" in About the OVN-Kubernetes network plugin . To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host that supports dual-stack networking. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Table 8.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 Required if you use networking.clusterNetwork . An IP address block. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Configures the IPv4 join subnet that is used internally by ovn-kubernetes . 
This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation. An IP network block in CIDR notation. The default value is 100.64.0.0/16 . 8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. baremetal , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. 
Valid values are amd64 , arm64 , ppc64le , and s390x . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. baremetal , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , 4 , 5 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 8.1.4. 
Additional bare metal configuration parameters for the Agent-based Installer Additional bare metal installation configuration parameters for the Agent-based Installer are described in the following table: Note These fields are not used during the initial provisioning of the cluster, but they are available to use once the cluster has been installed. Configuring these fields at install time eliminates the need to set them as a Day 2 operation. Table 8.4. Additional bare metal parameters Parameter Description Values The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 or 2620:52:0:1307::3 . IPv4 or IPv6 address. The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Managed : Default. Set this parameter to Managed to fully manage the provisioning network, including DHCP, TFTP, and so on. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you can use only virtual media based provisioning on Day 2. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled, you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed or Disabled . The MAC address within the cluster where provisioning services run. MAC address. The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. Valid CIDR, for example 10.0.0.0/16 . The name of the network interface on nodes connected to the provisioning network. Use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. String. Defines the IP range for nodes on the provisioning network, for example 172.22.0.10,172.22.0.254 . IP address range. Configuration for bare metal hosts. Array of host configuration objects. The name of the host. String. The MAC address of the NIC used for provisioning the host. MAC address. Configuration for the host to connect to the baseboard management controller (BMC). Dictionary of BMC configuration objects. The username for the BMC. String. Password for the BMC. String. The URL for communicating with the host's BMC controller. The address configuration setting specifies the protocol. For example, redfish+http://10.10.10.1:8000/redfish/v1/Systems/1234 enables Redfish. For more information, see "BMC addressing" in the "Deploying installer-provisioned clusters on bare metal" section. URL. redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. Boolean. 8.1.5. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 8.5. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. 
A dictionary of vSphere configuration objects Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. An array of failure domain configuration objects. The name of the failure domain. String If you define multiple failure domains for your cluster, you must attach the tag to each vCenter data center. To define a region, use a tag from the openshift-region tag category. For a single vSphere data center environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere data center environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String The path to the vSphere compute cluster. String Lists and defines the data centers where OpenShift Container Platform virtual machines (VMs) operate. The list of data centers must match the list of data centers specified in the vcenters field. String The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<data_center_name>/vm/<folder_name>/<subfolder_name> . String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<data_center_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . String Specifies the absolute path to a pre-existing Red Hat Enterprise Linux CoreOS (RHCOS) image template or virtual machine. The installation program can use the image template or virtual machine to quickly install RHCOS on vSphere hosts. Consider using this parameter as an alternative to uploading an RHCOS image on vSphere hosts. This parameter is available for use only on installer-provisioned infrastructure. String Configures the connection details so that services can communicate with a vCenter server. An array of vCenter configuration objects. Lists and defines the data centers where OpenShift Container Platform virtual machines (VMs) operate. The list of data centers must match the list of data centers specified in the failureDomains field. String The password associated with the vSphere user. 
String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 8.1.6. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter: Table 8.6. Deprecated VMware vSphere cluster parameters Parameter Description Values The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the data center where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<data_center_name>/vm/<folder_name>/<subfolder_name> . The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<data_center_name>/host/<cluster_name>/Resources . String, for example, /<data_center_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String Additional resources BMC addressing Configuring regions and zones for a VMware vCenter Required vCenter account privileges 8.2. Available Agent configuration parameters The following tables specify the required and optional Agent configuration parameters that you can set as part of the Agent-based installation process. These values are specified in the agent-config.yaml file. Note These settings are used for installation only, and cannot be modified after installation. 8.2.1. Required configuration parameters Required Agent configuration parameters are described in the following table: Table 8.7. Required parameters Parameter Description Values The API version for the agent-config.yaml content. The current version is v1beta1 . The installation program might also support older API versions. String Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . The value entered in the agent-config.yaml file is ignored, and instead the value specified in the install-config.yaml file is used. When you do not provide metadata.name through either the install-config.yaml or agent-config.yaml files, for example when you use only ZTP manifests, the cluster name is set to agent-cluster . String of lowercase letters and hyphens ( - ), such as dev . 8.2.2. Optional configuration parameters Optional Agent configuration parameters are described in the following table: Table 8.8. 
Optional parameters Parameter Description Values The IP address of the node that performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . IPv4 or IPv6 address. When you use the Agent-based Installer to generate a minimal ISO image, this parameter specifies a URL where the rootfs image file can be retrieved from during cluster installation. This parameter is optional for booting minimal ISO images in connected environments. When you use the Agent-based Installer to generate an iPXE script, this parameter specifies the URL of the server to upload Preboot Execution Environment (PXE) assets to. For more information, see "Preparing PXE assets for OpenShift Container Platform". String. A list of Network Time Protocol (NTP) sources to be added to all cluster hosts, which are added to any NTP sources that are configured through other means. List of hostnames or IP addresses. Host configuration. An optional list of hosts. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. An array of host configuration objects. Hostname. Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods, although configuring a hostname through this parameter is optional. String. Provides a table of the name and MAC address mappings for the interfaces on the host. If a NetworkConfig section is provided in the agent-config.yaml file, this table must be included and the values must match the mappings provided in the NetworkConfig section. An array of host configuration objects. The name of an interface on the host. String. The MAC address of an interface on the host. A MAC address such as the following example: 00-B0-D0-63-C2-26 . Defines whether the host is a master or worker node. If no role is defined in the agent-config.yaml file, roles will be assigned at random during cluster installation. master or worker . Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. This is the device that the operating system is written on during installation. A dictionary of key-value pairs. For more information, see "Root device hints" in the "Setting up the environment for an OpenShift installation" page. The name of the device the RHCOS image is provisioned to. String. The host network definition. The configuration must match the Host Network Management API defined in the nmstate documentation . A dictionary of host network configuration objects. Defines whether the Agent-based Installer generates a full ISO or a minimal ISO image. When this parameter is set to True , the Agent-based Installer generates an ISO without a rootfs image file, and instead contains details about where to pull the rootfs file from. 
When you generate a minimal ISO, if you do not specify a rootfs URL through the bootArtifactsBaseURL parameter, the Agent-based Installer embeds a default URL that is accessible in environments with an internet connection. The default value is False . Boolean. Additional resources Preparing PXE assets for OpenShift Container Platform Root device hints | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"platform: baremetal: clusterProvisioningIP:",
"platform: baremetal: provisioningNetwork:",
"platform: baremetal: provisioningMACAddress:",
"platform: baremetal: provisioningNetworkCIDR:",
"platform: baremetal: provisioningNetworkInterface:",
"platform: baremetal: provisioningDHCPRange:",
"platform: baremetal: hosts:",
"platform: baremetal: hosts: name:",
"platform: baremetal: hosts: bootMACAddress:",
"platform: baremetal: hosts: bmc:",
"platform: baremetal: hosts: bmc: username:",
"platform: baremetal: hosts: bmc: password:",
"platform: baremetal: hosts: bmc: address:",
"platform: baremetal: hosts: bmc: disableCertificateVerification:",
"platform: vsphere:",
"platform: vsphere: failureDomains:",
"platform: vsphere: failureDomains: name:",
"platform: vsphere: failureDomains: region:",
"platform: vsphere: failureDomains: server:",
"platform: vsphere: failureDomains: zone:",
"platform: vsphere: failureDomains: topology: computeCluster:",
"platform: vsphere: failureDomains: topology: datacenter:",
"platform: vsphere: failureDomains: topology: datastore:",
"platform: vsphere: failureDomains: topology: folder:",
"platform: vsphere: failureDomains: topology: networks:",
"platform: vsphere: failureDomains: topology: resourcePool:",
"platform: vsphere: failureDomains: topology template:",
"platform: vsphere: vcenters:",
"platform: vsphere: vcenters: datacenters:",
"platform: vsphere: vcenters: password:",
"platform: vsphere: vcenters: port:",
"platform: vsphere: vcenters: server:",
"platform: vsphere: vcenters: user:",
"platform: vsphere: cluster:",
"platform: vsphere: datacenter:",
"platform: vsphere: defaultDatastore:",
"platform: vsphere: folder:",
"platform: vsphere: password:",
"platform: vsphere: resourcePool:",
"platform: vsphere: username:",
"platform: vsphere: vCenter:",
"apiVersion:",
"metadata:",
"metadata: name:",
"rendezvousIP:",
"bootArtifactsBaseURL:",
"additionalNTPSources:",
"hosts:",
"hosts: hostname:",
"hosts: interfaces:",
"hosts: interfaces: name:",
"hosts: interfaces: macAddress:",
"hosts: role:",
"hosts: rootDeviceHints:",
"hosts: rootDeviceHints: deviceName:",
"hosts: networkConfig:",
"minimalISO:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installation-config-parameters-agent |
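To see how the parameters in the preceding tables fit together, the following is a minimal sketch of an install-config.yaml file for an Agent-based bare-metal installation. It is an illustration only, not a definitive template: the domain, cluster name, network CIDRs, VIP addresses, pull secret, and SSH key are placeholders that you must replace with values from your environment, and the apiVIPs and ingressVIPs fields are assumptions for a multi-node bare-metal layout rather than entries taken from the tables above.

Example install-config.yaml file (illustrative values)

apiVersion: v1
baseDomain: example.com                # cluster DNS becomes <metadata.name>.<baseDomain>
metadata:
  name: test-cluster                   # lowercase letters, hyphens, and periods only
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.111.0/24             # match the CIDR that the preferred NIC resides in
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
platform:
  baremetal:
    apiVIPs:
    - 192.168.111.5                    # assumed value, environment specific
    ingressVIPs:
    - 192.168.111.10                   # assumed value, environment specific
pullSecret: '{"auths": ...}'           # paste the full pull secret from console.redhat.com
sshKey: 'ssh-ed25519 AAAA...'          # optional, but recommended for debugging access

The agent-config.yaml file is usually much shorter. A minimal sketch that sets a rendezvous IP and describes one host explicitly might look like the following; every address, MAC address, interface name, and device name is again a placeholder, and the hosts section as a whole is optional.

Example agent-config.yaml file (illustrative values)

apiVersion: v1beta1
metadata:
  name: test-cluster                   # ignored when install-config.yaml also provides a name
rendezvousIP: 192.168.111.20           # node that runs the bootstrap and assisted-service component
additionalNTPSources:
- ntp.example.com
hosts:
- hostname: master-0
  role: master
  interfaces:
  - name: eno1
    macAddress: 00:ef:44:21:e6:a5
  rootDeviceHints:
    deviceName: /dev/sda

Both files are consumed only at installation time and, as noted above, cannot be modified after the cluster is installed.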
Chapter 2. Understanding disconnected installation mirroring | Chapter 2. Understanding disconnected installation mirroring You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 2.1. Mirroring images for a disconnected installation through the Agent-based Installer You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin 2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror or oc mirror command. This is dependent on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command. USD oc adm release mirror Example output To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ . Example imageContentSourcePolicy.yaml file spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release 2.2.1. Configuring the Agent-based Installer to use mirrored images You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images. Procedure If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the yaml file. If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output. Paste the copied text into the imageContentSources field of the install-config.yaml file. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the yaml file. Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. 
Example install-config.yaml file additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- If you are using GitOps ZTP manifests: add the registries.conf and ca-bundle.crt files to the mirror path to add the mirror configuration in the agent ISO image. Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format. Example registries.conf file [[registry]] location = "registry.ci.openshift.org/ocp/release" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" [[registry]] location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" 2.3. Additional resources Installing an OpenShift Container Platform cluster with the Agent-based Installer | [
"oc adm release mirror",
"To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release",
"spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring |
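As a consolidated illustration of the fragments in this chapter, the mirror entries and the registry certificate typically sit side by side in the same install-config.yaml file. The mirror registry hostname, repository paths, and certificate body below are placeholders in the style of the earlier examples and must match your own mirror registry and the actual output of your mirroring command.

Example install-config.yaml fragment for a disconnected installation (illustrative values)

imageContentSources:
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: quay.io/openshift-release-dev/ocp-release
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <contents of the certificate file used for the mirror registry>
  -----END CERTIFICATE-----

If you generate GitOps ZTP manifests instead, the same information is carried by the registries.conf and ca-bundle.crt files in the mirror path, as shown earlier in this chapter.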
Chapter 7. Managing domains Identity Service (keystone) domains are additional namespaces that you can create in keystone. Use keystone domains to partition users, groups, and projects. You can also configure these separate domains to authenticate users in different LDAP or Active Directory environments. For more information, see the Integrate with Identity Service guide. Note Identity Service includes a built-in domain called Default . It is recommended that you reserve this domain only for service accounts and that you create a separate domain for user accounts. 7.1. Viewing a list of domains You can view a list of domains with the openstack domain list command: 7.2. Creating a new domain You can create a new domain with the openstack domain create command: 7.3. Viewing the details of a domain You can view the details of a domain with the openstack domain show command: 7.4. Disabling a domain You can disable and enable domains according to your requirements. Procedure Disable a domain using the --disable option: Confirm that the domain has been disabled: Use the --enable option to re-enable the domain, if required: A short worked example that builds on these commands follows the command listing for this chapter. | [
"openstack domain list +----------------------------------+------------------+---------+--------------------+ | ID | Name | Enabled | Description | +----------------------------------+------------------+---------+--------------------+ | 3abefa6f32c14db9a9703bf5ce6863e1 | TestDomain | True | | | 69436408fdcb44ab9e111691f8e9216d | corp | True | | | a4f61a8feb8d4253b260054c6aa41adb | federated_domain | True | | | default | Default | True | The default domain | +----------------------------------+------------------+---------+--------------------+",
"openstack domain create TestDomain +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+",
"openstack domain show TestDomain +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+",
"openstack domain set TestDomain --disable",
"openstack domain show TestDomain +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | False | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+",
"openstack domain set TestDomain --enable"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/users_and_identity_management_guide/assembly_domains |
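As a short worked example that builds on the TestDomain domain created in this chapter, the following commands create a project and a user inside that domain and grant the user a role on the project. The project name, user name, and role name are illustrative assumptions rather than values from this guide; in particular, the role available in your deployment might be member , _member_ , or another role defined by your administrator.

openstack project create --domain TestDomain TestProject
openstack user create --domain TestDomain --password-prompt TestUser
openstack role add --user TestUser --user-domain TestDomain --project TestProject --project-domain TestDomain member
openstack user list --domain TestDomain

Because the user and the project both belong to TestDomain, disabling the domain as shown in section 7.4 also prevents that user from authenticating, which is the main operational reason for partitioning accounts by domain.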
Chapter 12. Expanding the cluster | Chapter 12. Expanding the cluster You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API. 12.1. Prerequisites You must have access to an Assisted Installer cluster. You must install the OpenShift CLI ( oc ). Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. If you are adding a worker node to a cluster with multiple CPU architectures, you must ensure that the architecture is set to multi . If you are adding arm64 , IBM Power , or IBM zSystems compute nodes to an existing x86_64 cluster, use a platform that supports a mixed architecture. For details, see Installing a mixed architecture cluster Additional resources Installing with the Assisted Installer API Installing with the Assisted Installer UI Adding hosts with the Assisted Installer API Adding hosts with the Assisted Installer UI 12.2. Checking for multiple architectures When adding a node to a cluster with multiple architectures, ensure that the architecture setting is set to multi . Procedure Log in to the cluster using the CLI. Check the architecture setting: USD oc adm release info -o json | jq .metadata.metadata Ensure that the architecture setting is set to 'multi'. { "release.openshift.io/architecture": "multi" } 12.3. Adding hosts with the UI You can add hosts to clusters that were created using the Assisted Installer . Important Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up. Procedure Log in to OpenShift Cluster Manager and click the cluster that you want to expand. Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed. Optional: Modify ignition files as needed. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. Select the host role. It can be either a worker or a control plane host. Start the installation. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation. When the host is successfully installed, it is listed as a host in the cluster web console. Important New hosts will be encrypted using the same method as the original cluster. 12.4. Adding hosts with the API You can add hosts to clusters using the Assisted Installer REST API. Prerequisites Install the OpenShift Cluster Manager CLI ( ocm ). Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you want to expand. Procedure Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only. Set the USDAPI_URL variable by running the following command: USD export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the cluster by running the following commands: Set the USDCLUSTER_ID variable. 
Log in to the cluster and run the following command: USD export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDCLUSTER_ID" '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": USDCLUSTER_ID, "name": "<openshift_cluster_name>" 2 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the host can reach. For example, api.compute-1.example.com . 2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster host by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12 Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed. 
Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new host, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{API_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{API_TOKEN}" As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new host was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0 12.5. Installing a mixed-architecture cluster From OpenShift Container Platform version 4.12.0 and later, a cluster with an x86_64 control plane can support mixed-architecture worker nodes of two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads. From version 4.12.0, you can add arm64 worker nodes to an existing OpenShift cluster with an x86_64 control plane. From version 4.14.0, you can add IBM Power or IBM zSystems worker nodes to an existing x86_64 control plane. The main steps of the installation are as follows: Create and register a multi-architecture cluster. Create an x86_64 infrastructure environment, download the ISO for x86_64 , and add the control plane. The control plane must have the x86_64 architecture. Create an arm64 , IBM Power or IBM zSystems infrastructure environment, download the ISO for arm64 , IBM Power or IBM zSystems , and add the worker nodes. These steps are detailed in the procedure below. 
Supported platforms The table below lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing. OpenShift Container Platform version Supported platforms Day 1 control plane architecture Day 2 node architecture 4.12.0 Microsoft Azure (TP) x86_64 arm64 4.13.0 Microsoft Azure Amazon Web Services Bare Metal (TP) x86_64 x86_64 x86_64 arm64 arm64 arm64 4.14.0 Microsoft Azure Amazon Web Services Bare Metal Google Cloud Platform IBM(R) Power(R) IBM Z(R) x86_64 x86_64 x86_64 x86_64 x86_64 x86_64 arm64 arm64 arm64 arm64 ppc64le s390x Important Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Main steps Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section. When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "<version-number>-multi", 1 "cpu_architecture" : "multi" 2 "high_availability_mode": "full" 3 "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Note 1 Use the multi- option for the OpenShift version number; for example, "4.12-multi" . 2 Set the CPU architecture` to "multi" . 3 Use the full value to indicate Multi-Node OpenShift. When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64 : USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "x86_64" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' When you reach the "Adding hosts" step of the installation, set host_role to master : Note For more information, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"master" } ' | jq Download the discovery image for the x86_64 architecture. Boot the x86_64 architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power), s390x (for IBM Z), or arm64 . 
For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.12", "cpu_architecture" : "arm64" "high_availability_mode": "full" "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Repeat the "Adding hosts" step of the installation. This time, set host_role to worker : Note For more details, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Download the discovery image for the arm64 , ppc64le or s390x architecture. Boot the architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Verification View the arm64 , ppc64le or s390x worker nodes in the cluster by running the following command: USD oc get nodes -o wide 12.6. Installing a primary control plane node on a healthy cluster This procedure describes how to install a primary control plane node on a healthy OpenShift Container Platform cluster. If the cluster is unhealthy, additional operations are required before they can be managed. See Additional Resources for more information. Prerequisites You are using OpenShift Container Platform 4.11 or newer with the correct etcd-operator version. You have installed a healthy cluster with a minimum of three nodes. You have assigned role: master to a single node. Procedure Review and approve CSRs Review the CertificateSigningRequests (CSRs): USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending Approve all pending CSRs: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Important You must approve the CSRs to complete the installation. Confirm the primary node is in Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 4h42m v1.24.0+3882f8f worker-1 Ready worker 4h29m v1.24.0+3882f8f master-2 Ready master 4h43m v1.24.0+3882f8f master-3 Ready master 4h27m v1.24.0+3882f8f worker-4 Ready worker 4h30m v1.24.0+3882f8f master-5 Ready master 105s v1.24.0+3882f8f Note The etcd-operator requires a Machine Custom Resources (CRs) referencing the new node when the cluster runs with a functional Machine API. 
Link the Machine CR with BareMetalHost and Node : Create the BareMetalHost CR with a unique .metadata.name value": apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: custom-master3 namespace: openshift-machine-api annotations: spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api USD oc create -f <filename> Apply the BareMetalHost CR: USD oc apply -f <filename> Create the Machine CR using the unique .machine.name value: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/custom-master3 finalizers: - machine.machine.openshift.io generation: 3 labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: custom-master3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed USD oc create -f <filename> Apply the Machine CR: USD oc apply -f <filename> Link BareMetalHost , Machine , and Node using the link-machine-and-node.sh script: #!/bin/bash # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object and Node object. This is needed # in order to have IP address of the Node present in the status of the Machine. set -x set -e machine="USD1" node="USD2" if [ -z "USDmachine" -o -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo "\nTimed out waiting for USDname" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. 
| .type == "Hostname") | .address' | sed 's/"//g') ipaddr=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "InternalIP") | .address' | sed 's/"//g') host_patch=' { "status": { "hardware": { "hostname": "'USD{hostname}'", "nics": [ { "ip": "'USD{ipaddr}'", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" } ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } ' echo "PATCHING HOST" echo "USD{host_patch}" | jq . curl -s \ -X PATCH \ USD{HOST_PROXY_API_PATH}/USD{host}/status \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" USD bash link-machine-and-node.sh custom-master3 worker-5 Confirm etcd members: USD oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm the etcd-operator configuration applies to all nodes: USD oc get clusteroperator etcd Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m Confirm etcd-operator health: USD oc rsh -n openshift-etcd etcd-worker-0 etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms Confirm node health: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 6h20m v1.24.0+3882f8f worker-1 Ready worker 6h7m v1.24.0+3882f8f master-2 Ready master 6h20m v1.24.0+3882f8f master-3 Ready master 6h4m v1.24.0+3882f8f worker-4 Ready worker 6h7m v1.24.0+3882f8f master-5 Ready master 99m v1.24.0+3882f8f Confirm the ClusterOperators health: USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 
True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m Confirm the ClusterVersion : USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5 Remove the old control plane node: Delete the BareMetalHost CR: USD oc delete bmh -n openshift-machine-api custom-master3 Confirm the Machine is unhealthy: USD oc get machine -A Example output NAMESPACE NAME PHASE AGE openshift-machine-api custom-master3 Running 14h openshift-machine-api test-day2-1-6qv96-master-0 Failed 20h openshift-machine-api test-day2-1-6qv96-master-1 Running 20h openshift-machine-api test-day2-1-6qv96-master-2 Running 20h openshift-machine-api test-day2-1-6qv96-worker-0-8w7vr Running 19h openshift-machine-api test-day2-1-6qv96-worker-0-rxddj Running 19h Delete the Machine CR: USD oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-0 machine.machine.openshift.io "test-day2-1-6qv96-master-0" deleted Confirm removal of the Node CR: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 19h v1.24.0+3882f8f master-2 Ready master 20h v1.24.0+3882f8f master-3 Ready master 19h v1.24.0+3882f8f worker-4 Ready worker 19h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f Check etcd-operator logs to confirm status of the etcd cluster: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource Remove the physical machine to allow etcd-operator to reconcile the cluster members: USD oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table; etcdctl endpoint health Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms Additional resources Installing a primary control plane node on an unhealthy cluster 12.7. Installing a primary control plane node on an unhealthy cluster This procedure describes how to install a primary control plane node on an unhealthy OpenShift Container Platform cluster. Prerequisites You are using OpenShift Container Platform 4.11 or newer with the correct etcd-operator version. You have installed a healthy cluster with a minimum of two nodes. You have created the Day 2 control plane. 
You have assigned role: master to a single node. Procedure Confirm initial state of the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 20h v1.24.0+3882f8f master-2 NotReady master 20h v1.24.0+3882f8f master-3 Ready master 20h v1.24.0+3882f8f worker-4 Ready worker 20h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f Confirm the etcd-operator detects the cluster as unhealthy: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy Confirm the etcdctl members: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl reports an unhealthy member of the cluster: USD etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy control plane by deleting the Machine Custom Resource: USD oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2 Note The Machine and Node Custom Resources (CRs) will not be deleted if the unhealthy cluster cannot run successfully. 
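A later step removes the unhealthy etcd member by its member ID. If you prefer to capture that ID in a variable instead of copying it from the table output, the following sketch is one way to do so. It assumes the unhealthy member is named worker-2, as in the example output above, and that etcdctl prints its default comma-separated (simple) output; adjust the pod name and member name for your environment.

$ UNHEALTHY_NAME=worker-2
$ MEMBER_ID=$(oc rsh -n openshift-etcd etcd-worker-3 \
    etcdctl member list | awk -F', ' -v n="$UNHEALTHY_NAME" '$3 == n {print $1}')
$ echo "$MEMBER_ID"
# Pass the captured ID to the etcdctl member remove command shown later in this procedure.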
Confirm that etcd-operator has not removed the unhealthy machine: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f Example output I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}] Remove the unhealthy etcdctl member manually: USD oc rsh -n openshift-etcd etcd-worker-3\ etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl reports an unhealthy member of the cluster: USD etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy cluster by deleting the etcdctl member Custom Resource: USD etcdctl member remove 61e2a86084aafa62 Example output Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7 Confirm members of etcdctl by running the following command: USD etcdctl member list -w table Example output +----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false | | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+ Review and approve Certificate Signing Requests Review the Certificate Signing Requests (CSRs): USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending Approve all pending CSRs: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note You must approve the CSRs to complete the installation. Confirm ready status of the control plane node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 17h v1.24.0+3882f8f master-6 Ready master 2m52s v1.24.0+3882f8f Validate the Machine , Node and BareMetalHost Custom Resources. 
The etcd-operator requires Machine CRs to be present if the cluster is running with the functional Machine API. Machine CRs are displayed during the Running phase when present. Create Machine Custom Resource linked with BareMetalHost and Node . Make sure there is a Machine CR referencing the newly added node. Important Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors when running etcd-operator . Add BareMetalHost Custom Resource: USD oc create bmh -n openshift-machine-api custom-master3 Add Machine Custom Resource: USD oc create machine -n openshift-machine-api custom-master3 Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script: #!/bin/bash # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object and Node object. This is needed # in order to have IP address of the Node present in the status of the Machine. set -x set -e machine="USD1" node="USD2" if [ -z "USDmachine" -o -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo "\nTimed out waiting for USDname" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g') ipaddr=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "InternalIP") | .address' | sed 's/"//g') host_patch=' { "status": { "hardware": { "hostname": "'USD{hostname}'", "nics": [ { "ip": "'USD{ipaddr}'", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" } ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } ' echo "PATCHING HOST" echo "USD{host_patch}" | jq . 
curl -s \ -X PATCH \ USD{HOST_PROXY_API_PATH}/USD{host}/status \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" USD bash link-machine-and-node.sh custom-master3 worker-3 Confirm members of etcdctl by running the following command: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table Example output +---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false | | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false | | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+ Confirm the etcd operator has configured all nodes: USD oc get clusteroperator etcd Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h Confirm health of etcdctl : USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms Confirm the health of the nodes: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 18h v1.24.0+3882f8f master-6 Ready master 40m v1.24.0+3882f8f Confirm the health of the ClusterOperators : USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h Confirm the ClusterVersion : USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5 12.8. Additional resources Installing a primary control plane node on a healthy cluster Authenticating with the REST API | [
"oc adm release info -o json | jq .metadata.metadata",
"{ \"release.openshift.io/architecture\": \"multi\" }",
"export API_URL=<api_url> 1",
"export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDCLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDCLUSTER_ID, \"name\": \"<openshift_cluster_name>\" 2 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{API_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{API_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"<version-number>-multi\", 1 \"cpu_architecture\" : \"multi\" 2 \"high_availability_mode\": \"full\" 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"x86_64\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"master\" } ' | jq",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.12\", \"cpu_architecture\" : \"arm64\" \"high_availability_mode\": \"full\" \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq",
"oc get nodes -o wide",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 4h42m v1.24.0+3882f8f worker-1 Ready worker 4h29m v1.24.0+3882f8f master-2 Ready master 4h43m v1.24.0+3882f8f master-3 Ready master 4h27m v1.24.0+3882f8f worker-4 Ready worker 4h30m v1.24.0+3882f8f master-5 Ready master 105s v1.24.0+3882f8f",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: custom-master3 namespace: openshift-machine-api annotations: spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api",
"oc create -f <filename>",
"oc apply -f <filename>",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/custom-master3 finalizers: - machine.machine.openshift.io generation: 3 labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: custom-master3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed",
"oc create -f <filename>",
"oc apply -f <filename>",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"bash link-machine-and-node.sh custom-master3 worker-5",
"oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m",
"oc rsh -n openshift-etcd etcd-worker-0 etcdctl endpoint health",
"192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms",
"oc get Nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 6h20m v1.24.0+3882f8f worker-1 Ready worker 6h7m v1.24.0+3882f8f master-2 Ready master 6h20m v1.24.0+3882f8f master-3 Ready master 6h4m v1.24.0+3882f8f worker-4 Ready worker 6h7m v1.24.0+3882f8f master-5 Ready master 99m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5",
"oc delete bmh -n openshift-machine-api custom-master3",
"oc get machine -A",
"NAMESPACE NAME PHASE AGE openshift-machine-api custom-master3 Running 14h openshift-machine-api test-day2-1-6qv96-master-0 Failed 20h openshift-machine-api test-day2-1-6qv96-master-1 Running 20h openshift-machine-api test-day2-1-6qv96-master-2 Running 20h openshift-machine-api test-day2-1-6qv96-worker-0-8w7vr Running 19h openshift-machine-api test-day2-1-6qv96-worker-0-rxddj Running 19h",
"oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-0 machine.machine.openshift.io \"test-day2-1-6qv96-master-0\" deleted",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 19h v1.24.0+3882f8f master-2 Ready master 20h v1.24.0+3882f8f master-3 Ready master 19h v1.24.0+3882f8f worker-4 Ready worker 19h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf",
"E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource",
"oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table; etcdctl endpoint health",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 20h v1.24.0+3882f8f master-2 NotReady master 20h v1.24.0+3882f8f master-3 Ready master 20h v1.24.0+3882f8f worker-4 Ready worker 20h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf",
"E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T08:25:35.953Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000680380/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f",
"I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}]",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T10:31:07.227Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc0000d6e00/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"etcdctl member remove 61e2a86084aafa62",
"Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7",
"etcdctl member list -w table",
"+----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false | | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 17h v1.24.0+3882f8f master-6 Ready master 2m52s v1.24.0+3882f8f",
"oc create bmh -n openshift-machine-api custom-master3",
"oc create machine -n openshift-machine-api custom-master3",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"bash link-machine-and-node.sh custom-master3 worker-3",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table",
"+---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false | | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false | | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h",
"oc rsh -n openshift-etcd etcd-worker-3 etcdctl endpoint health",
"192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms",
"oc get Nodes",
"NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 18h v1.24.0+3882f8f master-6 Ready master 40m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5"
] | https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/expanding-the-cluster |
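After completing either control plane replacement procedure, you can wait for the cluster to settle instead of repeatedly polling oc get ClusterOperators. The following is a minimal sketch; it assumes your oc client supports condition values with oc wait (clients shipped with OpenShift Container Platform 4.11 and later do), and the timeout is only an example.

$ oc wait clusteroperator --all --for=condition=Available=True --timeout=15m
$ oc wait clusteroperator --all --for=condition=Degraded=False --timeout=15m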
Chapter 7. Review the State of an OpenStack Service | Chapter 7. Review the State of an OpenStack Service This example tests the monitoring of the openstack-ceilometer-central service. Confirm that the openstack-ceilometer-central service is running: Connect to the Uchiwa dashboard and confirm that a successful ceilometer check is present and running as defined in the ceilometer JSON file. | [
"docker ps -a | grep ceilometer"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/monitoring_tools_configuration_guide/sect-review-service |
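If you want a narrower view than filtering the full container list with grep, the docker CLI name filter provides one. This is an optional variant of the check above; the ceilometer name pattern is taken from this example and may not match every deployment.

$ docker ps --filter name=ceilometer --format '{{.Names}}: {{.Status}}'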
1.8. Uninstalling a Software Collection | 1.8. Uninstalling a Software Collection You can use conventional tools like Yum or PackageKit when uninstalling a Software Collection because Software Collections are fully compatible with the RPM Package Manager. For example, to uninstall all packages and subpackages that are part of a Software Collection named software_collection_1 , run the following command: yum remove software_collection_1\* You can also use the yum remove command to remove the scl utility. For detailed information on Yum and PackageKit usage, see the Red Hat Enterprise Linux 7 System Administrator's Guide , or the Red Hat Enterprise Linux 6 Deployment Guide . | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-uninstalling_a_software_collection |
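Before removing a Software Collection, you may want to review exactly which installed packages the wildcard matches. The following optional check uses standard yum globbing; software_collection_1 is the example collection name from this section.

$ yum list installed "software_collection_1*"

If the list looks correct, run the yum remove command shown above to uninstall the collection.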
B.4. bind | B.4. bind B.4.1. RHSA-2010:0975 - Important: bind security update Updated bind packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. CVE-2010-3613 It was discovered that named did not invalidate previously cached RRSIG records when adding an NCACHE record for the same entry to the cache. A remote attacker allowed to send recursive DNS queries to named could use this flaw to crash named. CVE-2010-3614 It was discovered that, in certain cases, named did not properly perform DNSSEC validation of an NS RRset for zones in the middle of a DNSKEY algorithm rollover. This flaw could cause the validator to incorrectly determine that the zone is insecure and not protected by DNSSEC. All BIND users are advised to upgrade to these updated packages, which contain a backported patch to resolve these issues. After installing the update, the BIND daemon (named) will be restarted automatically. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/bind |
Chapter 3. Installing the client-side tools | Chapter 3. Installing the client-side tools Before you deploy the overcloud, you need to determine the configuration settings to apply to each client. Copy the example environment files from the heat template collection and modify the files to suit your environment. 3.1. Setting centralized logging client parameters For more information, see Enabling centralized logging with Elasticsearch in the Logging, Monitoring, and Troubleshooting guide. 3.2. Setting monitoring client parameters The monitoring solution collects system information periodically and provides a mechanism to store and monitor the values in a variety of ways using a data collecting agent. Red Hat supports collectd as a collection agent. Collectd-sensubility is an extension of collectd and communicates with Sensu server side through RabbitMQ. You can use Service Telemetry Framework (STF) to store the data, and in turn, monitor systems, find performance bottlenecks, and predict future system load. For more information about Service Telemetry Framework, see the Service Telemetry Framework 1.3 guide. To configure collectd and collectd-sensubility, complete the following steps: Create config.yaml in your home directory, for example, /home/templates/custom , and configure the MetricsQdrConnectors parameter to point to STF server side: MetricsQdrConnectors: - host: qdr-normal-sa-telemetry.apps.remote.tld port: 443 role: inter-router sslProfile: sslProfile verifyHostname: false MetricsQdrSSLProfiles: - name: sslProfile In the config.yaml file, list the plug-ins you want to use under CollectdExtraPlugins . You can also provide parameters in the ExtraConfig section. By default, collectd comes with the cpu , df , disk , hugepages , interface , load , memory , processes , tcpconns , unixsock , and uptime plug-ins. You can add additional plug-ins using the CollectdExtraPlugins parameter. You can also provide additional configuration information for the CollectdExtraPlugins using the ExtraConfig option. For example, to enable the virt plug-in, and configure the connection string and the hostname format, use the following syntax: parameter_defaults: CollectdExtraPlugins: - disk - df - virt ExtraConfig: collectd::plugin::virt::connection: "qemu:///system" collectd::plugin::virt::hostname_format: "hostname uuid" Note Do not remove the unixsock plug-in. Removal results in the permanent marking of the collectd container as unhealthy. Optional: To collect metric and event data through AMQ Interconnect, add the line MetricsQdrExternalEndpoint: true to the config.yaml file: To enable collectd-sensubility, add the following environment configuration to the config.yaml file: parameter_defaults: CollectdEnableSensubility: true # Use this if there is restricted access for your checks by using the sudo command. # The rule will be created in /etc/sudoers.d for sensubility to enable it calling restricted commands via sensubility executor. CollectdSensubilityExecSudoRule: "collectd ALL = NOPASSWD: <some command or ALL for all commands>" # Connection URL to Sensu server side for reporting check results. CollectdSensubilityConnection: "amqp://sensu:sensu@<sensu server side IP>:5672//sensu" # Interval in seconds for sending keepalive messages to Sensu server side. CollectdSensubilityKeepaliveInterval: 20 # Path to temporary directory where the check scripts are created. CollectdSensubilityTmpDir: /var/tmp/collectd-sensubility-checks # Path to shell used for executing check scripts. 
CollectdSensubilityShellPath: /usr/bin/sh # To improve check execution rate use this parameter and value to change the number of goroutines spawned for executing check scripts. CollectdSensubilityWorkerCount: 2 # JSON-formatted definition of standalone checks to be scheduled on client side. If you need to schedule checks # on overcloud nodes instead of Sensu server, use this parameter. Configuration is compatible with Sensu check definition. # For more information, see https://docs.sensu.io/sensu-core/1.7/reference/checks/#check-definition-specification # There are some configuration options which sensubility ignores such as: extension, publish, cron, stdin, hooks. CollectdSensubilityChecks: example: command: "ping -c1 -W1 8.8.8.8" interval: 30 # The following parameters are used to modify standard, standalone checks for monitoring container health on overcloud nodes. # Do not modify these parameters. # CollectdEnableContainerHealthCheck: true # CollectdContainerHealthCheckCommand: <snip> # CollectdContainerHealthCheckInterval: 10 # The Sensu server side event handler to use for events created by the container health check. # CollectdContainerHealthCheckHandlers: # - handle-container-health-check # CollectdContainerHealthCheckOccurrences: 3 # CollectdContainerHealthCheckRefresh: 90 Deploy the overcloud. Include config.yaml , collectd-write-qdr.yaml , and one of the qdr-*.yaml files in your overcloud deploy command: Optional: To enable overcloud RabbitMQ monitoring, include the collectd-read-rabbitmq.yaml file in the overcloud deploy command. Additional resources For more information about the YAML files, see Section 3.5, "YAML files" . For more information about collectd plug-ins, see Section 3.4, "Collectd plug-in configurations" . For more information about Service Telemetry Framework, see the Service Telemetry Framework 1.3 guide. 3.3. Collecting data through AMQ Interconnect To subscribe to the available AMQ Interconnect addresses for metric and event data consumption, create an environment file to expose AMQ Interconnect for client connections, and deploy the overcloud. Note The Service Telemetry Operator simplifies the deployment of all data ingestion and data storage components for single cloud deployments. To share the data storage domain with multiple clouds, see Configuring multiple clouds in the Service Telemetry Framework 1.3 guide. Warning It is not possible to switch between QDR mesh mode and QDR edge mode, as used by the Service Telemetry Framework (STF). Additionally, it is not possible to use QDR mesh mode if you enable data collection for STF. Procedure Log on to the Red Hat OpenStack Platform undercloud as the stack user. Create a configuration file called data-collection.yaml in the /home/stack directory. 
To enable external endpoints, add the MetricsQdrExternalEndpoint: true parameter to the data-collection.yaml file: parameter_defaults: MetricsQdrExternalEndpoint: true To enable collectd and AMQ Interconnect, add the following files to your Red Hat OpenStack Platform director deployment: the data-collection.yaml environment file the qdr-form-controller-mesh.yaml file that enables the client-side AMQ Interconnect to connect to the external endpoints openstack overcloud deploy <other arguments> --templates /usr/share/openstack-tripleo-heat-templates \ --environment-file <...other-environment-files...> \ --environment-file /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml \ --environment-file /home/stack/data-collection.yaml Optional: To collect Ceilometer and collectd events, include the ceilometer-write-qdr.yaml and collectd-write-qdr.yaml files in your overcloud deploy command. Deploy the overcloud. Additional resources For more information about the YAML files, see Section 3.5, "YAML files" . 3.4. Collectd plug-in configurations Red Hat OpenStack Platform director offers many configuration possibilities. You can configure multiple collectd plug-ins to suit your environment. Each documented plug-in has a description and example configuration. Some plug-ins have a table of metrics that you can query from Grafana or Prometheus, and a list of options you can configure, if available. Additional resources To view a complete list of collectd plugin options, see collectd plugins in the Service Telemetry Framework guide. 3.5. YAML files You can include the following YAML files in your overcloud deploy command when you configure collectd: collectd-read-rabbitmq.yaml : Enables and configures python-collectd-rabbitmq to monitor the overcloud RabbitMQ instance. collectd-write-qdr.yaml : Enables collectd to send telemetry and notification data through AMQ Interconnect. qdr-edge-only.yaml : Enables deployment of AMQ Interconnect. Each overcloud node has one local qdrouterd service running and operating in edge mode. For example, sending received data straight to the defined MetricsQdrConnectors . qdr-form-controller-mesh.yaml : Enables deployment of AMQ Interconnect. Each overcloud node has one local qdrouterd service forming a mesh topology. For example, AMQ Interconnect routers on controllers operate in interior router mode, with connections to the defined MetricsQdrConnectors , and AMQ Interconnect routers on other node types connect in edge mode to the interior routers running on the controllers. Additional resources For more information about configuring collectd, see Section 3.2, "Setting monitoring client parameters" . | [
"MetricsQdrConnectors: - host: qdr-normal-sa-telemetry.apps.remote.tld port: 443 role: inter-router sslProfile: sslProfile verifyHostname: false MetricsQdrSSLProfiles: - name: sslProfile",
"parameter_defaults: CollectdExtraPlugins: - disk - df - virt ExtraConfig: collectd::plugin::virt::connection: \"qemu:///system\" collectd::plugin::virt::hostname_format: \"hostname uuid\"",
"parameter_defaults: MetricsQdrExternalEndpoint: true",
"parameter_defaults: CollectdEnableSensubility: true # Use this if there is restricted access for your checks by using the sudo command. # The rule will be created in /etc/sudoers.d for sensubility to enable it calling restricted commands via sensubility executor. CollectdSensubilityExecSudoRule: \"collectd ALL = NOPASSWD: <some command or ALL for all commands>\" # Connection URL to Sensu server side for reporting check results. CollectdSensubilityConnection: \"amqp://sensu:sensu@<sensu server side IP>:5672//sensu\" # Interval in seconds for sending keepalive messages to Sensu server side. CollectdSensubilityKeepaliveInterval: 20 # Path to temporary directory where the check scripts are created. CollectdSensubilityTmpDir: /var/tmp/collectd-sensubility-checks # Path to shell used for executing check scripts. CollectdSensubilityShellPath: /usr/bin/sh # To improve check execution rate use this parameter and value to change the number of goroutines spawned for executing check scripts. CollectdSensubilityWorkerCount: 2 # JSON-formatted definition of standalone checks to be scheduled on client side. If you need to schedule checks # on overcloud nodes instead of Sensu server, use this parameter. Configuration is compatible with Sensu check definition. # For more information, see https://docs.sensu.io/sensu-core/1.7/reference/checks/#check-definition-specification # There are some configuration options which sensubility ignores such as: extension, publish, cron, stdin, hooks. CollectdSensubilityChecks: example: command: \"ping -c1 -W1 8.8.8.8\" interval: 30 # The following parameters are used to modify standard, standalone checks for monitoring container health on overcloud nodes. # Do not modify these parameters. # CollectdEnableContainerHealthCheck: true # CollectdContainerHealthCheckCommand: <snip> # CollectdContainerHealthCheckInterval: 10 # The Sensu server side event handler to use for events created by the container health check. # CollectdContainerHealthCheckHandlers: # - handle-container-health-check # CollectdContainerHealthCheckOccurrences: 3 # CollectdContainerHealthCheckRefresh: 90",
"openstack overcloud deploy -e /home/templates/custom/config.yaml -e tripleo-heat-templates/environments/metrics/collectd-write-qdr.yaml -e tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml",
"parameter_defaults: MetricsQdrExternalEndpoint: true",
"openstack overcloud deploy <other arguments> --templates /usr/share/openstack-tripleo-heat-templates --environment-file <...other-environment-files...> --environment-file /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml --environment-file /home/stack/data-collection.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/monitoring_tools_configuration_guide/sect-client-tools |
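Section 3.2 mentions including collectd-read-rabbitmq.yaml in the overcloud deploy command but does not show a combined example. The following sketch is modeled on the deploy command above and assumes the file is shipped under environments/metrics/ in the default tripleo-heat-templates location; adjust the paths and environment files to your deployment:
openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
  -e /home/templates/custom/config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/collectd-write-qdr.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-form-controller-mesh.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/collectd-read-rabbitmq.yaml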
Storage | Storage OpenShift Container Platform 4.9 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/storage/index |
Chapter 2. Preparing your OpenShift cluster | Chapter 2. Preparing your OpenShift cluster This chapter explains how to install Red Hat Integration - Camel K and OpenShift Serverless on OpenShift, and how to install the required Camel K and OpenShift Serverless command-line client tools in your development environment. Section 2.1, "Installing Camel K" Section 2.2, "Installing OpenShift Serverless" Section 2.3, "Configuring Maven repository for Camel K" Section 2.4, "Camel K offline" 2.1. Installing Camel K You can install the Red Hat Integration - Camel K Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. After you install the Camel K Operator, you can install the Camel K CLI tool for command line access to all Camel K features. Prerequisites You have access to an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install CLI tools on your local system. Note You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. For details on how to install the OpenShift CLI, see Installing the OpenShift CLI . Procedure In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges. Create a new OpenShift project: In the left navigation menu, click Home > Project > Create Project . Enter a project name, for example, my-camel-k-project , and then click Create . In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, type Camel K and then click the Red Hat Integration - Camel K Operator card. Read the information about the operator and then click Install . The Operator installation page opens. Select the following subscription settings: Update Channel > latest Choose one of the following two options: Installation Mode > A specific namespace on the cluster > my-camel-k-project Installation Mode > All namespaces on the cluster (default) > Openshift operator Note Approval Strategy > Automatic Note The Installation mode > All namespaces on the cluster and Approval Strategy > Manual settings are also available if required by your environment. Click Install , and wait a few moments until the Camel K Operator is ready for use. Download and install the Camel K CLI tool: From the Help menu (?) at the top of the OpenShift web console, select Command line tools . Scroll down to the kamel - Red Hat Integration - Camel K - Command Line Interface section. Click the link to download the binary for your local operating system (Linux, Mac, Windows). Unzip and install the CLI in your system path. To verify that you can access the Camel K CLI, open a command window and then type the following: kamel --help This command shows information about Camel K CLI commands. Note If you uninstall the Camel K operator from OperatorHub using OLM, the CRDs are not removed. To shift back to a Camel K operator, you must remove the CRDs manually by using the following command. oc get crd -l app=camel-k -o name | xargs oc delete Next step (optional) Specifying Camel K resource limits 2.1.1. Consistent integration platform settings You can create namespace local Integration Platform resources to overwrite settings used in the operator.
These namespace local platform settings must be derived from the Integration Platform being used by the operator by default. That is, only explicitly specified settings overwrite the platform defaults used in the operator. Therefore, you must use a consistent platform settings hierarchy where the global operator platform settings always represent the basis for user specified platform settings. In the case of the global Camel K operator, if the IntegrationPlatform specifies a non-default spec.build.buildStrategy, this value is also propagated to namespaced Camel K operators installed thereafter. The default value for buildStrategy is routine. $ oc get itp camel-k -o yaml -n openshift-operators apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: labels: app: camel-k name: camel-k namespace: openshift-operators spec: build: buildStrategy: pod The buildStrategy parameter of the global operator IntegrationPlatform can be edited in one of the following ways: From Dashboard Administrator view: Operators Installed operators in namespace openshift-operators (that is, globally installed operators), select Red Hat Integration - Camel K Integration Platform YAML Now add or edit (if already present) spec.build.buildStrategy: pod Click Save Using the following command. Any namespaced Camel K operators installed subsequently inherit the settings from the global IntegrationPlatform. oc patch itp/camel-k -p '{"spec":{"build":{"buildStrategy": "pod"}}}' --type merge -n openshift-operators 2.1.2. Specifying Camel K resource limits When you install Camel K, the OpenShift pod for Camel K does not have any limits set for CPU and memory (RAM) resources. If you want to define resource limits for Camel K, you must edit the Camel K subscription resource that was created during the installation process. Prerequisite You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed as described in Installing Camel K . You know the resource limits that you want to apply to the Camel K subscription. For more information about resource limits, see the following documentation: Setting deployment resources in the OpenShift documentation. Managing Resources for Containers in the Kubernetes documentation. Procedure Log in to the OpenShift Web console. Select Operators > Installed Operators > Operator Details > Subscription . Select Actions > Edit Subscription . The file for the subscription opens in the YAML editor. Under the spec section, add a config.resources section and provide values for memory and cpu as shown in the following example: Save your changes. OpenShift updates the subscription and applies the resource limits that you specified. Important We recommend that you install the Camel K Operator through global installation only. 2.2. Installing OpenShift Serverless You can install the OpenShift Serverless Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. The OpenShift Serverless Operator supports both Knative Serving and Knative Eventing features. For more details, see installing OpenShift Serverless Operator . Prerequisites You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line.
For details on how to install the OpenShift CLI, see Installing the OpenShift CLI . Procedure In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges. In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, enter Serverless to find the OpenShift Serverless Operator . Read the information about the Operator and then click Install to display the Operator subscription page. Select the default subscription settings: Update Channel > Select the channel that matches your OpenShift version, for example, 4.16 Installation Mode > All namespaces on the cluster Approval Strategy > Automatic Note The Approval Strategy > Manual setting is also available if required by your environment. Click Install , and wait a few moments until the Operator is ready for use. Install the required Knative components using the steps in the OpenShift documentation: Installing Knative Serving Installing Knative Eventing (Optional) Download and install the OpenShift Serverless CLI tool: From the Help menu (?) at the top of the OpenShift web console, select Command line tools . Scroll down to the kn - OpenShift Serverless - Command Line Interface section. Click the link to download the binary for your local operating system (Linux, Mac, Windows). Unzip and install the CLI in your system path. To verify that you can access the kn CLI, open a command window and then type the following: kn --help This command shows information about OpenShift Serverless CLI commands. For more details, see the OpenShift Serverless CLI documentation . Additional resources Installing OpenShift Serverless in the OpenShift documentation 2.3. Configuring Maven repository for Camel K For the Camel K operator, you can provide the Maven settings in a ConfigMap or a Secret. Procedure To create a ConfigMap from a file, run the following command. The created ConfigMap can then be referenced in the IntegrationPlatform resource, from the spec.build.maven.settings field. Example Or you can edit the IntegrationPlatform resource directly to reference the ConfigMap that contains the Maven settings using the following command: Configuring CA certificates for remote Maven repositories You can provide the CA certificates, used by the Maven commands to connect to the remote Maven repositories, in a Secret. Procedure Create a Secret from a file using the following command: Reference the created Secret in the IntegrationPlatform resource, from the spec.build.maven.caSecret field as shown below. 2.4. Camel K offline Camel K is naturally developed to fit in an "open world" cluster model. This means that the default installation assumes it can pull and push resources from the Internet. However, there can be certain domains or use cases where this is a limitation. The following are the details about how to set up Camel K in an offline (or disconnected, or air-gapped) cluster environment. Requirements Install Camel K 1.10.7 in a disconnected OpenShift 4.14+ cluster in OLM mode. Run integrations in the same cluster by using a maven repository manager to serve the maven artifacts. Note Out of scope Supporting the installation and management of a maven repository manager. Mounting volumes with the maven artifacts. Using the --offline maven parameter. How to install or configure OCP in disconnected environments. Assumptions An existing OCP 4.14+ cluster configured in a disconnected environment . An existing container image registry. An existing maven repository manager. Execute the command steps on a Linux machine.
Prerequisites You are familiar with Camel K network architecture . Let us see the diagram here. We can identify those components which require access to the Internet and treat them separately: the image registry and the maven builds. 2.4.1. Container images registry The container registry is the component in charge of hosting the containers which are built by the operator and are used by the cluster to run the Camel applications. This component can be provided out of the box by the cluster, or can be operated by you (see the guide on how to run your own registry ). As we are in a disconnected environment, we assume this component to be accessible by the cluster (through an IP or URL). However, the cluster must use the Camel K container image in order for Camel K to be installed. You must ensure that the cluster registry has preloaded the Camel K container image, which should be similar to registry.redhat.io/integration/camel-k-rhel8-operator-bundle:1.10.7 . Note Red Hat container images are listed in the ecosystem catalog ; see the Get this image tab, where you can find the container image digest address in the Manifest List Digest field. OpenShift provides documentation to mirror container images . When mirroring the container images, you must ensure that you include the following container images, which are required by Camel K during its operations. Note that in a disconnected cluster you must use the digest URLs and not the tags. For container images provided by Red Hat, visit the Red Hat ecosystem catalog , find the container image and copy the digest URL of the container image. registry.redhat.io/integration/camel-k-rhel8-operator:1.10.7 registry.redhat.io/integration/camel-k-rhel8-operator-bundle:1.10.7 registry.redhat.io/quarkus/mandrel-23-rhel8:23.0 registry.access.redhat.com/ubi8/openjdk-11:1.20 An example of a digest URL of Camel K 1.10.7: registry.redhat.io/integration/camel-k-rhel8-operator-bundle@sha256:a043af04c9b816f0dfd5db64ba69bae192d73dd726df83aaf2002559a111a786 If all of the above is set, you should be able to pull from and push to the container registry from Camel K as well. 2.4.1.1. Creating your own CatalogSource The only supported way to install the Camel K operator is from OperatorHub. Since we use a disconnected environment, the container images from the redhat-operators CatalogSource cannot be downloaded. For that reason, we must create our own CatalogSource to install the Camel K operator from OperatorHub. Note Mirroring of the Operator Catalog is one way to mirror a CatalogSource. However, the following is a way to set up a custom CatalogSource with only the Camel K Bundle Metadata container image. Prerequisites Required tools Opm: https://github.com/operator-framework/operator-registry/releases Podman: https://podman.io/docs/installation Images accessible on your registry Camel K Operator and Camel K Operator Bundle with a sha256 tag. 2.4.1.1.1. Creating own Index Image Bundle (IIB) First, we need to create our own IIB image with only one operator inside. Note No operator upgrade is possible. Setup Create the IIB locally with only one bundle (use the sha tag): For example: opm index add --bundles registry.redhat.io/integration/camel-k-rhel8-operator-bundle@sha256:a043af04c9b816f0dfd5db64ba69bae192d73dd726df83aaf2002559a111a786 --tag my-custom-registry/mygroup/ck-iib:1.10.7 --mode=semver Check if the image was created. Push to your registry. For example: podman push my-custom-registry/mygroup/ck-iib:1.10.7 Additional resource Building an index of Operators using opm 2.4.1.1.2.
Creating own catalog source Create a YAML file, for example myCatalogSource.yaml , with the following content: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: {CATALOG_SOURCE_NAME} namespace: openshift-marketplace spec: displayName: {NAME_WHICH_WILL_BE_DISPLAYED_IN_OCP} publisher: grpc sourceType: grpc image: {YOUR_CONTAINER_STORAGE}/{USERNAME}/{IMAGE_NAME}:{SHA_TAG} For example: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: camel-k-source-1.10 namespace: openshift-marketplace spec: displayName: Camel K Offline publisher: grpc sourceType: grpc image: ec2-1-111-111-111.us-east-1.compute.amazonaws.com:5000/myrepository/ckiib@sha256:f67fc953b10729e49bf012329fbfb6352b91bbc7d4b1bcdf5779f6d31c397c5c Log in to your OpenShift cluster with the oc tool: oc login -u {USER} -p {PASS} https://api.{YOUR_CLUSTER_API_URL}:6443 Deploy the CatalogSource to the openshift-marketplace namespace: oc apply -f myCatalogSource.yaml -n openshift-marketplace Open your OpenShift web console, navigate to OperatorHub, select the Camel K Offline Source, then on the right side, select "Red Hat Integration - Camel K" and install it. 2.4.2. Maven build configuration Warning This guide is a best-effort development to help you create a maven offline bundle and run Camel K in offline mode. However, because of the high degree of flexibility in the installation topology, we cannot provide any level of support, only guidance on the possible configuration to adopt. Also, given the quantity of third-party dependencies downloaded during the procedure, we cannot ensure any protection against possible CVEs affecting these third-party libraries. Use at your own discretion. The procedure contains a script that resolves, downloads, and packages the entire set of Camel K Runtime artifacts and their transitive dependencies required by Maven to build and run the Camel integration. It requires that the Maven version from where you are running the script (likely your machine) is the same as the one used in the Camel K operator, that is, Maven 3.6.3. This ensures that the correct dependency versions are resolved. Important The operator expects the dependencies to be owned by user 1001. Ensure that the script is executed by such a user to avoid the maven build failing due to privilege faults. The output of the script is a tar.gz file containing the full tree of dependencies expected by Maven in the target build system (that is, the Camel K operator). Note It may not work in Quarkus native mode as the native build may require additional dependencies not available in the bundle. 2.4.2.1. Offliner script The script is available in the Camel K GitHub repository . You can run it like this: ./offline_dependencies.sh usage: ./script/offline_dependencies.sh -v <Camel K Runtime version> [optional parameters] -v <Camel K Runtime version> - Camel K Runtime Version -m </usr/share/bin/mvn> - Path to mvn command -r <http://my-repo.com> - URL address of the maven repository manager -d </var/tmp/offline-1.2> - Local directory to add the offline dependencies -s - Skip Certificate validation An example run: ./offline_dependencies.sh -v 1.15.6.redhat-00029 -r https://maven.repository.redhat.com/ga -d camel-k-offline -s How do you know the correct Camel K Runtime version? Look at the IntegrationPlatform/camel-k custom resource object in the OpenShift cluster and find the runtimeVersion field, for example: It may take about 30 minutes to resolve if there is a maven proxy.
All the packaged dependencies are available in a tar.gz file. It is a big file as it contains all the transitive dependencies required by all Camel components configured in the camel-k-catalog. 2.4.2.2. Upload dependencies to the Maven Proxy Manager The best practice we suggest is to always use a Maven Proxy. This is also the case for an offline installation. In that case, check your Maven Repository Manager documentation to verify how to upload dependencies using the file created in the chapter above. When you build the integration route, the build should use the maven proxy, thus requiring a custom maven settings.xml configured to mirror all maven repositories to go through the maven proxy. Get the URL of the maven proxy and set it in the url field, as in the example below: <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> <mirrors> <mirror> <id>local-central-mirror</id> <name>local-central-mirror</name> <mirrorOf>*</mirrorOf> <url>http://my-maven-proxy:8080/releases</url> </mirror> </mirrors> </settings> Create a ConfigMap from this maven settings.xml. Now you have to inform Camel K to use this settings.xml when building the integrations. If you have already installed Camel K, then you can patch the IntegrationPlatform/camel-k ; verify your environment for a custom name and namespace: kubectl patch itp/camel-k --type=merge -p '{"spec": {"build": {"maven": {"settings": {"configMapKeyRef": {"key": "settings.xml", "name": "local-maven-settings-offline"}}}}}}' Then you should be able to run the integration with kamel run my-integration.java and follow the camel-k-operator log. 2.4.2.3. Troubleshooting 2.4.2.3.1. Error downloading dependencies Check if the maven repository manager is reachable from the camel-k-operator log. Get into the camel-k-operator pod and use curl to download the maven artifact. 2.4.2.3.2. Dependency not found It may happen that a specific maven artifact is not found in the maven proxy manager, due to the offline script's inability to resolve and download the dependency, so you have to download that dependency and upload it to the maven proxy manager. | [
"You do not need to create a pull secret when installing Camel K from the OpenShift OperatorHub. The Camel K Operator automatically reuses the OpenShift cluster-level authentication to pull the Camel K image from `registry.redhat.io`.",
"If you do not choose among the above two options, the system by default chooses a global namespace on the cluster then leading to openshift operator.",
"oc get itp camel-k -o yaml -n openshift-operators apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: labels: app: camel-k name: camel-k namespace: openshift-operators spec: build: buildStrategy: pod",
"patch itp/camel-k -p '{\"spec\":{\"build\":{\"buildStrategy\": \"pod\"}}}' --type merge -n openshift-operators",
"spec: channel: default config: resources: limits: memory: 512Mi cpu: 500m requests: cpu: 200m memory: 128Mi",
"create configmap maven-settings --from-file=settings.xml",
"apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: name: camel-k spec: build: maven: settings: configMapKeyRef: key: settings.xml name: maven-settings",
"edit itp camel-k",
"create secret generic maven-ca-certs --from-file=ca.crt",
"apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: name: camel-k spec: build: maven: caSecret: key: tls.crt name: tls-secret",
"opm index add --bundles {CAMEL_K_OPERATOR_BUNDLE_WITH_SHA_TAG} --tag {YOUR_CONTAINER_STORAGE}/{USERNAME}/{IMAGE_NAME}:{TAG} --mode=semver",
"opm index add --bundles registry.redhat.io/integration/camel-k-rhel8-operator-bundle@sha256:a043af04c9b816f0dfd5db64ba69bae192d73dd726df83aaf2002559a111a786 --tag my-custom-registry/mygroup/ck-iib:1.10.7 --mode=semver",
"images",
"push {YOUR_CONTAINER_STORAGE}/{USERNAME}/{IMAGE_NAME}:{TAG}",
"push my-custom-registry/mygroup/ck-iib:1.10.7",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: {CATALOG_SOURCE_NAME} namespace: openshift-marketplace spec: displayName: {NAME_WHICH_WILL_BE_DISPLAYED_IN_OCP} publisher: grpc sourceType: grpc image: {YOUR_CONTAINER_STORAGE}/{USERNAME}/{IMAGE_NAME}:{SHA_TAG}}",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: camel-k-source-1.10 namespace: openshift-marketplace spec: displayName: Camel K Offline publisher: grpc sourceType: grpc image: ec2-1-111-111-111.us-east-1.compute.amazonaws.com:5000/myrepository/ckiib@sha256:f67fc953b10729e49bf012329fbfb6352b91bbc7d4b1bcdf5779f6d31c397c5c",
"Oc login -u {USER} -p {PASS} https://api.{YOUR_CLUSTER_API_URL}:6443",
"apply -f myCatalogSource.yaml -n openshift-marketplace",
"./offline_dependencies.sh usage: ./script/offline_dependencies.sh -v <Camel K Runtime version> [optional parameters] -v <Camel K Runtime version> - Camel K Runtime Version -m </usr/share/bin/mvn> - Path to mvn command -r <http://my-repo.com> - URL address of the maven repository manager -d </var/tmp/offline-1.2> - Local directory to add the offline dependencies -s - Skip Certificate validation",
"./offline_dependencies.sh -v 1.15.6.redhat-00029 -r https://maven.repository.redhat.com/ga -d camel-k-offline -s",
"get integrationplatform/camel-k -oyaml|grep runtimeVersion",
"<settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd\"> <mirrors> <mirror> <id>local-central-mirror</id> <name>local-central-mirror</name> <mirrorOf>*</mirrorOf> <url>http://my-maven-proxy:8080/releases</url> </mirror> </mirrors> </settings>",
"create configmap local-maven-settings-offline --from-file=settings.xml=maven-settings-offline.xml",
"patch itp/camel-k --type=merge -p '{\"spec\": {\"build\": {\"maven\": {\"settings\": {\"configMapKeyRef\": {\"key\": \"settings.xml\", \"name\": \"local-maven-settings-offline\"}}}}}}'",
"logs -f `kubectl get pod -l app=camel-k -oname`",
"-n openshift-operators exec -i -t `kubectl -n openshift-operators get pod -l app=camel-k -oname` -- bash",
"curl http://my-maven-proxy:8080/my-artifact.jar -o file"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/getting_started_with_camel_k/preparing-openshift-cluster-camel-k |
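For the "Dependency not found" case above, the following is a hedged sketch of recovering a single missing artifact on a machine with Internet access; the artifact coordinates and output directory are placeholders, and the upload step is left generic because every repository manager (Nexus, Artifactory, and others) has its own upload API or UI:
# Fetch the missing artifact and its POM into a clean local repository
mvn dependency:get -Dartifact=org.apache.camel:camel-core:3.14.2 -Dmaven.repo.local=./missing-deps
# Archive the result, transfer it to the disconnected environment, and upload it
# to your maven repository manager using that manager's documented procedure
tar -czf missing-deps.tar.gz ./missing-deps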
function::task_prio | function::task_prio Name function::task_prio - The priority value of the task. Synopsis Arguments task task_struct pointer. General Syntax task_prio:long(task:long) Description This function returns the priority value of the given task. | [
"function task_prio:long(task:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-prio |
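A short usage sketch, assuming SystemTap and the matching kernel debuginfo are installed; the syscall.open probe point and the output format are chosen only for illustration:
stap -e 'probe syscall.open { printf("%s prio=%d\n", execname(), task_prio(task_current())); exit() }'
This prints the priority value of the first task observed entering the open system call and then exits.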
Chapter 117. KafkaUserStatus schema reference | Chapter 117. KafkaUserStatus schema reference Used in: KafkaUser Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. username string Username. secret string The name of Secret where the credentials are stored. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkauserstatus-reference |
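A hedged example of reading these status properties with the OpenShift CLI; the resource name my-user and the namespace kafka are placeholders, and the second command assumes the operator has already reconciled the user and set a Ready condition:
oc get kafkauser my-user -n kafka -o jsonpath='{.status.username}{"\n"}{.status.secret}{"\n"}'
oc get kafkauser my-user -n kafka -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'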
Installing Red Hat Developer Hub on OpenShift Container Platform | Installing Red Hat Developer Hub on OpenShift Container Platform Red Hat Developer Hub 1.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_openshift_container_platform/index |
3.5. Post-installation Considerations for Clients | 3.5. Post-installation Considerations for Clients 3.5.1. Removing Pre-Identity Management Configuration The ipa-client-install script does not remove any LDAP and SSSD configuration from the /etc/openldap/ldap.conf and /etc/sssd/sssd.conf files. If you modified the configuration in these files before installing the client, the script adds the new client values, but comments them out. For example: To apply the new Identity Management configuration values: Open /etc/openldap/ldap.conf and /etc/sssd/sssd.conf . Delete the configuration. Uncomment the new Identity Management configuration. Server processes that rely on system-wide LDAP configuration might require a restart to apply the changes. Applications that use openldap libraries typically import the configuration when started. | [
"BASE dc=example,dc=com URI ldap://ldap.example.com #URI ldaps://server.example.com # modified by IPA #BASE dc=ipa,dc=example,dc=com # modified by IPA"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-post-install |
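A short sketch of the manual cleanup described above; the grep pattern relies on the # modified by IPA comments shown in the example, and the restart target is only an example — restart whichever of your services consume the system-wide LDAP or SSSD configuration:
grep -n 'modified by IPA' /etc/openldap/ldap.conf /etc/sssd/sssd.conf
vi /etc/openldap/ldap.conf    # delete the old values and uncomment the entries added by Identity Management
systemctl restart sssd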